The Changelog: Software Development, Open Source - Leading in the era of AI code intelligence (Interview)
Episode Date: February 28, 2024
This week Adam is joined by Quinn Slack, CEO of Sourcegraph, for a "2 years later" catch-up from his last appearance on Founders Talk. This conversation is a real glimpse into what it takes to be CEO of Sourcegraph in an era when code intelligence is shifting more and more into the AI realm, how they've been driving towards this for years, the subtle human leveling up we're all experiencing, the direction of Sourcegraph as a result — and Quinn also shares his order of operations when it comes to understanding the daily state of their growth.
Transcript
What's up, welcome back.
This is The Changelog.
I'm Adam Stachowiak, Editor-in-Chief here at Changelog.com, and today I'm sharing a
conversation I had with Quinn Slack, co-founder and CEO of Sourcegraph at the tail end of last year. This was just before the GA
of their coding AI assistant called Cody. This conversation is a real glimpse into what it takes
to be CEO of Sourcegraph in an era when code intelligence is shifting more and more into the
AI realm. How they've been driving towards this for years,
the subtle human leveling up we're all experiencing,
the direction of Sourcegraph as a result,
and Quinn also shares his order of operations
when it comes to understanding the daily state of their growth.
A massive thank you to our friends and our partners at Fly.io,
the home of changelog.com.
It's simple: launch apps near users.
They transform containers into micro VMs that run on their hardware in 30 plus regions on six continents.
Launch an app for free at fly.io.
What's up, friends? Before we get to the show, I want to share an awesome new announcement from our friends over at Crab Nebula.
Crab Nebula is the official partner of Tauri.
For the uninitiated, Tauri is a toolkit that helps developers make applications for the major desktop platforms using virtually any front-end framework in existence. The core is
built with Rust and the CLI leverages Node.js, making Tauri a genuinely polyglot approach
to creating and maintaining great apps. So building applications with Tauri has always been
an incredibly joyful experience. However, once the applications are built, distributing them
and rolling out updates has always been cumbersome. This is why we are thrilled to be part of this announcement from our friends at Crab Nebula on
their latest product, Crab Nebula Cloud. The problem really is the cost of distributing
applications, the security and the feedback and analytics. Just thinking about cost alone to
distribute new applications at scale, it can get very expensive when bundle sizes compound with the number of users, which further
compounds with frequency of application updates.
Always be shipping, right?
A 500 meg application distributed across 500 users with nightly updates leads to a total
of around 7.5 million megabytes.
That's 7.5 terabytes transferred in a single month. Now, based on popular cloud
pricing, this could easily lead to a bill in the ballpark of around $90,000. That's a lot of
dollars. More so, distributing updates requires complex cryptography to ensure that an update
is the original safe artifact for users to download, install, and execute. And then collecting
meaningful analytics is more challenging with desktop applications
compared to web-based services,
impacting the ability to make informed updates
and improvements.
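To sanity-check the transfer math above, here it is as a few lines of Python; the figures are the ones from the ad read, and the dollar cost is deliberately left out since it depends entirely on your provider's egress pricing:

    # The transfer math from the ad: bundle size x users x updates per month.
    app_size_mb = 500       # application bundle size
    users = 500             # installed user base
    updates_per_month = 30  # nightly updates

    total_mb = app_size_mb * users * updates_per_month
    print(f"{total_mb:,} MB = {total_mb / 1_000_000} TB per month")
    # -> 7,500,000 MB = 7.5 TB per month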
So at the heart of Crab Nebula Cloud
is a purpose-built CDN ready for global scale,
ensuring seamless integration with any CI/CD pipeline
and first-class support for GitHub Actions.
And security updates are a first-class citizen.
Leveraging the power of the Tauri updater,
Crab Nebula Cloud provides an out-of-the-box update server
that any application can call to check for signed updates
and, if the update is available,
immediately download and apply it in an instant over the air.
And, of course, Tauri is open source
and Crab Nebula is a company born out of open
source. And they're giving back to the open source community by giving steep discounts and subsidies
to open source projects built with Tauri. To learn more, get started and check out the docs,
go to crabnebula.dev slash cloud. That's Crab, C-R-A-B, Nebula, N-E-B-U-L-A dot dev slash cloud.
Once again, crabnebula.dev slash cloud.
So Quinn, it's good to see you. Good to have you back.
I want to really talk about the evolution of the platform because the last time we talked,
it was kind of like almost pre-intelligence. It was kind of almost still search.
And like just after that, you went buck wild and
had a bunch of stuff happening. And now obviously a whole new paradigm, which is artificial
intelligence, aka AI. But good to have you back. Good to see you. How you been?
Yeah, it's great to be back. I've been good. I think like everyone, it's been quite a whirlwind
over the last four years, over the last year with AI. And we've come a long
way. We talked two years ago and we talked a lot about code search. Was it two years ago?
Two years ago. Gosh. Time flies. So a lot changed in two years. I mean. Yeah, there's been about
10 years in the last two years through the pandemic and now AI. And we have grown a ton
as a company, our customer base and all that.
And yeah, two years ago, we were talking about code search.
And that's what we had built.
And we still have code search that looks across all the code and understands all the calls,
the definitions, the references, the code graph, and all those great things.
And we've got a lot of great customers on that.
You know, like a bunch of the FAANG companies
and four of the top 10 US banks and Uber and Databricks
and governments and Atlassian and Reddit and so on.
But it's not just code search anymore.
The world has changed for software development
in the last year so much.
What is it like being CEO of a company like that?
I mean, you're a founding CEO, so this isn't like, oh, you inherited this awesomeness. You built this awesomeness. Like
how does it feel to be where you are? Sometimes it is exciting and scary to realize that I as CEO
have to make some of these decisions to go and build a new product to change our direction.
And I feel so lucky that I have an amazing team and that people are on board and people are
bringing these ideas and need to change up. But it's definitely weird. I mean, it's one thing to
try a new side project that is dabbling with some of these new LLMs. It's another thing to shift a 200-person company to going and building that.
But as soon as you do that, just the progress that you see, it's so validating.
And then obviously hearing from users and customers, it's so validating.
So it's been, I think, a whirlwind for everybody.
When you make choices like this, do you have to go to...
I know you're funded, so you probably have a board, right?
So you have other people who help you guide this ship. So it's not, hey, I'm CEO and we just do whatever I want. You also have Beyang Liu, your founding
co-founder, CTO. Very keen on Beyang, I've known him for years.
I want to go backtrack a little bit, but I want to stay here for a minute
or two. When you make a choice to go from code search to code intelligence to now introducing Cody, your product
for code gen and code understanding as well, I mean, so much more. It's got a lot of potential.
When you make a choice like that, how do you do that? Do you have to go to the board? What's that
like? Give me an example of what it takes to sort of, you know to change the direction of the ship, so to speak.
Yeah.
If you go back to the very founding of Sourcegraph, we decided on a problem to solve, which is
big code.
It was way too damn hard to build software.
There's so much code.
There's so many devs.
There's so much complexity.
And back when we started Sourcegraph 10 years ago, we felt that, you know, medium-sized companies felt that now you start a brand new project and it brings in like 2,000 dependencies and you have to build on all this super complex platform stuff.
It's getting so much more complex.
So we agreed on a problem and we got our investors, we got our board, we got our customers, our users, our team members, all aligned around
solving that problem and not one particular way that we solve it. And if you go back to our plan,
it actually, in our very first seed funding deck, it talks about how first we want to
build the structured code graph, and then we want to do intelligent automation. That's IA. I think we
probably would have said AI, except back at the time, if you said AI, people thought that you
were completely crazy. Right. For real. Yeah. Yeah. You know, this is, it's unfolding. I won't
say exactly like we didn't have a crystal ball back then, but it's unfolding roughly as we
expected. And we knew that to do more automation in code, to take away that
grunt work from developers so they could focus on the actual problems and the stuff they love, that
you needed to have the computer have a deep understanding of the code base. It couldn't
just all be in devs' heads. So it was no big surprise to our board or our team that this
was something that we would do, that we would go and build
our code AI.
And it was also not some complete luck of the draw that we found ourselves in a really
good position to go and build the most powerful and accurate code AI.
So none of this is coincidental.
But when do we do that?
And I think if we had started to do that, say, in 2018, where there were plenty of other
attempts to do ML on code, I think that we would have failed because the fundamental
underlying technology of LLMs was just not good enough back then.
And the danger there is if we started to do it in 2018 and we failed, we might have, as
an organization, learned that stuff doesn't work. And then we would
have been behind the ball when it actually did start to work. So getting the timing right
was the tough part. And I actually think that we probably waited too long because we could have
brought something to market even sooner. But it's still so early in terms of adoption of code AI. And, you know,
even less than 0.5% of GitHub's users are using GitHub Copilot. So it's still very early. Most
devs are not using even the most basic code AI that exists. So it's still very early. But, you
know, getting the timing right and making a big shift starting back last December. That was
when we started to see Cody really start to do some interesting things. That felt early to a lot
of people on the team. And it took a lot of work to get the team on board and to champion what the
team was doing that was working and to shift some of the things that we could see were not going to
be as important going forward. What a shame though, right? That so few people are using
code AI related tooling.
I think it's like,
I'm not sure what exactly is happening,
because there's this idea that
it might replace me
and so therefore I just resist it.
I'm just assuming that's
probably some of the case for devs
out there because I've used it
and I think it's super
magical. And I'm not trying to generate all the things. I'm trying to move faster with, you know,
one more point of clarity, if not an infinite point of clarity, that can try something a thousand times
in five minutes for me, so I don't have to try it a thousand times in a week or two, you know, whatever
it might be. And that might be, you know,
hyperbole to some degree, you know, but it's pretty possible. I just wonder, like, why,
why are people not using this more frequently? Is it accessibility? Is the, you know, is the access not evenly distributed? What do you think is happening out there? What's the sentiment out
there of why more tooling like this isn't being used?
Well, I think this applies even to ChatGPT.
ChatGPT, it's amazing.
It changed the world.
It's mind-blowing.
It can do incredible things.
And yet, you ask the average person, how often are they using ChatGPT in their everyday life
or their everyday work week? And the answers I usually get are maybe one
or two times. You hear these stories of people that say, I ask it to write emails for me, and
what it writes is 10 times too long. And the technology is there, the promise is there.
But in terms of actually something that is so good and understands what someone is doing,
understands the code they need to write for developers, that's still not completely there
yet.
And at the same time, the promise is there.
So I really want to make sure that we as an industry, everyone building code AI, that
we level with developers out there with what exists today, what works well today,
what doesn't, what's coming in the future, and not lose credibility by overhyping this
whole space.
I think that's a huge risk.
And I actually look at self-driving cars.
10, 15 years ago, you started to hear about autonomous vehicles.
There was so much hype.
People thought, are humans even going to be driving in the year
2020? And now, you know, clearly we are. And some people are kind of jaded by all that hype
and they just dismiss the entire space. And yet in San Francisco here, there's two companies
where you can get a self-driving taxi. And that is amazing. That is mind-blowing.
The progress is real. It was a little slower than people thought if you were just reading the hype.
But I think that most of the industry experts would have said, yeah, this is about the rate
of progress that we'd expect. And so we don't want that kind of mistake to happen with code AI.
We want to be really clear that it's not replacing developers. Those tweets you see where it's like, oh, you fire all your junior developers,
you can replace them with whatever this AI tool someone is shilling. Those are completely false,
and those detract from the incredible progress that we're seeing every single day with Code AI
getting better and better. The autocomplete that Code AI can do is really powerful. I think that
could probably lead to a 20% boost in developer productivity, which is really meaningful.
But then having it write entire files, having it explain code, understand code,
we're working on that with Cody. And Cody does a pretty good job of that. It's really helpful.
And you see a lot of other work there. That is really valuable. And it doesn't need to be at
the point where it's Skynet for it to be changing the world. Yeah, for sure. Can we talk about some subtle human leveling up that's
practical for ChatGPT? I mean, I know it's not Cody. Do you mind riffing a little bit? So
last night, my wife and I, we were hanging up pictures of our beautiful children. You know,
we took pictures of them when they were less than one week old. And then we have pictures of them in the same kind of frame at their current ages, one seven and one three.
So it doesn't really matter about the age.
They're just not one week old anymore.
So you have this like sort of brand new version of them and then like current version to some degree.
And it's four pictures because we have two sons and we want to hang them on the wall.
And my wife was trying to
do the math, and we can obviously do math. It's not much. It's like an eight-foot-wide wall, and we want to
put them in a grid of four with even spacing, all that good stuff. I'm like, babe, we should ask. I'm
more versed in this than she is. Not so much that she doesn't use it often, she just
doesn't think to. And I think that might be the case of why it's not being more widely used.
Yeah. Is they don't think to use this. And I'm like, I want to use it.
It's a work calculator. I don't want to think about this problem myself. I don't want to do the math.
I could just tell it the problem, my space, my requirements, and it will just, it will tell
me, probably too much, but it'll give me a pretty accurate answer. I'm like, let's just try it. And
she's like, okay. And so I tell it, hey,
ChatGPT, I have this eight-foot-five-inch wall in width and I want to have these pictures laid
out in the grid. They're 26 inches squared, 26 wide, 26 tall. And I want to have them evenly
distributed on this wall in a four grid. It gave me the exact answer, told me exactly where to put
them at. We did it in five minutes rather than, like, doing the math and making a template and,
you know, writing all these things on the wall.
It was, like, so easy because it gave us the exact right answer.
That's cool.
That's awesome.
That to me is like the most uniquely subtle human way to level up.
And I think there's those kinds of problems in software that are being missed by developers every single day, chances to X their day.
One X, two X, five X, whatever it might be.
If I could do a task in five minutes, not because it does it for me, but it helps me think faster and get to the solution faster.
Then I want to do that versus doing it in 50 minutes or an hour or so.
What do you think about that?
Yeah.
So when you asked it that,
did it give you the exact right answer on the very first reply?
Yes.
Yes, it did.
That's awesome.
Yeah.
I've found a way to talk to it so that it does that.
And I don't know if it's a me thing,
but I get pretty consistently accurate answers.
Now, it also gave me all the theory in a way, too.
The combined width of this and that and two times this and whatever that.
I don't really care.
I just want to know the end, which says, so if you want six inches in between each picture frame, you should do this and that and this and that.
Like it gave me the ending.
Like just skip to the ending.
Just give me the good parts. Yeah.
But I'm willing to like just wait literally maybe 10 seconds extra. That's cool with me.
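For the curious, the layout math here is simple enough to reproduce. A minimal sketch, assuming a two-by-two grid (four 26-inch frames would be 104 inches, wider than the 8-foot-5-inch, i.e. 101-inch, wall) and the six-inch gap mentioned above:

    # A sketch of the picture-grid math (assumption: 2x2 grid on a 101-inch wall).
    def row_centers(wall_in, frame_in, gap_in):
        group = 2 * frame_in + gap_in            # two frames plus the gap between them
        margin = (wall_in - group) / 2           # equal space left at each end
        left = margin + frame_in / 2             # center of the left frame
        right = wall_in - margin - frame_in / 2  # center of the right frame
        return left, right

    print(row_centers(101, 26, 6))  # -> (34.5, 66.5) inches from the left edge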
Yeah. Well, that's incredible. And I think, yeah, there's so
many things like that in your everyday life where you could use it and it probably won't get a
hundred percent correct, but I mean, what an amazing time to be living where that new technology
is suddenly possible and it's not trickled down to all the
things that it can change. And when you think about that underlying capability, this kind of
brain that can come up with an answer to that question, how do we make it so that it can do
more code? The way that a lot of people think about code AI is autocomplete the next line or
few lines. And that's a really good problem for AI because
just like with your picture framing example, the human is in the loop. The human is reviewing a
digestible, reviewable amount of code, of AI suggested code. And so you're never having to
do things that the human cannot look at. If the AI told you, hey, if you want to put pictures up on the wall,
first crack some eggs and put them on the stove,
you'd be like, that makes no sense.
And you would have caught it.
So that human in the loop is really important.
The next step, though, and how we get AI beyond just a 20% productivity enhancement
is how do we have the AI check its own work? And I don't mean the LLM. I
mean, how do we have an AI system? One very simple example is right now, any AI autocomplete tool
will sometimes suggest code that does not type check or does not compile. Why is that? That
should no longer be the case. That's one of the things that we're working on with Cody. So don't
even suggest code that won't type check.
How can you bring in context about the available types in the type system so that it will produce
a better suggestion and then filter any suggestions that would not type check?
And in some cases, then go back to the LLM, invoke it again with additional constraints.
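A minimal sketch of that generate-filter-retry loop, where llm and passes_type_check are hypothetical stand-ins rather than Sourcegraph APIs:

    # Hypothetical sketch: only surface completions that type-check, and
    # re-invoke the LLM with the failure folded in as an extra constraint.
    def suggest_completion(prompt, llm, passes_type_check, max_attempts=3):
        feedback = ""
        for _ in range(max_attempts):
            candidate = llm(prompt + feedback)
            if passes_type_check(candidate):
                return candidate  # never show code that fails the type checker
            feedback = f"\nThe previous attempt failed to type-check:\n{candidate}"
        return None  # suggest nothing rather than suggest broken code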
And then why stop at type checking?
Let's make it so you only suggest code where the
tests pass, or you suggest code where the tests don't pass, but then you also suggest an update
to the test, because sometimes the tests aren't right. And all the advances in
the future with code AI that I think are critical to make it so amazingly valuable are about
having the AI check the work and bring in its real-world intuition, so it's not relying on that human in the loop.
Yeah. I guess my concern would be latency,
right? Like if you've got to add not just generation, but then checking, linting,
et cetera, testing, correctly testing, you know, canceling out, like you got a lot more
in that buffer between the prompt, which we're all familiar with, to get the response and the ending of the response.
Like I always wonder, like, why does it take ChatGPT in particular time to generate my answer?
Like, is it really thinking and it's giving me like the stream of data on the fly?
Is there some sort of, is that an interface that's like part of usability or part of UX? And I just wonder in that scenario that you gave, would the latency affect the user experience?
Yeah, absolutely. Of course, right? Yeah. You know, we have incredibly tight latency
budgets. We look at getting the P75, 75th percentile latency well below 900 milliseconds.
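For readers unfamiliar with the metric, P75 is just the 75th percentile: the latency that three quarters of requests come in under. A quick illustration:

    import statistics

    # P75 latency: 75% of sampled requests complete at or below this value.
    latencies_ms = [120, 250, 400, 480, 610, 700, 850, 1400]
    p75 = statistics.quantiles(latencies_ms, n=4)[2]  # third quartile
    print(f"P75 = {p75} ms")  # the budget described here keeps this under ~900 ms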
And yeah, once you start invoking the LLM multiple times to check its own work, to go back and redo
the work, once you start invoking linters and type checkers, I think we've all been in a situation
where we hit save in a file in our editor and we see, oh, waiting for the linter to complete.
Sometimes that can take a few seconds in big projects.
This requires, I think, a rethinking of a lot of the dev tooling because in the past
it was built for this: a human is editing a single file at a time,
and it's interactive; or it's in CI, where latency is not that sensitive.
But I look at just the difference between, like, Bun running tests in a JavaScript project versus
another test runner, and bringing that down to 200, 300 milliseconds instead of five or 10 seconds or
more is really critical. I look at things like Ruff, rewriting a Python linter in Rust to make
it go so much faster. I mean, I wish something
like that existed for ESLint. And we need to bring the latency of all these tools that devs use in
that edit loop down by several orders of magnitude to make this possible. But I think the reward,
the pot of gold at the end of the rainbow, if we do all that is so great because it will enable AI
to take off so much of the grunt work that we ourselves do. So I don't know if that's the motivation behind some
of these linters and new test runners and so on, but I love that those are coming out there because
that will make this fundamentally possible.
What's up, friends?
This episode is brought to you by one of my good friends,
one of my best friends, actually,
one of our good friends, Tailscale.
And if you've heard me on a podcast,
you've heard me mention Tailscale several times
in unsponsored ways, because I just love Tailscale. And I reached out to them and said,
hey, we're talking to a lot of developers. I love Tailscale. I'd love to have you guys sponsor us.
And they reciprocated. So what is Tailscale? Well, Tailscale is a programmable networking
software that is private and secure by default, which makes it the easiest way to
connect devices and services to each other wherever they are. You get secure remote access
to production, databases, servers, Kubernetes, VPCs, on-prem stuff, anything. It's fast, like
really, really fast. And you can install it anywhere. Windows, Linux, BSD, Mac OS, iOS, Android.
It is an easy-to-deploy, zero-config, no-fuss VPN.
It is zero-trust network access that every organization can use.
So, what can you do with Tailscale?
You can build simple networks across complex infrastructure. They have ACL policies to securely control access to devices and services with their next-gen network
access controls. You can replace legacy VPN infrastructure in just a few minutes. You can
use device posture management to granularly restrict access to resources based on a wide
range of attributes like OS version, user location, and more.
You can save time with a trusted and proven networking solution that just works.
You can transform networking security with a modernized set of solutions built
on revolutionary protocols designed for today's mobile and multi-cloud era.
You can securely connect to anything, no matter what operating system,
hardware type, or configuration is in place, such as your GitHub runner to a database on-prem. You can authenticate without authentication
using Tailscale's app connectors. You can send files securely to any node on your Tailnet using
Taildrop. You can code from your iPad even with Tailscale and VS Code Server. There's just so
much you can do with Tailscale. Like the limits are literally limitless.
And that's why I love Tailscale.
I've got 28 machines personally that I manage on my tailnet.
And if you've heard me talk on the podcast,
you've heard me mention Tailscale at least once or twice.
But today you can try Tailscale for free up to 100 devices like me
and three users for free at Tailscale.com.
No credit card required.
Again, it's free.
Up to 100 devices and three users,
all for free at tailscale.com.
No credit card required.
Have fun.
So recently at All Things Open, Jared conducted a panel with Emily Freeman and James Q. Quick.
And really, one of the questions he asked was, you call it grunt work in this scenario,
and Jared argued that maybe that's the joy work. Does AI steal the joy work,
Quinn? Like, are we, like some of this stuff is fun and some of it is just a means to an end.
Like not all developers really enjoy writing the full function themselves. And some of them really
do because they find coding joy. What are we doing here? Are we stealing the joy?
I love nothing more than having six hours of flow time
to fix some tech debt, to do a really nice refactor.
And as CEO, sometimes that's the best code
for me to be writing,
because I do love coding rather than some new feature,
some new production code.
So yeah, I totally feel that.
And at the same time,
I choose to do that by writing in TypeScript, by using a GUI editor,
you know, Emacs or VS Code. I choose to do that by writing in Go. I'm not choosing to do that by
going in and tweaking the assembly code or, you know, we're not using C. And so I've already
chosen a lot of convenience and quality of life improvements when I do work on tech debt.
It's not clear to me that
the current level is exactly right. I think that you can still have a lot of the really
rewarding puzzle solving, the fun parts of the grunt work and have the AI do the actual grunt
of the grunt work. I think it's different for everyone. But as we get toward AI starting to,
to be clear, it's not here yet. But as we work as an industry toward AI being able to take over more entire programming tasks, like build a new feature, then it's going to take over both the grunt work and the fun work from the programmer.
And if someone only wants to use half of that, that's totally fine.
My co-founder, Beyang, he uses Emacs, but in a terminal, not in GUI. So, you know,
it's a free country and devs can choose what they want. That's right. Okay. I guess I was saying
that more as a caution to you because half of the audience cringed when you said grunt work and the
other half was like, you know, you're taking my joy away. And I was like, some of them are happy.
And then some of them were like, let's not use a pejorative towards the work we love.
You know what I mean? Well, I think grunt work is different for each person. I think a lot of people
would consider the grunt work to be all the meetings and all the reading of documents and
the back and forth and the bureaucracy of their job. They hate that part and they just love coding.
And I say, we need AI in the future to be able to digest all that information that comes
from all these meetings and to distill the requirements. So let the AI do that for them.
And then they can just have a glorious time coding. And we used to joke with Sourcegraph that
we hoped that we'd, Beyang and I would create Sourcegraph. It would be so damn good that we
could just retire and spend all our day coding in some cave. And look, I totally feel that. We want to bring that to everyone. And if
they want to do that, then they should be able to do that. Yeah. So two years ago, we would not
have opened this conversation up with a discussion on artificial intelligence. You know, like two
years ago, you were talking about, like, that was the last time. You did the work, not me. I didn't even look at when the last time was. I knew it was not yesterday.
I knew it was not last year. I just wasn't sure how far back it was.
What has changed with Sourcegraph since then? You've grown, obviously, as a
company. You've got two new
pillars that you stand on as a company. You had code search was the origination
of the product. And you, you know, you sort of evolved that into more of an intelligence platform, which I
think is super wise. And then obviously Cody and code generation and code understanding,
artificial intelligence, LLMs, all the good stuff. What has changed really from a company level?
Like, what size were you back then? Can you share any attributes about the company?
You know, how many of these FAANG
and, you know, large enterprise customers
did you have then versus now?
Did they all come for the Cody
instead of the Sourcegraph?
Or what do they, or is it all one big meatball?
How do you describe this change, this diff?
Yeah, two years ago, we were code search.
And that's like a Google for all the code in your company. It's something that you can use while coding to see how did
someone else do this? Or why is this code broken? How does this work? You can go to find references,
go to definition across all the code. And at the time, we were starting to introduce more kinds of
intelligence, more capabilities there. So not just finding the code, but also fixing the code with batch changes. With Code Insights,
you could see the trends. For example, if you're trying to get rid of some database in your
application, you could see a graph where the number of calls to that database is going down.
And hopefully the new way of doing things is going up. So all these other kinds of intelligence and that stuff is incredibly valuable.
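Conceptually, that kind of insight is just a pattern count sampled over time. Here's a rough sketch of the idea using plain git; the pattern and tags are invented stand-ins, and this is not the Code Insights API:

    # Rough sketch: count matches of a pattern at several revisions to see a trend.
    import subprocess

    def count_matches(pattern, rev):
        out = subprocess.run(["git", "grep", "-c", pattern, rev],
                             capture_output=True, text=True).stdout
        # each output line looks like "rev:path:count"
        return sum(int(line.rsplit(":", 1)[1]) for line in out.splitlines())

    for rev in ["v1.0", "v1.5", "v2.0"]:  # stand-in tags; use your own revisions
        print(rev, count_matches("legacy_db.query", rev))  # hopefully trending down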
Millions and millions of devs love code search and all these things.
And with code search, that was about feeding that information to the human brain, which
is really valuable.
And the analogy that I would say is ChatGPT, again, changed the world, but we all use Google search or whatever
search you use way more than you use ChatGPT today. And yet everyone has a sense that something
like ChatGPT, that kind of magic pixie dust will be sprinkled on search and we'll all be using
something that's kind of in between. ChatGPT is probably not the exact form factor of what we'll be using. Google Search
circa two years ago is not what we'll be using, but there will be some kind of merger.
And that's what we've, this journey that we've been on over the last couple of years,
taking code search, which fundamentally builds this deep understanding of all the code in your
organization. And we got a lot of reps under our belts by making it so that humans find that
information useful. Now, how do we make that information useful to the AI and then make that
AI ultimately useful to the human? So how can we use this deep understanding of code to have Cody,
our code AI, do much better autocomplete that has higher accuracy than any other tool out there?
How can we have it use that understanding of how you write tests throughout your code base so that it will write a better new test for you using your framework, your conventions?
How do we make it really good at explaining code? Because it can search through the entire code base
to find the 15 or 20 relevant code files. So we're building on this foundation of code search.
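In spirit, that pipeline is: search first, then hand the hits to the model. A minimal sketch with invented stand-ins (search and llm are hypothetical callables, not Sourcegraph's actual APIs):

    # Hypothetical sketch: ground the LLM's answer in code search results.
    def answer_about_code(question, search, llm, k=20):
        hits = search(question)[:k]  # top-k relevant files or snippets
        context = "\n\n".join(f"# {h['path']}\n{h['snippet']}" for h in hits)
        prompt = (f"Answer using this code from the repository:\n{context}\n\n"
                  f"Question: {question}")
        return llm(prompt)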
And what I'll say with code search is I use it all the time. I think every dev would
do well to use code search more. It's so good at finding examples and reading code is the best way
to up-level as a software engineer. But Cody and code AI is something that every dev thinks that
they should be using. So given that they solve so many of the same problems, this problem that
caused us to found the company, that it's so damn hard to build software, it's really hard to understand code. They both solve the same problem.
And if what people want is Cody more than code search, well, code search still exists and
it's growing and it's the foundation for Cody. But we're going to be talking about Cody all day
because that's what people are asking for. And that's what we hear from our users. We see a lot
of people come in for Cody and then they also realize they love code search, but I think Cody is going to be the door in. It's so easy to get started and it is
just frankly magical. I think everyone can speak to that magic that they see when AI solves their
problem, like you did with that picture frame example. Yeah. Can you speak to how easy
it was to sell Sourcegraph, the platform, two years ago versus how easy it is to sell it now?
You kind of alluded to it to some degree,
but can you be more specific?
Yeah.
Two years ago would have been 2021, the end of 2021,
which was the peak of the market,
the peak of kind of everything.
And I think there's been a lot of big changes in how
companies are hiring software engineers and budget cuts and so on. So we've seen a big change
over the last two years. Code search has grown by many, many times since then. But what we saw is
with companies realizing, hey, maybe we're not going to be growing our engineering
team at 50% each year.
We saw a lot of developer platform, developer happiness, developer experience initiatives
get paused in favor of cost cutting.
How can we figure out what are the five dev tools that we truly need instead of the 25
where in the past, if a dev loved something,
then yeah, we'd go and plop down a bunch of money. And so we were well-positioned because
we had such broad usage and because a lot of companies looked at us as a platform. They
built stuff against our API and every team used it. We were in a good position there.
I think though, if AI had not come out about a year ago, then I don't know what
the dev stack would look like. I think you'd have a lot of companies that realized, hey, we've been
keeping our eng hiring really low for the last two years. You know, I'm not sure. Now, companies see AI
as a way to get as much as they were getting in the past, but with fewer developers. And developers
see it as a way to improve their productivity. And I think the missing piece that we're not
fully seeing yet is there's a lot of companies out there that would love to build more software,
but were just unable to because they didn't know how to. They were not able to hire a critical
mass of software engineers. They were not in some of the key engineer hiring markets.
Developers are too
expensive for them to hire. But all these other companies that would have loved to build software,
they were just bottlenecked on not being able to find the right engineers. I think that AI is going
to help them overcome that. And you're going to see software development be much more broadly
distributed around a lot of companies. And that is what's exciting. So looking at the overall
software developer market, around 50 million professional
developers, around 100 million people, they write code in some way in their job, including
like data analysts. I fully expect that number to go up, and I fully expect pretty much every
knowledge worker in the future is going to be writing some code in some way.
So I'm not pessimistic on, you know, the value of learning how to code at all.
But there's just been massive change
in how companies are seeing software development
and the structure of teams over the last couple of years.
I think when we talked last time,
you were saying either exactly or in a paraphrasing way
that it was challenging to sell code search,
that it was not the most intuitive thing to offer folks. You obviously,
founders, understand how deeply it was useful because you worked inside of Google. You saw
a different lens towards code search. And most people just saw Command F or even Command Shift
F as just something that was built in rather than something that you went and bought and
stood up separately as a separate instance that had this other intelligence.
And that was hard to sell.
However, code search that is being understood by an LLM,
Cody, is a lot easier to offer
because you can speak to it
very much like we've learned
how to chat with artificial intelligence
to generate and whatnot like that.
So I'm curious, even when we were done talking on the last time on Founders Talk,
you weren't ready to share this intelligence side, which was also the next paradigm.
I think this intelligence factor, obviously code search gives you intelligence
because you can find and understand more,
but it was the way that you built out insights and just different things like that
that allowed you to not only manually, like a caveman or cave person, type in, you know,
all these things you can into search. You could just sort of form an intuitive graph towards,
like you mentioned before, the cost to a database going down and cost to the new database going up.
And you can see the trend line towards progress clearly
and even share that dashboard with folks
who are not in development, in engineering.
You can share them with comms or marketing or CEOs
or whomever is just not daily involved
in the engineering of their products.
And I'm just curious, give me really specifics,
how easy it is to sell now
because Cody just makes the accessibility, the understandability of what Sourcegraph really wants to deliver, so much easier.
Yeah, Cody does make it so much easier.
And yeah, going back two years ago, we had a fork in the road.
We could have either made just code search something that clicked with so many more developers and overcome that kind of question,
which is, you know, I've been coding for 10 years. I haven't had code search. I have it in my editor.
Why would I need to search across multiple repositories? Why would I need to look through
different branches? Why would I need kind of, you know, global fine refs and go to definition?
Why would I need regex search that works? We got a lot of questions like
that. We could have just doubled down on that and tried to get, you know, for us, way more devs
using it for open source code and within our customers, 100% of every developer and all of
our customers using code search. We could have done that. What we decided to do was go deeper
into the intelligence to build things that were exposed as more power
user tools like the Code Insights. Code Insights is something that platform teams, that architects,
that security teams, managers, they love. It has incredible value for them. But for the average
application engineer, they're not really looking at Code Insights because they're not planning these
big code base wide refactors. Same with batch changes. Platform teams love it. People
that have to think in terms of the entire code base rather than just their feature, they need it.
And I think we got lucky because given that right around that time, that's when developer hiring
began to really slow down. It was really helpful for us to get some really deep footholds in these critical decision
makers, just from a sales point of view in companies to have like very deep value instead
of kind of broad diffused value. So that ended up being right. It also ended up being right in
another way, which is we got deeper in terms of what does Sourcegraph know about your code base?
And that was valuable for those humans over the last couple of years,
but it's also incredibly valuable now because we have that kind of context that can make our
code AI smarter. But I do really lament that most devs are not using code search today. I think it's
something that would make them much better developers. And there's absolutely part of me
that wishes I could just go have 50 amazing
engineers here work on just making it so that code search was so damn easy to use and solved
every developer's problem. Now we're tackling that with Cody because we got to stay focused.
And to your point, they do solve the same problem. And, you know, if you're with code search,
if you're trying to find out how do I do this thing in code, code search will help you find
how all of your other colleagues did it.
Cody will just look at all those examples and then synthesize the code for you.
And so, you know, there's so much similarity and we are just finding that Cody is so much easier to sell.
But we did have a cautionary moment that I think a lot of other companies did back in February to May of 2023. This year,
if you said AI, if you said our product has AI, literally everyone would fall over wanting to
talk to you. And they'd say, my CEO has given me a directive that we must buy AI. We have this big
budget and security is done. Legal is done. We have no concerns. We want it as soon as possible.
And it didn't matter if the product wasn't actually good.
People just wanted AI.
And that, I think, created a lot of distortions in the market.
I think a lot of product teams were misled by that.
I'm not saying that the customers did anything wrong.
I think we were all in this incredible excitement.
And we realized that we didn't want to get carried away with that. We wanted to
do the more boring work, the work of take the metric of accuracy and DAUs and engagement and
overall a lovable product and just focus on that. We did not want to go and, you know, be spinning
up the hype. So we actually, you we actually really pulled back some of the stuff and
we level set with some customers that we felt wanted something that nobody could deliver.
And that was one of the ways that we came up with these levels of code AI,
taking inspiration from self-driving cars. We didn't want the hype to make it so that
a year from now, everyone would become disillusioned with the entire space.
So definitely a big learning moment for us.
And if there's an AI company out there that is not looking at those key user metrics that have always mattered, the DAU, the engagement, the retention, the quality, then you're going
to be in for a rude awakening at some point because exploratory budgets from customers
will dry up.
Well said.
I think it's the right place at the right time, really.
I would say the right insight a long time ago to get to the right place,
to be at the right place at the right time,
because everything that is Cody is built on the thing you said you lament
that developers aren't using.
It's built on all the graph and all the intelligence that's built by the
ability to even offer a code search at a speed that you offer it.
And then obviously your insights on top of that.
So it's like you took,
it's like having the best engine and putting it in the wrong car and nobody
wants to buy the car.
And then suddenly you find like this shell that performs differently.
Maybe it's got better.
I don't know.
Just in all ways, it feels better to use.
And it's more just straightforward to use.
You still have the same engine.
It's still the same code search, but now it's powered by something that you can interact with in a meaningful way. Like we've learned to use with having a humanistic conversation with software running on a machine.
You know, I think that's just such a crazy thing to be.
That's why I wanted to talk to you about this because you've had,
I mean, people will think,
some people think that Sourcegraph was born a year or two ago
that know your name, right?
And you've been like on a decade journey.
I don't even know what your number is.
It's like it's getting close to a decade, if not past a decade, right?
Yeah, we started Sourcegraph a decade ago.
And so I've been a fan of y'all's ever since then. And for a long time, just a fan hoping
that you would get to the right place because you provided such great value that was just hard to
extract, right? The ability to extract the value from Sourcegraph is easier thanks to Kodi than it
was through code search because of obvious things we just talked about.
That's an interesting paradigm to be in, a shift to be in
because you're experiencing that, I'm assuming to some degree,
a hockey stick-like growth as a result of the challenges you faced earlier
that now are diminished to some degree, if not all degrees,
because of the ease of use
that Cody and things like Cody provide. Yeah. And, you know, code search, when we started bringing
that to market in 2019, that was a hockey stick. But now we realize that that was a
little league hockey stick and that now this is the real hockey stick. Right. And I think that I've been reading,
I love reading history of, you know,
economics and inventions and so on.
And I've been reading about the oil industry.
The oil industry got started when someone realized,
oh, there's oil in the ground and it,
this kerosene can actually light our homes much better
and much more cheaply than other kinds of oil from
whales, for example. And initially, oil was all about illumination, make it so that humans can
stay up after 6 p.m. when the sun goes down. And that was amazing. But that's not how we use oil
today. Oil is just, you know, this energy that powers everything, that powers transportation, that powers manufacturing,
that powers heating and so on. And there were people that made fortunes on illumination oil,
but that pales in comparison to the much better use of oil for our everyday lives. And now,
of course, you have renewables and you have non-oil energy sources.
But for a long time, we saw that that initial way of using oil was actually not the most valuable
way. So seeing that this just happens over and over, that a new technology is introduced and
you're not quite sure how to use it, but you know that it's probably going to lead to something.
And that's how we always felt with code intelligence. And that's, you know, us getting new intelligent automation
is so exciting for us now. One of the really exciting things we're seeing is because
so many people are shocked that these LLMs, you speak to them like humans. They seem to feel much
more human-like than what we perhaps anticipated AI would be like. We think of AI from movies as being very robotic, as lacking the ability to
display empathy and emotion and thought processes. But actually, that is not at all what we see with LLMs.
I've seen some studies even that show that LLMs can be better at empathy than a doctor with a
poor bedside manner, for example. And for us, this is absolutely critical because all this work we put into bringing information about code to the human brain, turns out that AI needs that same
information. The AI, well, you know, the human, if you started a new software engineering job,
you get your employee badge, you go read through the code, read through the docs. If there's an
error message, you'll look at the logs, you'll go in team chat. You'll join meetings. That's how
humans get that information. And AI needs all that same information. But the problem is you cannot
give AI an employee badge and have them roam around the halls and stand at the water cooler.
That's just not how AI works. Yet. So we just happen to have broken down all that information
into how can we think of it programmatically. And now that's how we teach it to Cody.
I always throw the word yet in there whenever I talk about status quo
with artificial intelligence or innovation, because my son, he's three, he loves to watch
the robot dance video, he calls it. And it's actually, what is it, Boston Dynamics. It's that
Do You Love Me song, and they have all the robots dancing to it. And I'm just thinking, like, when is the day when it's more
affordable, or to some degree more affordable, to produce that kind of humanoid-like thing that
can perform operations? Now, I know it's probably not advantageous to buy an expensive
Boston Dynamics robot to stand at your water cooler, but that's today,
right? Like, what if 50 years from now, it's far more affordable to produce those, and they're
mass-produced with, you know, the things that are completely separate from it in the future?
Maybe it might make sense eventually to have this water-cooler-like scenario where you've got a
robot. That's the thing that you're talking to. I'm just saying, yeah, that's why I said the word yet.
Yeah, yeah. And you've got to have this humility, because who knows?
What's up, friends?
This episode is brought to you by imgproxy.
imgproxy is open source and optimizes images for the web on the fly.
It uses the world's fastest image processing library under the hood, libvips.
It makes websites and apps blazing fast while saving storage and SaaS costs. And I'm joined by two guests today, Sergey Alexandrovich, author, founder, and CTO of
imgproxy, and Patrick Byrne, VP of Engineering at Dribbble, where they use imgproxy to
power the massive amounts of images from all of Dribbble.com.
Sergey, tell me about imgproxy.
Why should teams use this?
Everyone needs to process their images.
You can't just take an image
and just send it to users' browsers
because usually it's megabytes of data.
And if you have lots of images like Dribbble does,
you need to compress them.
You need to optimize your images
and you need them in the best quality you can provide.
That's where imgproxy shines.
Very cool. So Patrick, tell me how Dribbble is using imgproxy.
Being a design portfolio site, we deal really heavy in all sorts of images from a lot of
different sizes, levels of complexity. And when we serve those images to the users,
those really have to match exactly what the designer intended. And the visitors need to receive those images in an appropriate file size and dimensions, depending
on whatever their internet connection speed is or their device size is. And that was just a
constant struggle really to really thread that needle throughout the course of the platform,
using a handful of different tools in maintaining that right balance of a high degree of fidelity,
high degree of quality without sacrificing the visitor's experience.
And when we were exploring using imgproxy,
we were able to spin up using the open source version of the product,
a small ECS cluster to just throw a few of those really tough cases
that went through our support backlog,
looking at some of the cases of people reporting.
And almost to a T, we aced every one of those cases.
Wow. So it seems like imgproxy was really impressive to you.
Yeah, Adam, I just have nothing but positive things to say about using imgproxy.
The documentation is crystal clear.
Out of the box, it does everything you need it to.
Tweaking it to meet your specific needs is incredibly easy.
It's wicked fast.
It deploys real simple.
And our compute costs have gone down over the open
source tools that we've used in the past. Even including the imgproxy Pro license,
it still costs us less to use and gives our visitors a better experience. So as an engineer,
I like using it. And as an engineering manager, I like what it does to our bottom line.
So imgproxy is open source and you can use it today. But there's also a pro version
with advanced features.
Check out the pro version
or the open source version
at imgproxy.net.
The one thing I love so much about this
is that no matter which you choose,
the open source route
or the advanced features
and the pro version,
you use the same way to deploy it.
A Docker image,
one that is from Docker Hub that everybody can use,
it's open source, or one that's licensed differently for those advanced features.
That's the same delivery mechanism via Docker Hub. It's so cool. Once again, imgproxy.net.
Check out the demo while you're there and check out the advanced features for the pro version.
Once again, imgproxy.net.
Okay. So let's talk about, let's talk about winning. Can we talk about winning for a bit?
So if you were on this little league hockey stick with search, and now it's obviously a major league hockey stick, and I think you're head-nodding to that to some degree, if not vocally affirming that. Yeah. When I search
GitHub Copilot versus, because I think Copilot has a brand name, because they were one of the first AI code focused tools out there.
Now, obviously, ChatGPT broke the mold and became the mainstream thing that a lot of people know about.
It's not built into editors directly.
It might be through GitHub Copilot and Copilot X.
But even when I search GitHub Copilot X or just Copilot by itself versus,
Cody does not come up in the list.
Tabnine does and even VS Code does, and that might be biased to my Google search.
And this is an example where I'm using Google versus ChatGPT to give me this versus.
Now I might query ChatGPT and say, okay, who competes with GitHub Copilot?
And you might be in that list. I didn't
do that exercise. What I'm getting at is of the lay of the land of code AI tooling,
are you winning? Who is winning? How is it being compared? What are the differences
between them all? Yeah. Copilot deserves a ton of credit for being the first really good code AI tool in many ways. And I think at this
point, it's very early. So just to put some numbers to that, GitHub itself has about 100
million monthly active users. And according to one of GitHub's published research reports,
that's where I got that 0.5% number from, they have about a million yearly active users. And
that's the people that are getting suggestions, not necessarily accepting that even.
So, you know, a million yearly actives, what does that translate into in terms of monthly
actives?
That's a tiny fraction of their overall usage.
It's a tiny fraction of the number of software developers out there in the world.
So I think it's still very early.
And, you know, for us, for other code AI tools out there, I think people are taking a lot of
different approaches. There's some that are saying, we're just going to do the cheapest,
simplest autocomplete possible. And there's some that are saying, we're going to jump straight to
trying to build an agent that can replace a junior developer, for example. I think that you're seeing
a ton of experimentation. What we have, which is unique, is this deep understanding of the code, this context.
And another thing that we have is we have a ton of customers where Sourcegraph is rolled
out over all of their code and working with those customers.
I mean, I mentioned some of the names before.
These are customers that are absolutely on the forefront that want this code AI, and it's a goldmine for us to be able to work with them. So, you know,
when you look at what's our focus, it's how do we build the very best code AI that actually solves
their problem? How do we actually get to the point where the accuracy is incredibly high,
and we see Cody having the highest accuracy of any Code AI tool based on completion acceptance rate,
how do we get to the point where every developer
at those companies is using Cody?
And that's another thing where we've seen
there's a lot of companies where,
yeah, they're starting to use Code AI
and five devs over here use Copilot,
five over here use something else.
But none of this has the impact that we all wanted to have
until every
dev is using it. As we learned with code search, it's so important to make something that every
dev will get value from, that will work for every dev, that will work with all their editors,
that will work with all their languages. And that's the work that we're doing now.
So I don't know the particular numbers of these other tools out there. I think that everyone has
to be growing incredibly quickly just because of the level of interest. But it's still very early and most devs are up for grabs.
I think the thing that's going to work is the code AI that every dev can use and instantly see
working. And what are they going to look at? They're going to say, did it write good code for
me? Is that answer to that code question correct or not? Did it cite its sources? Does it write a good test for me?
And it's not going to be based on hype.
So we just see a lot of, it's kind of like eating your vegetables work.
That's what we're doing.
Sometimes it's tempting when I see these other companies come out with these, you know, super
hyped up promises that, you know, ultimately I think we all try their products and it doesn't
actually work.
We do not want to be that kind of company, even though that could probably juice some installs or
something like that. We want to be the most trusted, the most rigorous. And if that means
that, you know, we don't come up in your Google search autocomplete, well, I hope that we
solve that by the time Cody is GA in December, but so be it, because our customers are loving it.
Our users are loving it. And we're just so laser focused on this accuracy metric. And by the way, that accuracy
metric, we only can do that because of the context that we bring in. We look at when we're trying to
complete a function, where else is it called across your entire code base? That's what a human
would look at to complete it. That's what the AI should be looking at. We're the only one that does that. We look at all kinds of other context sources. And it's taken a lot of discipline, because there is
a lot of hype, and there's a lot of excitement. It's tempting to do all this other stuff. But I'm
happy that we're staying really disciplined, really focused there.
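To make that last point concrete, here's a minimal sketch, in TypeScript, of what "look at where else the function is called across your entire code base" might mean mechanically. Every type and function name here is hypothetical, not Sourcegraph's actual API; it only illustrates packing call sites into a model's context window under a token budget.

```typescript
// Hypothetical shapes; these are not Sourcegraph's real types.
interface CallSite {
  repo: string;
  file: string;
  line: number;
  snippet: string; // a few lines of code surrounding the call
}

interface CodeSearchClient {
  findReferences(symbol: string, limit: number): Promise<CallSite[]>;
}

// Collect call sites for the function being completed and pack as many as
// fit into a fixed token budget, in the order the search ranked them.
async function buildCompletionContext(
  client: CodeSearchClient,
  symbol: string,
  tokenBudget: number
): Promise<string> {
  const sites = await client.findReferences(symbol, 50);
  const chunks: string[] = [];
  let used = 0;
  for (const site of sites) {
    const chunk = `// ${site.repo}/${site.file}:${site.line}\n${site.snippet}`;
    const cost = Math.ceil(chunk.length / 4); // rough chars-per-token estimate
    if (used + cost > tokenBudget) break;
    chunks.push(chunk);
    used += cost;
  }
  return chunks.join("\n\n");
}
```

The resulting string would be prepended to whatever the model already sees from the open file, which is the "what a human would look at" idea expressed as code.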
The advantage I think you're alluding to directly is that Sourcegraph has the understanding of the code base that it already has available to it. That might require some understanding of how Sourcegraph actually works, but to be quick about it, you sort of ingest one or many repositories, and Cody operates across those one or many in an enterprise. You mentioned a couple different companies; pick one of those and apply it there. Whereas, famously and infamously, GitHub Copilot was trained primarily on code available out there in the world, which is not your repository. It's sort of everybody else's. You sort of inherit, to some degree, the possibility of error as a result of bad code elsewhere, not code here, so to speak.
I think Tabnine offered something similar, where they would train an artificial intelligence code tool based upon an understanding of your own code, although I'm not super deep and familiar with exactly how they work. We had their CEO on the podcast, I want to say, about two years ago, so we're probably due for a catch-up there to some degree.
But I think it's worth talking through the differences
because I think there's an obvious advantage
with Sourcegraph when you have that understanding.
And not only do you have that understanding,
like you said, you've done your reps.
You've been eating your vegetables
for basically a decade.
You know what I'm saying?
So you've kind of earned the efficiencies that
you've built into the team, you know, into the code base and into the platform to get to this
understanding, for one. And then actually having an LLM that can produce a result that's accurate is step two. You already had the understanding before, and now you're layering on this advantage, which I think is pretty obvious. A lot of your focus, it sounds like, is vertical, in terms of current customer base, versus horizontal across the playing field.
You probably are going after new customers, or maybe attracting new customers, but it sounds like you're trying to focus your reps on the customers you already have and embed further within.
Is that pretty accurate?
What's your approach to rolling out Cody?
How do you do that?
Here's my order of operations.
When I, every three hours, look at our charts.
First, I'd look at what is our accuracy?
Every three hours?
Oh yeah.
Yeah.
Do you have an alarm or something?
Or is this like a natural built-in habit you got?
I think natural built-in habit.
So first I look at what is our
accuracy, our completion acceptance rate, and how is that trending broken up by language, by editor,
so on. Next, I look at latency. Next, I look at customer adoption.
And next I look at DAU and retention. And that gets all this broad adoption, and everything is growing in a way that makes me really happy. But the first and most important thing is a really high quality
product. That is what users want. That's what leads to this growth in users. But that's also
what helps us make Cody better and better. That's what helps us make Cody so that it can do more of
the grunt work or whatever parts of the job the developers don't like.
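For reference, the completion acceptance rate Quinn keeps citing is simple to compute if you assume an event log of completions shown and accepted. A minimal sketch, with illustrative field names that are not Sourcegraph's real telemetry schema:

```typescript
// One row per completion shown to a user, as might come out of an
// analytics pipeline. Field names here are illustrative only.
interface CompletionEvent {
  language: string; // e.g. "go", "typescript"
  editor: string;   // e.g. "vscode", "jetbrains"
  accepted: boolean;
}

// Completion acceptance rate = accepted suggestions / suggestions shown,
// grouped by any key you care about (language, editor, and so on).
function acceptanceRate(
  events: CompletionEvent[],
  keyOf: (e: CompletionEvent) => string
): Map<string, number> {
  const shown = new Map<string, number>();
  const accepted = new Map<string, number>();
  for (const e of events) {
    const k = keyOf(e);
    shown.set(k, (shown.get(k) ?? 0) + 1);
    if (e.accepted) accepted.set(k, (accepted.get(k) ?? 0) + 1);
  }
  const rates = new Map<string, number>();
  for (const [k, n] of shown) rates.set(k, (accepted.get(k) ?? 0) / n);
  return rates;
}
```

Called as `acceptanceRate(events, e => e.language)` or `e => e.editor`, this gives exactly the per-language and per-editor breakdown described above.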
If we were just to be at every single event
and we had all this content,
we could probably get our user numbers up faster
than making the product better,
but that's not a long-term way to win.
And so instead we're seeing
how do we use our code graph more?
How do we get better entire-code-base references?
How do we look at syntactical clues?
How do we look at the user's behavior?
How do we look at, of course, what they've been doing in their editor recently, like
Copilot does, but how do we take in other signals from what they're doing in their editor?
How do we use our code search?
How do we use conceptual search and fuzzy search to bring in where this concept of
GitLab authentication exists elsewhere in their code, even if it's in a different language?
How do we bring in really good ways of telling Cody what goes into a really good test?
If you just ask ChatGPT, hey, write a test for this function, it's going to write some code,
but it's not going to use your languages, your frameworks, your conventions, your test setup and teardown functions. But we
have taught Cody how to do that. That's all that stuff that we're doing under the hood,
but we don't need developers to know about that. What they need to see is just this
works. The code that it writes is really good. And by the way, with the things I mentioned,
those are six or so context sources that if
you compared other code AI tools, they're maybe doing one or two.
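The test-generation case mentioned above is easy to sketch. The difference between a bare model and a convention-aware tool is roughly whether the prompt carries the repo's own framework, example tests, and setup/teardown helpers. This is a sketch of the idea with hypothetical field names, not Cody's implementation:

```typescript
// Illustrative only: assembling a test-generation prompt that carries the
// repo's own conventions, instead of asking a bare model to guess them.
interface TestConventions {
  framework: string;        // e.g. "jest"
  exampleTest: string;      // an existing test file from the same repo
  setupTeardown?: string;   // shared setup/teardown helpers, if any
}

function buildTestPrompt(fnSource: string, conv: TestConventions): string {
  return [
    `Write a unit test using ${conv.framework}.`,
    `Follow the style of this existing test from the same repository:`,
    conv.exampleTest,
    conv.setupTeardown
      ? `Reuse these setup/teardown helpers:\n${conv.setupTeardown}`
      : "",
    `Function under test:`,
    fnSource,
  ]
    .filter(Boolean) // drop the empty section when there are no helpers
    .join("\n\n");
}
```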
But we're not stopping there.
Because take a simple example.
If you want the code AI to fix a bug in your code, well, it's probably got to go look at
your logs.
Your logs are probably in Splunk or Datadog or some ELK stack somewhere.
And so we're starting to teach Cody how to go to these other tools.
Your design docs are in Google Docs.
You probably got tickets in Confluence that have your bugs.
That's important for a test case.
And you also have your product requirements in Jira.
Jira, Confluence.
You want to look at the seven static analysis
tools that your company uses to check code, and that's what should be run. So all these other
tools, Cody will integrate with all of them. And they come from so many different vendors.
Companies have in-house tools. And that ultimately is the kind of context that any human would need
if they were writing code. And again, the AI needs that context too. We are universal. We've always been universal for code search,
no matter whether your code is in hundreds of thousands of repos or across GitHub, GitLab,
Bitbucket, and so on. And now it's, well, what if the information about your code,
the context about your code, is scattered across all these different dev tools? A good AI is going to
need to tap all of those. And that's what we're building.
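A minimal sketch of what "universal" could look like architecturally: each external system sits behind a common provider interface, and one query fans out to all of them. Again, these names are hypothetical, not Sourcegraph's real APIs.

```typescript
// A hypothetical "universal context" interface: each external system
// (logs, wiki, tickets, ...) is one provider behind a common shape.
interface ContextItem {
  source: string; // e.g. "datadog", "jira"
  title: string;
  content: string;
}

interface ContextProvider {
  name: string;
  fetch(query: string, limit: number): Promise<ContextItem[]>;
}

// Fan out one query (say, an error message from a bug report) to every
// configured provider and merge whatever comes back; a provider that
// fails just contributes nothing.
async function gatherContext(
  providers: ContextProvider[],
  query: string
): Promise<ContextItem[]> {
  const results = await Promise.allSettled(
    providers.map(p => p.fetch(query, 5))
  );
  return results.flatMap(r => (r.status === "fulfilled" ? r.value : []));
}
```

The design point is that adding Splunk, Jira, or an in-house tool is then one new provider, not a rework of the AI itself.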
And then you look at other vendors' tools, where maybe, in the future, their code AI will tap their version of logging, their internal wiki.
But very few companies use a single vendor suite
for everything and are totally locked in.
So that universal code AI is critical.
And that's how we're already ahead today with context that leads to better accuracy.
But that's how we stay ahead.
And developers have come to look at us as this universal, this independent company that integrates with all the tools they use and love.
So I think that's going to be a really long-term enduring advantage.
And we're putting a ton of investment behind this.
We're putting the entire company behind this.
So it takes a lot of work to integrate with dozens and dozens of tools like this.
For sure. What does it take to sell this? Do you have a sales organization? Who does that sales organization report to? Does that report to both you and Beyang collectively, or to you because you're CEO? Or is there somebody beneath you they report to, and that person reports to you? And, you know, whenever you go to these metrics every three hours and you see, let's say, a customer that should be growing at a rate of X but they're not, do you say, hey, so-and-so, go and reach out to so-and-so and make something happen? Or, you know, get a demo to them, because we're really changing the world here, and they need to be using this world-changing thing, because we made it, and they're already using us, and all the good things. How does action take place? How does execution take place when it comes to really winning the customer, getting the deal signed? Are they custom contracts? I see a way where I can sign up for free, and then also contact sales. So it sounds like it's kind of PLG-esque. You can start with the free tier, but are most of these deals, are they like homegrown?
Is there a sales team?
Walk me through the actual sales process.
Yeah, everyone at Sourcegraph works with customers in some way or another.
And we've got an awesome sales team.
We also have an awesome technical success team that goes and works with users that are customers.
We see a few things come up.
When I look at a company, sometimes I'm like,
man, if every one of your developers had Kodi tomorrow,
they would be able to move so much faster.
And yet, I can't just think that and expect that to happen.
So one of the reasons that we see companies slower to adopt code AI than perhaps they
themselves would even like to is they're not sure how to evaluate it.
They're not sure how to test it.
They've got security and legal questions, but sometimes they want to see what is the
improvement to developer productivity.
Sometimes they want to run a much more complex evaluation process for code AI than they would for any other tool out there, just because there's so much scrutiny and nobody wants to mess this up.
So, you know, what we advocate for, what GitHub advocates for is there's so much latent value here.
Look at accuracy.
Look at that completion acceptance rate.
And that is the quality metric. And then there's a lot of public research out
there showing that if you can show a favorable completion acceptance rate inside of a company,
then that will lead to productivity gains rather than having to do a six-month-long study
inside of each company. So that's one thing that helps.
Another thing is sometimes companies say, we want to pick just one code AI tool.
And I think that's not the right choice.
That would be like a company picking one database in the year 1980 and expecting that to stick
forever.
This space is changing so quickly and different code AI tools have different capabilities.
So we always push for get started with the people that are ready to use it today rather
than trying to make some big top-down decision for the entire organization.
Okay, so two co-founders deeply involved day-to-day.
One thing I really appreciate, and I often reference Sourcegraph, and I suppose you indirectly by mentioning Sourcegraph, sometimes you by name, you and Beyang by name, but sometimes just the co-founders.
So I lump you into a moniker of the co-founders.
And I will often tell folks like, hey, if you're a CEO, like I often talk to a lot of different CEOs or founders and they really struggle to speak about what they do.
They literally cannot explain what they do in a coherent way very well.
It happens frequently, and those things do not hit the air, let's just say, right? We podcast primarily, or I have bad conversations about possible partnerships and possibly working with them. And it's a, it's a, uh, it's a red flag for me. If I'm talking to a CEO in particular that has a challenge describing what they do, I'm just like, do we really want to work with them? But you can speak very well. Congratulations. You and Beyang are, like, out there as almost mouthpieces and personas in the world, not just to sell Sourcegraph, but you really do care. I think you both do a great job of being the kind of folks who co-found and lead that can speak well about what you do, why you're going the direction you're going.
And that's just not always the case.
How do you all do that?
How do you two stay in sync?
Has this been a strategy or did you just do this naturally?
What do you think made you all get to this position to be
two co-founders who can speak well about what you do? We have learned a lot since we started
Sourcegraph on this in particular. And even when describing Sourcegraph, we say code search,
and now we also do code AI. And I think some people are definitely relieved when they ask,
hey, what does Sourcegraph do? That it's four words, because I think there's a lot of companies where they do struggle to
describe what they do in four words, and yet we were not always at this point. I'm coming
here from a position where we have a lot of customers. We validated that we had product
market fit, that a ton of people use those products. And then I can say that. But before we
had that, there was a lot of pressure on me from other people and for me internally to make us
sound like more than code search, because code search feels like a small thing, which seems
silly in hindsight. Does Google think that search is a small thing? No. But there was a lot of
pressure to say we're a code platform, a developer experience platform, or that we revolutionize and leverage and all this stuff. There's a lot
of pressure, but nothing beats the confidence of product market fit of having a lot of customers
and users just say what you actually do. And one way we started to get that even before we had all that external validation was
Biang and I use our product all the time.
We code all the time.
I don't code production features as much, but we fundamentally know that code search
is a thing that is valuable, that code AI is a thing that's valuable.
And we felt that two weeks after we started the company.
We were building Sourcegraph and we were using Sourcegraph. And for me, it saved me
so much time because it helped me find that someone had already written a bunch of the code
that I was about to write for the next three weeks. So it saved me time in the first two weeks. And
from then it's clicked. So I think, you know, as a founder, use your product. And if you're not
using your product, make it so good that you would
use it all the time.
And then, you know, iterate until you find the thing that starts to work and then be
really confident there.
But it's tough until you've gotten those things.
That's cool, man.
It does take a journey to get to the right place.
I will agree with that.
And just know that out there you have Adam Stachowiak telling folks, the way to do it is Sourcegraph.
Thank you.
You guys are great co-founders.
You guys seem to work great together.
I see you on Twitter having great conversations.
You're not fighting with people.
You're not saying that you're the best.
You're just sort of out there kind of iterating on yourselves and the product and just showing up. And I think that's a great example of how to do it in this world where all too often
we're just marketed to and sold to.
And I don't think that you all approach it from a, we must sell more, we must market
more.
That's kind of why I asked you like the sales question, like, how do you grow?
And you didn't fully answer it.
And that's cool.
You kind of gave me directional answers.
You didn't give me particulars.
That's cool.
Yeah. Well, look, if you just take the customers that we have today, we could become one of the, probably the highest-adoption code AI tool, the highest-value code tool, just by getting to all the devs in our existing customers, not even adding another customer. And that just seems to me to be a much better way to grow: through a truly great product that everyone can use, that spreads, and that gets better when they use it with other people. That's the only thing that matters. Anything else and you're going to get to a local maximum. Very cool. Okay. So we're getting to the end of the show. I guess
what's next? What's next for Cody? Give us a glimpse into what's next for Cody. What are you
guys working on? For us, it's really two things. It's keep increasing that accuracy. Just keep eating our vegetables there. Maybe that's not the stuff that gets hype, but that's the stuff that users love.
And then longer term, over next year, it's about how do we teach Cody about your logs,
about your design docs, about your tickets, about performance characteristics, about where it's
deployed, all these other kinds of context that any human developer would need to know.
And ultimately, that's what any code AI would need to know if it's going to fix a bug,
if it's going to design a new feature, if it's going to write code in the way that
fits your architecture. And you don't see any code AI tools even thinking about that right now.
But that's something where I think we have a big advantage because we're universal.
All those pieces of information live in tools from so many different vendors, and we can
integrate with all of them, whereas any other code AI is going to integrate with the locked-in suite.
And you're probably not using whatever vendor's tools for a wiki, for example, and their logs
and all that.
So that's a huge advantage.
And that's how we see code AI getting smarter and smarter, because it's going to hit a wall unless it can tap that information. And you already see other code AI tools hitting a wall, not getting much better over the last one or two years, because they cannot tap that context. It's all about context, context, context, whether you're feeding that into the model at inference time, whether you're fine-tuning on that. It's all about the context.
So that's what we're going to be completely focused on.
And we know the context is valuable if it increases that accuracy.
And what a beautiful situation, that with this incredibly complex, wide-open space, you actually can boil it down basically to a single metric.
So that's our roadmap.
Just keep on making it better and smarter and in ways that means developers are going to say,
wow, it wrote the right code.
And I didn't think that it could write an entire file.
I didn't think it could write many files.
I didn't think it could take that high level task
and complete it.
That's what we're going to be working toward.
Well said.
Very different conversation this time around
than last time around.
And I appreciate that.
I appreciate the commitment to iteration,
the commitment to building upon the platform
you believed in early on to get to this place.
And yeah, thank you so much for coming on, Quinn.
It's been awesome.
Yeah, thank you.
So this episode did sit in cold storage for a moment,
but it was so good.
We had to make sure we put it out there.
And I'm always a big fan of talking to Beyang Liu or Quinn Slack from Sourcegraph. Those two co-founders are so impressive to me. One of my favorite things about Sourcegraph is how heavily involved they are in the outreach and public opinion and public dialogue happening around their ecosystem.
They're very, very vocal. Some CEOs, some CTOs, some co-founders hire people and let them be
the spokesperson. But Beyang and Quinn lead the way as co-founders, as CEO and CTO, who are out there speaking to the community.
And that's impressive.
Very impressive.
Obviously, also a big fan of Sourcegraph and happy to work with them anytime we get a chance.
So there is a bonus for our Plus Plus subscribers.
Yes, it is better.
It's better.
It's always better in the plus plus zone. changelog.com slash plus plus.
Drop the ads.
Get closer to that cool Changelog metal.
Bonus content like today.
Sometimes bonus episodes, free sticker pack.
Kind of cool.
And we love you for supporting us directly with your hard earned dollars.
Truly.
We appreciate it.
OK, so big thank you to our sponsors today,
Crab Nebula, huge launch for Crab Nebula Cloud,
crabnebula.dev slash cloud.
Check it out.
Tailscale, we love them.
You know I love Tailscale.
I got a tailnet.
I'm happy to promote it.
Tailscale.com.
And also our friends at imgproxy, processing images in real time for the frontend web. Such a cool service. Check them out at imgproxy.net. They are open source, and they have a pro version that has advanced features, all distributed via Docker Hub. So cool. And of course Fly.io, the home of changelog.com, sentry.io, and also typesense.org. Use the code changelog to get $100 off the team plan for Sentry. Check them out at sentry.io. Launch an app for free
right now at fly.io and spin up a super fast in-memory search using Typesense
Cloud. Check them out at
typesense.org. Also open source. Those beats, they bang because Breakmaster Cylinder brings banging beats, and
we love them, and I hope you love them.
Today we launched on the feed the newest album, Dance Party. It's available as a podcast right before this one in the feed. If you're in the master feed, it might be a few beforehand, but I don't know. But it is in the feed. So listen to Dance Party right here in your podcast client, or go on Spotify or iTunes or Bandcamp or wherever you want to listen to our beats. Okay, that's it. This show's done. Thank you so much for tuning in. We'll see you on Friday.