The Changelog: Software Development, Open Source - Reaching industrial economies of scale (Interview)
Episode Date: March 12, 2025. Beyang Liu, the CTO & Co-founder of Sourcegraph, is back on the pod. Adam and Beyang go deep on the idea of "industrializing software development" using AI agents, using AI in general, using code generation. So much is happening in and around AI and Sourcegraph continues to innovate again and again. From their editor assistant called Cody, to Code Search, to AI agents, to Batch Changes, they're really helping software teams to industrialize the process, the inner and the outer loop, of being a software developer on high performance teams with large codebases.
Transcript
What's up?
Welcome back.
This is the Changelog.
We feature the hackers, the leaders, and those who are helping software teams
achieve industrial economies of scale.
Yes, Beyang Liu, CTO and co-founder of Sourcegraph, is back on the pod.
And we're going deep on this idea of industrializing software development teams using AI agents, using AI in general, using code generation.
So much fun stuff is happening in and around AI and Sourcegraph continues to innovate again
and again.
From their editor assistant called Cody to code search to AI agents to batch changes.
They're really helping software teams
to industrialize the process,
the inner and the outer loop of being a software developer
on high performance teams with large code bases.
So if that's you,
or you aspire to be one of those developers,
well then this episode is for you.
A massive thank you to our friends
and our partners over at fly.io. Fly gives you the most
flexible and powerful compute platform on any public cloud. Over 3 million apps have launched
on Fly and we're one of them and you can too. Learn more at fly.io. Okay, let's industrialize.
Well, friends, I'm here with a good friend of mine, David Shue,
the founder and CEO of Retool.
So David, I know so many developers who use Retool to solve problems, but I'm curious, help me to understand the specific user,
the particular developer who is just loving Retool. Who's your ideal user?
Yeah, so for us, the ideal user of Retool is someone whose goal first and foremost
is to either deliver value to the business or to be effective. Where we
candidly have a little bit less success
is with people that are extremely opinionated
about their tools.
If for example, you're like,
hey, I need to go use WebAssembly.
And if I'm not using WebAssembly, I'm quitting my job.
You're probably not the best Retool user, honestly.
However, if you're like,
hey, I see problems in the business
and I wanna have an impact
and I wanna solve those problems,
Retool is right up your alley.
And the reason for that is Retool allows you to have an impact so quickly.
You could go from an idea, you go from a meeting like, hey, you know, this is an
app that we need, to literally having the app built in 30 minutes, which is super,
super impactful in the business.
So I think that's the kind of partnership or that's the kind of impact that we'd
like to see with our customers.
You know, from my perspective, my thought is that, well,
Retool is well known.
Retool is somewhat even saturated.
I know a lot of people who know Retool,
but you've said this before,
what makes you think that Retool is not that well known?
Retool today is really quite well known
amongst a certain crowd.
Like I think if you polled
engineers in San Francisco,
or engineers in Silicon Valley even,
I think you'd probably get like a 50, 60, 70% recognition of Retool. I think where you're less likely to
have heard of Retool is if you're a random developer at a random company in a random location like the
Midwest for example, or like a developer in Argentina for example, you're probably less
likely. And the reason is I think we have a lot of really strong word of mouth from a lot of Silicon Valley companies
like the Brexes, Coinbases, DoorDashes, Stripes,
et cetera, of the world.
There's a lot of chat, Airbnb is another customer,
Nvidia is another customer,
so there's a lot of chatter about Retool in the Valley.
But I think outside of the Valley,
I think we're not as well known.
And that's one goal of ours to go change that.
Well, friends, now you know what Retool is,
you know who they are, you're aware that Retool exists.
And if you're trying to solve problems for your company,
you're in a meeting as David mentioned,
and someone mentions something where a problem exists,
and you can easily go and solve that problem
in 30 minutes, an hour, or some margin of time
that is basically a nominal amount of time.
And you go and use Retool to solve that problem.
That's amazing.
Go to Retool.com and get started for free or book a demo.
It is too easy to use Retool and now you know.
So go and try it.
Once again, Retool.com.
Can we go back as far as you'd like to?
Because I kind of want to know how we got here.
I go to your homepage today, and I see industrializing software development AI agents.
And that was not at all the headline even a few years ago.
And I feel like, you know,
Sourcegraph has taken 15 years to become
not so much successful,
but like it's been like 15 years for you.
And I would say seven of those years
may have been like really hard years.
And I kind of want to just not let the people
who come to Sourcegraph today
think it's been the easy road for you.
Cause I kinda wanna know the
permutations of code intelligence to AI agents,
how did we get here for you?
Yeah.
Yeah, so first off, I think we're in year 12 right now.
So we've definitely been around-
I gave you three.
Yeah, yeah.
We're working our way up to 15.
I think we'll be here for the next 50 years.
So it's still early days.
Actually like it's funny that you should mention, you know, industrialized software with agents
and how that's kind of a shift.
Maybe I don't know if you have like show notes that you can link to, but I can link you to
a version of our seed deck back in like, you know, April, May 2013 when we were pitching the company for the first time.
It has this phrase, industrialized software engineering. That part of the mission has stayed
largely constant. The high level goal for us was really about basically making professional
software engineering as enjoyable and as efficient as hacking on
your side project is because that was really the motivator for us in starting this company.
It was the delta between every programmer starts from a point of love or delight at
some point.
Like the reason that you get into programming is there is this joy of creation, the spark
of creation that everyone experiences at some point, you know,
whether it's at hello world or when you first get your first working program to
run and it's cool and it does something and you share it with your friends.
I think
everyone who's working as a programmer is to some extent trying to chase that
original high, you know, that's
the dopamine rush that makes the job joyful.
And it also maps to doing useful stuff.
You get joy out of shipping features that impact your users' lives and actually get
used.
But then you contrast that with the day-to-day of a professional software developer, most
of whom are working in a large existing code base that you know, they didn't write themselves
that is the product of the contributions of hundreds or thousands or you know tens of thousands of
shared hands.
And that experience is very different, and what we wanted to do is solve a lot of the problems that create toil for professional software engineers in
large production code bases
and make it so that it's possible to focus on this like creative part of the job.
So the initial way that we solved that with, you know, the technology landscape at the time
was focusing on search because that to us was the thing that we spent a lot of our time working on.
You know, we got our career started out at Palantir, but Palantir by extension meant very large banks
and enterprise code bases because that's
the team that we were on.
And so the core problem there was just figuring out
what all the existing code does and figuring out
what the change you need to make,
how that fits into the broader picture,
and what sort of dependency or constraints
that you're working with.
And so that was the first thing that we brought to market.
AI was always sort of in the back of our minds.
Like I had done a concentration in machine learning
while I was at Stanford.
I was actually, Daphne Koller was my advisor
and had published some research there
in the domain of like computer vision.
In those days, like the state of the art models
weren't nearly as good.
This was, you know, pre-ChatGPT, pre-transformer,
pre even like deep neural net revolution.
So in those days, the conventional wisdom was neural nets worked in the 80s for limited
use cases, but they're mostly toy use cases and real machine learning engineers use support
vector machines and boosted decision trees and things like that.
So it's a different, very different world.
And we're always keeping our eye on the evolution
of this technology.
And we actually started integrating
current large language model embedding-based
signals into the search ranking, starting in early 2022.
And so that was something that we'd kind of
been playing around with.
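To make that concrete, here's a minimal sketch of what blending an embedding-based signal into a keyword search ranking can look like. It's illustrative only, not Sourcegraph's actual ranker; `embed` stands in for whatever embedding model you have on hand.

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def keyword_score(query, doc):
    # Crude lexical signal: fraction of query terms present in the document.
    terms = query.lower().split()
    return sum(t in doc.lower() for t in terms) / max(len(terms), 1)

def rank(query, docs, embed, alpha=0.7):
    # Blend the classic lexical score with the embedding-based signal.
    q = embed(query)
    scored = [(alpha * keyword_score(query, d) + (1 - alpha) * cosine(q, embed(d)), d)
              for d in docs]
    return [d for _, d in sorted(scored, key=lambda s: s[0], reverse=True)]
```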
And then once ChatGPT landed,
we were actually pretty well situated in terms of our understanding of the technology to be able to roll
that into our platform. We launched the first sort of context-aware chat, you know, chat
that pulls from the context of your broader code base and pages that into the context window and
uses that to steer the generated code or the question answering that the LLM provides for you.
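A minimal sketch of that "page context into the context window" step, assuming you already have ranked snippets like the above; the prompt shape and names here are illustrative, not Cody's actual implementation.

```python
def build_prompt(question, snippets, budget_chars=8000):
    # Pack the highest-ranked (path, code) snippets into the context window
    # until a rough size budget is hit, then append the user's question.
    parts, used = [], 0
    for path, code in snippets:
        block = f"File: {path}\n{code}\n"
        if used + len(block) > budget_chars:
            break
        parts.append(block)
        used += len(block)
    return ("You are answering a question about this code base.\n\n"
            + "\n".join(parts)
            + f"\nQuestion: {question}\nAnswer with reference to the files above.")
```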
And that helped launch us into the age of AI, so to speak,
because that was, I think now it's like table stakes, right?
Like context-aware code generation, everyone has that
because everyone realizes that that is absolutely essential
for making AI useful in the enterprise.
But we were first to market there
and that helped us establish our presence inside folks like Palo Alto Networks and Leidos,
which is a big government contractor, and Booking.com, the largest travel site in the
world.
All these very large enterprises with complex coding needs that have adopted Sourcegraph
and our AI coding assistant, Cody, as their preferred, essentially, platform for accelerating,
automating, industrializing software development.
When you say industrializing, what does that mean to you?
Like, I think I understand what that means.
What does it mean to industrialize
software engineering, like practically?
Yeah, that's a great question.
And like this word industrialize,
it's something that we've gone back and forth on
because it's special.
It sounds cool, I dig it.
I mean, I like it too. I think if you go back to
like 2013, I think a lot of people read sort of negative connotations
into it. You know, like, what are you trying to do? Like turn every development org into a feature
factory, you know, because like, that's what I think when I think of the industrial revolution.
These days is a little bit different because the whole world is talking about re-industrializing
with all the geopolitical shifts going on in the world, the macroeconomic shifts.
I think a lot of people are thinking about how can we build the gigafactories of the future
and restart that process, that tech tree exploration in the West.
And so now I think it carries a very different connotation. But what we meant by it and what we mean by it today is really it started with this
observation when you think about software engineering as an industry. Software engineering
is actually quite unlike every other industry in the world in one specific dimension, which is every other industry,
as you scale production, you get economies of scale, right?
Like you build a larger factory, you can produce the next incremental thing that you're building
at a cheaper cost or more efficiently.
So like things get better and cheaper as you scale up.
In software engineering, it's the exact opposite. In fact,
at best, productivity asymptotically approaches some upper bound. Worst case, and I think the
reality in a lot of organizations, productivity goes down after you scale past a certain point.
And as a consequence, every piece of software that becomes successful very quickly becomes a victim of its
own success. Because if your software gains users and gains revenue, it will also acquire feature
requests and bug reports. And as a consequence, you'll hire more developers, you'll increase the
number of contributors to the codebase, the codebase will grow over time, you'll lose that cohesiveness of vision.
And then it's like a frog in boiling water approach.
Like every incremental day you don't really notice it, but one day you wake up and the
codebase is a mess, there's all these constraints, it takes forever to get like a simple feature
across the finish line.
You wake up and now you can't get anything done.
And there's kind of like two consequences to this that are very salient.
One is that means that the vast majority of software that actually gets used today sucks
because any, again, like any successful piece of software with a lot of users becomes like
this.
So, almost by definition, like any piece of software that you're using, you're
using it because it was a successful piece of software.
But by virtue of its success, it can no longer move quickly.
The developers building it can no longer move quickly and build new things and keep it up to date.
The second consequence of this is that the disruption cycle actually occurs much more quickly in pure software than in any other industry.
Like every other industry,
we talk about the innovator's dilemma.
It's like, okay, yeah, I get it.
Like every once in a generation,
something comes down the line that disrupts,
like shakes the foundation of this very industry.
And then you get a whole new generation
of companies that emerge.
In software, it's like literally like every five to 10 years
that happens naturally because whatever,
like you can literally see like in the past,
like five or six years alone,
we've gone through multiple generations
of like accounting software for like small businesses, right?
Like every couple of years is like a new YC company
that basically does the same thing,
but it's like, oh, we're gonna do it better
because the old thing sucks now.
And so like this disruption cycle, there's something more than the innovator's dilemma
at play in software disruption. It's this kind of like phenomenon of as the code base grows,
due to its success, it also falls victim to the very forces that made it successful,
because you're going to grow the number of lines of code,
you're gonna add more functionality,
you're gonna have a hard time preserving the cleanliness
and sane architecture of the original code base.
And then at some point, you can no longer make forward
progress, and then some two-person startup comes
down the line and like does it a lot better
and then eats your lunch.
Well, they eat your lunch for a bit and then you acquire them, or they totally eat your lunch and they destroy you, right?
Yeah, yeah, exactly. That's sometimes what happens too.
Exactly. So that's what we mean by industrialized software engineering. I think if we can
tackle enough of the toil and also give your senior technical leadership a lever to project their vision for how the
architecture should evolve.
We can actually get software as an industry to be a real industry, like actually have
economies of scale where things can actually get more efficient.
Imagine that.
It's like mind blowing.
Like, oh, things actually get more efficient as the code base grows.
I think that's just like very antithetical to the way people think right now.
But I think it's certainly within the realm of possibility given modern AI in conjunction
with all the other code base understanding technologies that we've built over the years.
So given your history with machine learning, way back before it was called artificial intelligence or AI.
And you're here now.
You've iterated, you've sort of stuck to your guns, so to speak, of industrializing or solving
this complex code base problem from the beginning with search, now to agents with AI.
How did you know? What tea leaves did you read?
Or were you just super passionate
because of your time at Palantir?
Because they obviously had complex code bases
and you were like, well this sucks.
I'll go solve this problem.
I'll just keep solving it over and over and over
until we reach critical mass.
Yeah, I would say, I'm not gonna say that I have the ability to kind of like see the future.
I'm definitely not going to claim that I could see how the landscape of AI would unfold from the vantage point of being in 2013.
There's certainly a lot here that is frankly just, you know, stuff I did not anticipate.
It was 12 years ago. 12 years ago was a lifetime, and the technology was very immature and different back then.
But what I did know, I think Quinn and I both felt deeply
was number one, we're both really, really passionate
about programming and building software.
That seemed to us like the biggest leverage point. And it was really a
combination of like, hey, this is a craft that I care deeply about. And I could see myself spending
the rest of my life refining it and moving it forward, because it almost rises
to the level of being something spiritually fulfilling. You know, if you want to get
philosophical, there's this whole 'it from bit' philosophy: what does it
mean to exist, how does information relate to physical being, and
all that. And you can get into the whole discussion of Turing
completeness, what does it mean, how does it relate to our theory of mind and the
soul. Anyways, there's kind of like a...
Do you have thoughts on that?
You want to share?
We can get into it... I don't want to go down that rabbit hole just yet. But like,
the point I want to get across here is that like we both feel that this is a really cool
thing to be working on. And that's one of the reasons that we chose this path
as the professional path that we want to pursue. The flip side of that was our experience at Palantir showed us that the pain that we felt
in trying to practice this craft in a professional setting is felt across the entire industry
and gets exponentially more critical the more impactful your software is.
Because the more impact your software is driving,
the more users and revenue it has or is generating,
and the messier the code base is.
And so it was kind of like the marriage of these two things.
It was like personally fulfilling to be thinking about
like advancing the state of this craft
that you feel a deep personal connection to
and knowing that it's tied to just like this huge amount
of potential economic impact.
Our thinking was: with the technology that we have today, if we focus on the fundamentals, building core technical capabilities
and then rolling those into delightful product experiences, there will be
a huge company here that we could spend the rest of our lives working on.
Fun, I think it's fun, man, wow.
To be, to have seen not the future,
because you said you're not a clairvoyant,
you're not a future teller.
But you could certainly see the direction it's going,
and you said, I'm willing to keep doing this problem,
keep solving this problem for the rest of my career.
Yeah.
Wow, okay.
And now we're at the age of literally AI agents.
We can essentially deploy these agents
similar to the way you would deploy a Cron job,
but now they're smart.
They're so much smarter now,
not just a dumb Cron job that requires logging
and metrics and observability and failures
and fall overs and system crashes.
Now you've got agents that can actually do some cool stuff.
You began with search, now you're here.
What do you think it is that is attracting folks
to the Sourcegraph platform today?
Is it the promise of what AI can do deployed at scale?
I think there's a couple different dimensions here.
I think the most direct thing that attracts people is that we are solving for the challenges that
developers face inside production code bases. So, you know, contrary to a lot of the other
kind of like AI coding things that you see, our focus has less been on like the indie developer who wants to show off, you know,
like a cool Flappy Bird clone on Twitter, which to be frank, like it's really, really
cool.
It's really cool that you can like, you know, one shot that now.
But our mission, the thing that gets us out of bed every morning,
is really to enable the software
that drives the world to be built much more quickly
and much more robustly.
And so that's the thing that we're targeting.
And that's the reason why the customers
that have chosen Sourcegraph have chosen us
because they have large complex code bases
and they appreciate the other technologies
that we bring to the table that help us be effective in that setting.
And that brings me to this kind of like second dimension,
which is, you know, our technical strategy here
is transformers, LLMs, AI,
that's all really useful, it's game-changing technology,
but we think that there is a step function,
more capability and power you can unlock by combining that technology with some of the other technologies that we built around information retrieval, code search, mapping out the reference
graph and the knowledge graph of the code. We think these technologies are highly complementary.
In some sense, they represent the two sides of
what traditionally has been called artificial intelligence. Before the current incarnation of what AI means,
there was this classic debate between the formal methods, context-free grammar,
Chomsky kind of party of artificial intelligence, where everything's rules-based.
There's some philosophy behind it that, you know, these
grammars emerged from the mind of maybe some higher-order being, and our goal is to
discover the platonic ideal, the hidden structures that connect all these things,
you know, very symbolic-based. And then on the other end is this statistical approach, which is like, we're not going to
be opinionated about like defining down a complete set of axioms or rules for describing
what language is.
We're just going to try to model it statistically and learn it from the data.
I think, you know, in the age we're living in, everyone sees clearly
the limitations of the Chomsky approach, and it's very apparent to everyone the advantages of the data-driven approach now
with modern technology. But I think they're still actually highly complementary. There's a bunch
of productivity-oriented use cases that essentially combine a knowledge graph or some sort of symbolic
representation with generative models.
So you have the generative model providing
some notion of reasoning, but you combine that
with a symbolic representation to be very precise about how
things are connected.
And the composition of those two
yields way more useful use cases than if you just
use one or the other.
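One hypothetical way to picture that composition: the symbolic side (a reference graph) answers the precise "what is connected to what" question, and only that grounded slice gets handed to the generative model. Every name below is made up for illustration.

```python
from collections import deque

def blast_radius(symbol, reference_graph, depth=2):
    # reference_graph maps a symbol to the symbols that reference it.
    # A breadth-first walk gives the exact set of affected symbols, with
    # none of the guesswork an LLM alone would apply.
    seen, queue = {symbol}, deque([(symbol, 0)])
    while queue:
        current, d = queue.popleft()
        if d == depth:
            continue
        for ref in reference_graph.get(current, []):
            if ref not in seen:
                seen.add(ref)
                queue.append((ref, d + 1))
    return seen - {symbol}

graph = {"parse_config": ["load_app", "run_tests"], "load_app": ["main"]}
# The generative model then reasons over this precise, symbolically
# derived context instead of guessing what might be affected.
print(blast_radius("parse_config", graph))  # {'load_app', 'run_tests', 'main'}
```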
Maybe for context, if you can, paint me a picture of a day in the life of an engineer working at one of these large-scale enterprises that has this complex code base issue, where you're removing their toil. What are the things that they have access to as a developer in this kind of ecosystem?
Let's say Palo Alto Networks, for example,
you mentioned them, I know they're a logo on your homepage,
so maybe you can use them.
Pick anybody, I don't care if it's them or not.
Somebody, give me an example of a developer or an engineer
on a team like that, what kind of tooling do you give them
or do they have access to when they
buy Sourcegraph, essentially?
Yeah.
So the equivalent of day in the life for an engineer
is, what does it take to take a feature from idea all the way
to getting into production?
It's one whole loop of the software development life
cycle, which we kind of model as this two loops that
are connected.
There's kind of like an inner loop,
which is your read-eval-print
loop in your editor, where you're kind of quickly
iterating on the idea and making it come together.
And then there's the outer loop, which is sort of like the ops
loop, right?
You plan out the feature.
You actually write it.
You push it up.
It gets reviewed.
It needs to pass CI/CD.
It gets rolled out.
And then after it gets deployed, you got to monitor
and observe it to deal with any kind of like production bugs
or things that emerge.
So like day in the life of like an enterprise developer
or a developer working on in the context
of a very large existing code base,
it's kind of like you have an idea for a feature.
What's the first thing you have to do?
Well, the first thing you have to do is you have to go
and acquire a bunch of context about the existing code base. So, you know, in the pre-Sourcegraph
world, what does that entail? It's a bunch of grepping for stuff in your
existing repository. Maybe you got to clone down a couple of additional repositories.
If you're very intentional about it, sometimes people just like don't bother because they're
like, ah, it's too much work.
I will just assume that this thing that I'm building doesn't exist yet.
So let me just go build it.
Sometimes it involves pinging your colleagues for help.
So like if you're onboarding to a new part of the code base or you're like a new engineer,
you're going to like go bug the senior engineer who was around when that code was written,
who's very short on
time and kind of crotchety because there's like five other people like you who have asked
them a very similar question in the past like two weeks.
So like that is a whole kind of like tooth pulling exercise.
And that can take you, I don't know, like weeks, even months, like in some of the code
bases that we worked in while we were at Palantir, it would literally take like months just to orient on like, hey,
this is the part of the code that actually needs modifying.
And you know, there's a bunch of false starts along the way, because you start writing something
only to realize like, oh, you should be modifying this other piece over there.
Okay, so like, that's just getting the point where you're ready to start writing the code,
right?
Once you start writing the code, you're in your editor, most of the code you need to write is probably boilerplate
or what we call boilerplate++. It's not exactly like something you can copy and paste, but
you basically want to find an existing example of using that one particular API that you
need to use. And then you'd kind of turn your brain half off and just pattern match against
that example, because that's the reference point.
So provided you found that example,
then you can pattern match against it.
If you don't find that example, then it's kind of this arduous,
like, hey, let me discover how to properly use this API,
because there's no docs on how to use it.
And someone's probably done this a dozen times before,
but I've never done it before. So now I have to essentially go and rediscover how to do it. So
you go through that exercise, there's multiple rounds of kind of iterating on
it, finally you get to the point where, okay, I'm ready to push it up. You push it
up for review. Maybe there's multiple stakeholders that have to sign off on the change, right? Like there's the people you work directly with, maybe that's part of your code review process.
Maybe you also made some modifications to some other shared libraries,
and now other teams that own that code have to also sign off on it.
A lot of times those other teams don't necessarily care too much about the change that you're trying to land,
because they have their own priorities
and they're incentivized to work on those
and not anything else that happens in the company.
So, who knows how long it'll take to get all the stakeholders
to sign off on that change.
Or maybe you realize through the review that you get
from like a senior engineer that this whole approach
was wrong because you didn't do enough search and discovery.
So it's almost like this game of chutes and ladders, like you have the potential to slide
all the way back to scratch.
Like you just wasted, you know, a week, a month's worth of work because you built the
wrong thing.
Best case, it takes a lot of time getting people to approve
it.
You finally get it approved.
It rolls out to production.
Maybe it breaks something.
Maybe there's like a test that breaks
that you didn't catch locally and so on and so forth.
Once it gets into production,
there's a long tail of issues that could arise too
in terms of like, it triggered some bug
or maybe there was some like security vulnerability
that got introduced.
Anyways, I don't have to sketch this out.
It's already painful just describing this.
I'm wanting to quit right now.
I'm like, gosh, save me.
Yeah, and this is just like a simple feature, right? Like it could be the simplest thing,
like center this div or add support for this authentication protocol or whatever.
And most of your time is spent not on thinking about the end user problem
and mapping that to the algorithms and data structures
that you can use.
Like that's the fun part of the job.
Like if I could do that all day, every day,
I would do that all day, every day.
But most of it is spent on like box checking,
writing boilerplate, finding answers to questions
that you know have already been answered many times before.
And so Sourcegraph
targets all the kind of critical pain points
and slowdowns in that process.
So in the planning and search and discovery phase, that's where people use a lot of our
code search functionality. Or now we have a deep-research-for-your-code-base-like feature
that's built into the web application where you can ask our AI a question about the repository.
It will go and do a bunch of searches and file readings on your behalf
and print out like what is usually a pretty reasonable summary
of the relevant files to read
and how those fit to the tasks that you're trying
to accomplish.
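In spirit, that feature is an agentic tool loop. Here's a toy version, where `llm` and the `tools` (search, file reading) are stand-ins rather than Sourcegraph's actual APIs.

```python
def deep_research(question, llm, tools, max_steps=10):
    # Let the model alternate between tool calls and a final answer.
    # llm is assumed to return a dict like {"tool": "search", "arg": "..."}
    # or {"tool": "finish", "arg": "<summary>"}.
    transcript = [f"Question about the repository: {question}"]
    for _ in range(max_steps):
        action = llm("\n".join(transcript))
        if action["tool"] == "finish":
            return action["arg"]  # summary of relevant files and how they fit
        result = tools[action["tool"]](action["arg"])
        transcript.append(f"{action['tool']}({action['arg']}) -> {result}")
    return "\n".join(transcript)  # fall back to the raw findings
```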
So automatically that shortcuts you
from having to wait on all these conversations
with humans that, let's be honest,
developers as a collective group
are not the most personable or affable people.
I mean, many of us are, but speaking in terms of averages,
we tend towards introversion,
just spending time with our computers.
That's why we're doing what we do, right?
Plus, even the ones that wanna help, if you've answered the same
question from like three or four different people in the past week, you're at the point
where you're like, I just want to do my freaking work. I don't want to have to deal with any
of these other like things, even if they're like reasonable requests. So like we can shortcut
that process and just help you figure that out yourself through a combination of search
and question answering. Then within the editor, there's all this stuff
that reduces to boilerplate++, which
is in the sweet spot for what AI can do.
So you tag in the relevant file, or you
ask it to do a search for the relevant code
snippets on your behalf, and then ask it to generate code.
It generates the code.
You can refine that in the chat window, apply it into the file.
And it adds value in a couple different ways.
The first is like the raw time saved because you don't have to manually type out all the
characters for that, you know, boilerplate plus plus.
The second thing we've noticed is that it kind of changes the psychology a little bit
because it keeps you in flow state more.
Like if you don't have to page out all the working state in your brain, because now you need to go down this rabbit hole and like complete this side quest
to get your feature built, if Cody can essentially do that for you, or even get you like 90% of the
way there, you essentially don't have to page out the context for doing the main thing. And as a
consequence, you can kind of remain in flow state for much longer.
You don't have this like context switching that is very destructive. And so like that is
that is where we probably see like an insane amount of productivity boost. I think
the number that Palo Alto quoted at us, using their own kind of internal metric or proxy
for developer velocity, was, depending on which
number you look at specifically, between 40 and 57%, which is a crazy speedup. That's like
a game changing speed up. And it turns out like it just makes the job more fun too, right?
Because like you don't want to spend all your hours writing boilerplate or pattern matching
against stuff that's already written. You want to get to the fun stuff
of the job, which is really think about the user experience
and then the underlying algorithms and data structures.
Once you push things up to review,
what we built in collaboration with a couple of our customers
and is now in early access, and probably by the time
this is released, we'll have open access for it,
is a review agent.
So this is code review that kind of understands how your organization does code review.
And so like the idea here is, you know, one, we want to automate the process of code review
because there's a bunch of comments that humans don't think to leave or just, you know, they
don't get around to leaving.
There's the old adage, like if you push up a 10 line
pull request, you get 10 comments.
But if you push up like a 10,000 line PR,
it'll just get a rubber stamp, like looks good to me.
Cause no human-
Too much, yeah.
Yeah, it's too much.
It's like, what else am I supposed to do today?
Other than review this PR?
I got a life to live, I got work to do.
Well, like an AI can just sit there and spend the time to surface like a bunch of common things
that you as a human, you know, it's probably like a waste of your time to check. Like all this stuff
is automatable now with modern LLMs. So we want to do that.
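A first pass at that kind of review agent can be surprisingly small. This is a sketch, not Sourcegraph's actual agent: `llm` is any chat-completion callable, and the prompt shape is illustrative.

```python
import subprocess

def review_change(llm, base="main"):
    # Gather the diff the way a human reviewer would see it.
    diff = subprocess.run(
        ["git", "diff", base, "--unified=3"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Ask the model for the routine first-pass comments that humans
    # often don't get around to leaving on large changes.
    prompt = ("You are reviewing this change for a team. Leave specific, "
              "line-referenced comments on bugs, missing tests, and style "
              "issues. Diff:\n" + diff)
    return llm(prompt)
```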
There's also kind of like a psychological component here, where we don't see our code
review agent as being a complete replacement for human review yet, but it's at the point
where if it leaves at least a first pass of comments, number one, it allows you as
the author of the change to kind of iterate quickly against that.
And also it incentivizes the reviewer to go in and do a more thorough review too, because the comments kind of reflect
like a summary of the change itself.
And just having like a comment exist on the PR,
it just makes the task seem like already halfway done.
And so you-
Momentum.
Yeah, momentum.
It just, it leads people to like quicken the review cycle too.
Psychology, baby.
Yeah. I love that.
Yeah. It's the combination of like,
if you introduce efficiencies,
you also get these like nice second order psychological
effects that further speed up the development cycle.
Right, especially after a few cycles of this,
you begin to gain some trust for the system, let's just say.
I don't wanna anthropomorphize this thing,
but we tend to, sometimes we even say please or thank you to
our agents, you know, which is cool. Whatever, you know, you do your thing. Just wasting tokens, basically. Yeah.
But you begin to trust it so it's like well, I at least know I've got to start somewhere
And so this analysis of the code review
Makes me feel like even if I only had to do 10%,
it's better than 100%.
Because now I can actually do my job today,
I can do the two meetings I have planned,
and actually potentially get a review in
to potentially get this in production.
And that's a great thing.
Totally.
It's almost like a smoother on-ramp.
There's like an activation threshold
you have to get beyond to get started on a particular task,
like in this case, code review.
And without AI, it's like, oh, I'm dreading this
because it's like, it's gonna,
I need to get over that threshold.
Now we can do the first like one or two steps
and then that kind of like leads you into,
it's like the same thing as like, you know,
how some writers, they say making a mark on the page,
writing anything. Yeah. Anything, like a very bad first draft.
That's just like, God awful.
Or like even artists just like drawing a squiggle on the paper.
It kind of unblocks you.
It gets you past this like blank slate mental block that I think a lot of people
suffer from, especially if it's a task that you associate with toil, which I think a lot of people do. It's like, I can't even do this.
Yeah, I've been there. I've definitely had to do some code reviews where I'm just like,
yeah, rubber stamp that thing, looks good. Let's just test in production, right?
It's like you wrote a lot of code.
I think you probably understand this, you know,
much better than I do.
Let's just, you know, we'll say it looks good
and then we can both get on with our lives.
I think that some version of that conversation happens
all the time inside enterprises.
And, you know, it happens out of necessity,
but it's not good that it happens, right?
Cause like you have reviewing standards for a reason, and that's how bugs and security vulnerabilities
and bad architecture and loss of coherency happen.
Augment Code. Augment is the first AI coding assistant that is built for professional software engineers
and large code bases.
That means context aware, not novice,
but senior level engineering abilities.
Scott, who are you working with?
Who's getting real value from using Augment Code?
So we've had the opportunity to go into hundreds
of customers over the course of the past year
and show them how much more AI could do for them.
Companies like Lemonade, companies like Kodem, companies like
Lineage and Webflow, all of these companies have complex code bases. By taking Kodem, for example,
they help their customers modernize their e-commerce infrastructure. They're showing up
and having to digest code they've never seen before in order to go through and make these
essential changes to it. We cut their migration time in half because they're able to much more rapidly ramp,
find the areas of the code base,
the customer code base that they need to perfect
and update in order to take advantage of their new features.
And that work gets done dramatically more quickly
and predictably as a result.
Okay, that sounds like not novice, right?
Sounds like senior level engineering abilities.
Sounds like serious coding ability required
from this type of AI to be that effective.
100%.
You know, these large code bases,
when you've got tens of millions of lines in a code base,
you're not gonna pass that along as context to a model, right?
That would be so horrifically inefficient.
Being able to mine the correct subsets of that code base
in order to deliver AI insight
to help tackle the problems at hand.
How much better can we make software?
How much wealth can we release and productivity can we improve
if we can deliver on the promise of all these feature gaps and tech debt?
AIs love to add code into existing software.
Our dream is an AI that wants to delete code,
make the software more reliable rather than bigger.
I think we can improve software quality,
liberate ourselves from tech debt and security gaps
and software being hacked and software being fragile and brittle.
But there's a huge opportunity to make software dramatically better.
But it's going to take an AI that understands your software,
not one that's
a novice.
Well, friends, Augment taps into your team's collective knowledge, your code base, your
documentation, dependencies, the full context.
You don't have to prompt it with context.
It just knows.
Ask it the unknown unknowns and be surprised.
It is the most context aware developer AI that you can even tap into today.
So you won't just write code faster.
You'll build smarter.
It is truly an ask me anything for your code.
It's your deep thinking buddy.
It is your stay in flow antidote.
And the first step is to go to augmentcode.com.
That's A-U-G-M-E-N-T-C-O-D-E dot com.
Create your account today.
Start your free 30 day trial.
No credit card required.
Once again, augmentcode.com.
So is that where... I think we're here
because we talked about the day in the life of.
Where do we go after code review?
Like you're painting the landscape of Sourcegraph,
what an engineer may have access to
to reduce or remove some of the toil in the process.
Yep.
So broadly speaking, I think where the day in the life
of a developer moves to is,
yeah, I talked about inner loop and outer loop before.
Our vision for these two loops is essentially like, accelerate the inner loop. Like we should provide facilities that automate boilerplate
and toil inside the inner loop such that it feels like we're building leverage around
human creativity. Because that is still the thing that is like, quote unquote, like out
of distribution of the models and something that is not like yet replicable
with today's LLMs.
And I don't see kind of like a clear line of sight
to actually like replicating that.
I think that is truly the essential part of the job.
That is the fun part of the job.
It's like thinking of a new idea,
connecting that with the user need,
and then finding just like the ideal best data structures
and algorithms that fit
within your existing architecture to solve that.
That is the essence of the job, and that's
the thing that we want to enable people to spend 99%
of their time on, as opposed to 1% of their time on.
So we're accelerating the inner loop.
And then the outer loop, I think,
is we want to automate as much of that as possible.
Because everything in the outer loop is kind of like a factory.
You spend all this time refining the creative aspects
in the inner loop.
And now the outer loop is just like,
how do I get this thing that I have produced in my developer
environment into production?
So the whole process of doing the review and refining it,
making sure there's adequate test coverage,
like doing an automatic rollback, adding the necessary telemetry.
In an ideal world, that is all within the realm of automation. It's kind of like we built this
gigafactory in a way where if you've got it working in your development environment, everything else to get it into production should be like pattern matchable by the LLM.
So in the future, I
think the future that we want to work towards is, it doesn't matter what size code base
you're working in; it could be the smallest thing or it could be the largest, oldest
code base in the world. You express your intent, we accelerate you getting to the experience
that you want in your development environment. And then thereafter, like the machine just takes your change
and lands it in production,
guaranteeing security, reliability and robustness.
Love those buzzwords, love those buzzwords.
I'm being serious.
I'm seeing this where you're going.
I was just half joking there.
That was some buzzword city there though.
I like that. Yeah, yeah.
Robustness.
Well, that's what you want, right?
Like it's-
You do want those things.
Like our tagline is like, move fast, but don't break.
You know?
Like-
Okay, sure.
I like that too.
And that is now possible.
Like with today's technology,
it is possible to enable like extremely fast
development cycle
that augments human creativity, but the things that could break your system, like a security
vulnerability or too much technical debt or a bug that wasn't caught because you didn't
think to add the appropriate test, all these things are in the realm of automation now.
Like they are automatable.
We just gotta go and automate them.
You know what, I'm all for that, because there's
scenarios where developers will wanna deploy code
and forget, oh, I didn't write or consider this one
security thing that I should know about.
But because I'm a human, I got limited,
even emotional, willpower,
which is a finite resource for any human being.
Right, we all have brains, and those brains
give us a certain amount of power.
Regardless of how smart you are,
you could have not had your Snickers that day,
so maybe you're Joe Pesci for a moment or two,
and you're just not, you're not thinking straight.
Or it's four o'clock and you're trying to get this thing over the line because of time, you know,
and you don't write the test or you forget about a critical thing. We should have
automations and checks and balances in our flow, from the REPL, what you're talking about,
yeah, this inner loop, which says, okay, now it's out of my safe world, it's out of my safe idea world, my innovation world,
I've tested, I've done all the things I can.
Now it goes out to the outer loop,
which is that's where automation should live
because you wanna confirm is it a threat
or is it an ally, right?
Is this thing that has been done a threat or an ally?
And if it's an ally, to prod.
If it's a threat, let's check those threats.
Yep, that's exactly right.
And I think this willpower is a finite resource.
That is a really good way of phrasing it.
I think I'm gonna steal that.
We've seen it play out not just on the timeline
of like a day in the life,
but like we've seen it play out on the span
of like months and years.
So to give an example, we have a customer
where we partnered with them to build a code migration
agent that was targeting a project to retire all
these dead feature flags that they had strewn about the code
base.
Feature flags, you add them to do some experiment.
And then once the experiment is over, like you decide,
okay, we're gonna either turn it to a hundred
or turn it to zero, but then like no one is really
incentivized to like remove the feature flag
from the source code.
So over time, you just, these things kind of like grow
linearly and they add to the complexity of the code base
because no one is quite sure just looking at the source,
like whether it's fully turned on or not.
And there's the question of which one of these branches actually gets executed, depending on whether it's fully turned on or not.
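Concretely, retiring a dead flag is a mechanical but risky rewrite. A toy before/after, assuming the experiment was decided in favor of the new path; all function names are hypothetical.

```python
def new_checkout_flow(cart):     # stand-in for the winning implementation
    return f"new({cart})"

def legacy_checkout_flow(cart):  # stand-in for the dead branch
    return f"legacy({cart})"

# Before: every reader has to know "new_checkout" shipped at 100% long ago.
def checkout(cart, flags):
    if flags.get("new_checkout"):
        return new_checkout_flow(cart)
    return legacy_checkout_flow(cart)

# After the migration agent retires the flag: the flag lookup and the dead
# branch are gone, and the control flow is legible again.
def checkout_migrated(cart):
    return new_checkout_flow(cart)
```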
And literally, this is the sort of thing where like,
the only people with the context to lead this conversation are the very most senior engineers in your organization
because they are the only ones who know the entire code base.
They know all the nuances of how these feature flags can break
if you remove them improperly.
And they're the only ones you trust to initiate
like a migration that literally touches every single
repository in your code base.
So the joke is, like, you ever read, like, Kerouac?
I saw the best and brightest minds of my generation.
The developer equivalent is like,
I saw the best and brightest developer minds
of my generation go insane because they had to spend
10 years of their lives working on a dead feature flag removal.
Instead of anything like interesting or fun,
like building new features for your users or customers.
Something that really matters basically.
Yeah, exactly.
And so when we ran that project with them,
it's still ongoing, so it's not through to completion yet,
but we're basically able to say like 80%
of these feature flag sites,
you can automate within the next month or so. So it's kind of like, you know, in sitting down and figuring
out how to compose, build an agent to do this migration, we've gotten
you within the realm of retiring 80% of these within essentially a month.
And then, you know, the remaining 20%, they're
a little bit trickier, and those probably involve, you know, some amount of human in
the loop. But already, we've gotten 80% of the way there within a month; this
thing is not going to take 10 years any longer. Right. And just think of it,
that's a good chunk of your life, right?
For multiple people, for multiple like really, really smart people who can now like go and
focus on other things that actually move the needle for the business and for the user experience.
Yeah, that really is the best use case for automation, or intelligent automation, because one thing I like about the idea of using
today's newest frontier models, their generation or reasoning, on problems like that is that
they can run scenarios.
Yup.
Like it could be like, well, you know, if it's sophisticated enough, it can say, run the scenarios.
If these five flags were removed, what would happen?
Because it has awareness of the database, and awareness of the scalability of it.
You know, what the database actually is, or what the language might be, or how it's
sharded or not sharded, or whatever the scenario is for the application.
It knows that, and it can to some degree quantify, or
try to quantify, some of that reasoning, so that when you get to the PR, you're
mostly there.
Instead of having to go 100% of the way, maybe only go 50% or in this case 80%.
You've gotten 80% done and you only have to deal with 20%.
So hey smart fellows, smart folks, go and tackle this problem
that you only have to do 20% of the work versus 100%.
I would be so much happier, you know,
if I were that intelligent person
having to deal with that issue,
because it's important,
but not so important that you should spend 10 years
of your life doing it.
Yeah, and the remaining 20% is like the fun 20%.
Like that's the interesting problems,
the ones where you can actually like
wear your computer science hat and you know,
God forbid actually make use of some of the things
that you learned in college, right?
Yeah.
Okay, so that's one more layer of the day in the life.
This is a long day in the life, wow.
Code migration, code review, what's left?
What's left in this day in the life?
This outer loop that you're gonna automate, is there more?
You know, I think we've covered most of it.
There's like specific pieces that we can get into,
but like broadly speaking, it's accelerate the inner loop
through an amazing in-editor AI experience.
Help automate the boilerplate, help automate the grunt work of doing simple things.
And then in the outer loop,
fully automate as much as we can.
So I think the last remaining piece here
would probably be some sort of automated remediation
of production incidents that happen.
And I think there-
So you wanna automate incidents now too?
Yeah, I think like the entire software development
lifecycle, we essentially want to be the platform
that performs all of that.
Like we're essentially building like an agent
construction mechanism into Sourcegraph,
where like the code review agent is just kind of like
patient zero for that.
Like there'll be a first-party agent
that is customizable and composable
so that people can tune it
to the specific needs of their enterprise.
But that's just the beginning of the long and fat tail
of toil that exists inside enterprises.
So like the common points here will probably be things
like code review,
test generation, issue and incident remediation.
Those probably are the most salient things for us.
But then inside every org, there's
kind of like a long tail of very specific things
that you need to do for that organization.
Like if you're a big bank, you might have compliance
requirements or checks for proper handling of PII.
If you're in healthcare, there's a lot you have to do around data privacy and making sure that
that doesn't get leaked to a part of the system. Every industry has its own set of things that they
need to enforce in this lifecycle. And so the way we've approached this automation problem is we're not trying to build like
a one size fits all thing that just does like code review.
What we really want to build is like an agent authoring platform to enable our customers
to go build the automations they need to build to address the toil that exists in their organization.
So like, they still write the core logic on top of it,
where the runtime we provide has the common Lego bricks and building blocks:
a common inference point, a common context fetcher,
a common way of composing these agentic chains.
And then their developers are essentially able to go and, with very low friction, assemble these blocks into the things that target the toil they experience.
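A sketch of what those "common Lego bricks" could look like, with entirely hypothetical names: a couple of platform-provided steps plus one custom step, chained into an agent.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    name: str
    run: Callable[[str], str]  # each brick maps working state to new state

def compose(steps: List[Step]) -> Callable[[str], str]:
    # Chain the bricks so a team only writes the steps unique to its toil.
    def agent(task: str) -> str:
        state = task
        for step in steps:
            state = step.run(state)
        return state
    return agent

# A bank's team combines platform bricks with its own PII check.
pii_agent = compose([
    Step("fetch_context", lambda s: s + " | relevant files attached"),
    Step("check_pii", lambda s: s + " | PII handling verified"),
    Step("infer", lambda s: s + " | summary drafted"),
])
print(pii_agent("audit payment service"))
```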
I love that. I kind of joked earlier about, you know, smart cron jobs. You feel that's kind of what this is? I mean, maybe at some level there's some things you do with cron jobs where you script something and you automate it
based on certain timing and it runs and then you observe it
and did it fail or did it pass?
And there's logs and there's obviously the effects
of the change that you construct.
Is there a day where we will no longer write simple,
let's just call them dumb cron jobs
and we'll write smart ones that are very agent-like?
I think there will be.
And I think the point that we want to get to, the way we've constructed our
code review agent, is to make it so that,
for the most part, you don't have to think about the imperative logic
of the review agent itself.
What you can do with source graph is you can define these rules that should hold in a certain
part of your code base.
So a rule is just a declaration of some invariant that should hold across a certain set of files
in your code base.
So maybe it's something like, it could be as simple as like, hey, if you are writing the name of your company, make sure
to camel case appropriately, because a common misspelling is people forget to camel case,
and that's not good.
Or it could be much more complex than that.
Hey, here's a complete style guide for writing code in Go.
We've like copy and pasted the open source Google style guide
because that's treated as canonical.
Make sure all the Go code in the organization
follows this style guide to the best of your ability.
So almost every invariant that you want to describe
is describable in a statement like that.
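So a rule is data, not code. A hypothetical shape for such declarations follows; Sourcegraph's actual format may differ.

```python
# Each rule pairs an invariant with the set of files it should hold across.
rules = [
    {
        "applies_to": "**/*.md",
        "invariant": "The company name is always written as 'AcmeCorp' (camel case).",
    },
    {
        "applies_to": "**/*.go",
        "invariant": "All Go code follows the canonical style guide.",
        "reference": "https://google.github.io/styleguide/go/",
    },
]
```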
And we want to give
people the power to define these rules in one place and then have them be enforced across
their code base. And as for the mechanism of enforcement, there'll be at least three places
where this is enforced. One is inside the editor. So like make the rules part of the context that guides the generation of code inside
the editor so that you feel confident that if most of your developers are vibing their code into
existence, that vibed code actually follows your organizational standards. So you're not going to
have to rework 90% of it because it's using some framework that
you don't want to use.
You want to use this other framework or whatnot.
The second layer is code review.
At review time, if anything falls through the cracks or if someone wrote that code manually,
enforce these rules to make sure that nothing gets committed that causes any of these rules
to break.
Then the last thing is what you're describing with the cron jobs. Like you should just have a multitude of background agents in flight at any given point in time,
just constantly crawling your code base to ensure that these rules hold. And if they don't,
if there's ever a point where they don't hold, push up a PR and then automate the review of that PR
so that a human can come in and be like,
okay, yep, yep, yep, you checked the right things. This looks good. And like you've added the appropriate unit test to make sure that nothing breaks.
Looks good to me. That's how you ensure that those invariants hold. I think the combination of these three methods of enforcement will allow your technical leadership, the people who carry the
architectural vision of the code base, to really enforce consistency and coherency across
the entire code base.
Not just catching the typo things, but you can actually describe architectural constraints.
If it talks to this component, it should go through this interface because that's the
intention of the architect.
It's very important that we maintain this interface boundary because that allows us
to keep the code clean and changeable moving forward.
That's the sort of thing where without this system, without AI, it would have been like
a losing battle because you only have a certain number of senior developer hours in the day.
At some point, some PR is going to sneak past the reviewer, because they've got other stuff
to do.
Then at that point, you've broken the invariant and then things slowly decay over time.
That, to us, to get back to the industrialization point, that's a linchpin of being able to
make coding
a truly industrial craft.
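As a rough sketch of how that background layer could hang together, here's the "smart cron job" idea reduced to a loop. Every helper below is a hypothetical stand-in for rule checking, PR creation, and automated review; none of this is a real Sourcegraph API:

```typescript
// Hypothetical stand-ins; a real system would combine code search,
// an LLM, and the code host's API.
interface Rule { id: string; appliesTo: string[]; invariant: string }
interface Violation { file: string; detail: string }
interface PullRequest { id: number }

async function checkRule(rule: Rule): Promise<Violation[]> {
  return []; // pretend the code base is clean for this sketch
}

async function openFixPullRequest(rule: Rule, violations: Violation[]): Promise<PullRequest> {
  return { id: 42 }; // push a proposed fix up as a PR
}

async function autoReview(pr: PullRequest, rule: Rule): Promise<void> {
  // run the review agent so a human only does the final approval
}

async function enforce(rules: Rule[]): Promise<void> {
  for (const rule of rules) {
    const violations = await checkRule(rule);
    if (violations.length === 0) continue;
    const pr = await openFixPullRequest(rule, violations);
    await autoReview(pr, rule);
  }
}

// Run hourly: a cron job with a brain in it.
setInterval(() => enforce([]).catch(console.error), 60 * 60 * 1000);
```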
Another point of reference for us is,
if you've ever read the Mythical Man Month,
that's the classic.
I have not.
Oh, you haven't?
Oh, you gotta read it.
Oh man, I'll put it on my list.
The book was written in the 70s about challenges
that developers faced building mainframe software,
but it is still, like, 98% applicable to the way development works today. The thing that
holds software back and prevents these actual economies of scale from being realized is
the loss of architectural vision and coherency that occurs when you scale up the size of
the team working on the software and contributing to the code base.
And so this is a perennial problem that has plagued software development since the mainframe software of the 60s and 70s,
which this book was written about. But the same general principles apply today.
With our system, this system of rules and invariants enforced through AI at different points of the SDLC,
our mission, our vision here is to give the architect
of your code base a lever by which they can enforce
the constraints and architectural rules
that must hold in the code base to preserve
the overall kind of like cleanliness and design
of the code base. Like now for the first time in history, I'm not saying it's a solved problem,
we're still in the process of solving it, but like now for the first time in history, we have
line of sight to solving that. And I think if you're playing around with these technologies
and using them day to day, you can see how we get from here to the point where people can define these rules in one place and just
constrain and control the evolution of the architecture as code grows.
If we can do that, then we basically solve the fundamental problem of the
Mythical Man Month. And that ushers us into a new age of software development. We're literally transitioning
from cottage industry, or artisanal craft,
into actual industrial processes, where
we do have economies of scale and we do have efficiencies, where the team can actually,
you know, not just not get slower, but actually get faster the larger the code base becomes.
Yeah, I'm seeing this world. I mean, are we there yet?
Let's just, let's stop for one second.
Are we there yet?
Not quite.
No, not quite.
And we're in the midst of building up like,
you know, this Gigafactory.
Yeah, okay.
For us, it's really about like building these factories
in collaboration with our partners.
So one of the things that differentiates us is,
first of all, we're thinking about this problem at all.
And second of all, we can sit down with the technical leadership at, you know, the Booking.coms of the world or the Palo Alto Networks of the world and make this system work well for them.
Because building the gigafactory is very different than building something that works inside a small code base with a couple of contributors.
And I think, you know, the low hanging fruit is
definitely like the small code bases, which is great. Like I'm glad there's like a bunch
of people working on that and making that better. But I think where we see a strong
alignment with our mission and what we want to do in the world is really enabling software
development to go much more smoothly and quickly inside these large, industrial-scale code
bases.
Well, the picture it seems you've painted,
if I can just analyze a little bit,
is: if I'm an IC in any of these organizations,
and, like you mentioned,
there's a certain interface
I may have to write code to,
it always has to go through that way,
so there's an agent that can confirm whether that holds true
for my code base.
When I deploy code, if I can do my REPL,
my inner loop, my creativity,
and I have assurances outside of it,
that no matter what I throw out there,
outside my circle of trust,
let's say outside my own brain or my own team's brain,
of innovation, creativity, moving the ball forward,
if I know whatever bounces back
holds true against rules that my organization trusts,
holds near and dear,
that are orchestrated and designed by the most thoughtful people.
If I can be creative within confines, you still have constraints, but now it's
not rogue, because you can add more and more smaller teams to be creative and to throw
things outside their circle of trust and get bounced back against this wall, so to speak.
Yeah, and the robots should be able to handle
somewhere between 90 to 100% of the back and forth, right?
Because before, this would be very painful
because you push up your new feature,
you're all happy about it,
but then some architects would be like,
no, it violates this constraint, this part is not clean.
You have to go and do a bunch of rework to fix it.
And then you push it back up again, there's more comments
and this process just drags on
and both sides get increasingly frustrated
because there's finite willpower at the end of the day.
But now you can have the robots handle
like the vast majority of the back and forth.
And so, yeah, now it's like, you focus on the fun parts.
Worst case you're mad at Cody or whatever.
You need the next robot, right?
Yeah, and if you get mad at Cody,
the brilliant thing is that like,
hey, just tag in Cody to go address Cody's comments, right?
Nice, yeah.
You'd mentioned in the pre-call
specifically some things you do
for enterprises and government agencies.
What is it, and I mean, we probably talked about some of it, but what is it that
those organizations need that has been uniquely designed and engineered by Sourcegraph
to help them do better?
Yeah, so there's two things here.
One is context awareness, building that in as a first-class citizen. I mentioned
that we were first to market with that, and we actually have a way of delivering it really
effectively into the enterprise. Sourcegraph has been self-hostable for many years now, because
a lot of our early customers were very large enterprises. So we've built all this DNA
around delivering the best context for human developers, in the form of code search and code navigation,
into all these different deployment environments.
And we can essentially offer those same context
vectors as a game-changing tool or capability for LLMs as well,
for any sort of AI code generation or question answering.
The other facet of this is just the security and compliance requirements that a lot
of these larger enterprises have to comply with or are constrained by. So in terms of being able to
deliver something into a purely self-hostable environment, I think we're basically the leader
in being able to do this. Even Microsoft with Copilot, for all their enterprise branding,
there's still components of their infrastructure
that have to be tied to their cloud.
And that essentially rules them out for a lot of,
frankly, like the largest software engineering
organizations in the world.
They're ones that you haven't heard of,
but they're literally like powering half the economy
and they have very stringent security
and data privacy constraints.
So you can self host everything we talked about today?
Yes, yes, and it's by design. It's obviously not by accident, because you wouldn't do it by accident; it's very hard.
Yeah, it's very hard to self-host.
So does that make it challenging to build Sourcegraph, to sort of have two deployment
targets? Or, I guess, infinite,
really, technically.
Yeah.
Because you've got your own cloud, I'm sure, which I can go there and be a cloud customer,
or your hosting versus my own hosting.
It's absolutely a challenge and it's something that has slowed us down a bit.
The in-editor experience, I think, we've had a bit more challenge in pushing forward
than we would have liked.
We ideally would have liked to move faster on the in-editor
UX, but because we're targeting a variety
of different editors and deployment constraints,
that has constrained what we've been able to do there
to a certain extent.
But I think now we're at the point
where we've figured out all the most important parts for how
to deliver this into large scale organizations.
And so now revisiting the core UX,
I think we feel pretty good about the degree to which we
can implement all the things that people want in terms
of AI code generation
and agentic automation within the editor.
But it does remain an ongoing challenge, right?
It's not a trivial problem by any stretch to solve.
And it's something that a lot of the upstart people
trying to do things with code plus AI,
they have not solved yet.
And I think it will take some time for them to solve.
Is VS Code your most popular editor,
or your target where a lot of the generation is happening?
I think, probably by usage numbers,
VS Code is the most popular,
but we have a ton of JetBrains users as well.
And we also support Visual Studio and Eclipse,
which, you know, it might be surprising
to some folks in your audience,
but like there are like very large government agencies
that are still on Eclipse and they are constrained
by factors beyond their or our control
and being able to deliver this into that editor
is kind of like game changing.
We're literally bringing the latest technology
into a very kind of like old school legacy editor.
And it just like, it completely upends the speed
at which they're able to move inside those code bases. I know that one of your teammates, I think, went to Zed.
I'm thinking like, why wouldn't you just build your own
editor or acquire one?
You know, like why don't you just acquire Zed
and make it easier?
Yeah, so that's usually-
And then to be the best editor ever out there.
Yeah, so I think the teammate you're talking about is-
You can mention his name if you want to.
Thorsten, Thorsten Ball. He's awesome. He actually just bounced back.
I saw that. He rejoined us.
That's why I felt comfortable mentioning his name in this scenario, because he's back again.
Yeah. And look, he's awesome. I love the Zed folks too.
We've had one of them on our podcasts back in the day.
I really love what they're doing.
I think it's a fantastic editor.
And I sincerely hope that it becomes really big.
I think they have a unique approach here.
They're not a VS Code fork.
They're really kind of rebuilding an editor up from
first principles and focusing very much on performance, which I absolutely love to see
as a developer. And yeah, I hope they succeed. I would love that to be an editor that I could
use someday. As far as our business is concerned, the way we view the in-editor experience is
I think maybe this is a difference in our vision for the world compared to some of the other
players in the space, but we don't think that there's going to be a single
code editor that is universally adopted for every language and every platform in the future.
Much less like a closed source one.
I think open source in the editor remains important because it's so near and dear to
the inner loop and the individual developer workflow.
And so when we think about the type of in-editor coding
experience that we want to enable, we start from a point of wanting to deliver
that experience into the best editor for your particular ecosystem or your
platform. I guess that's somewhat of a contrarian view these days, because
there's all these VS Code forks floating about that,
frankly speaking, have done a really good job on some of the core UX
and kind of pushed the frontier in terms of showing people what's possible with AI.
But long story short, for us, we are thinking about building for the large codebases that make the software that powers the world.
Those are multi-editor environments.
They're also environments where it's not just the context inside your editor that matters.
It's actually connecting it back to the overall context of your organization and the entire code base. And so the way we're approaching building for the editor is to bring the best AI UX in the field, in a way that feels native to each of
the editors that we support. We eventually hope to support all the editors out there.
That makes sense. I mean, in essence, what you said was go where the developers are,
right? Versus force them into your stadium, you're going to all stadiums.
Yeah. A world tour.
Yeah, basically.
And part of this too is like, when we look at the UX
of how people employ AI inside their editor today,
we don't see anything that fundamentally requires
a different editor.
Especially looking forward with like model capabilities evolving as quickly as they are.
I would say our point of view is that every six to nine months,
there is a step function increase in LLM capabilities, where the assumptions that you had to build around, in terms of
what sort of user experience you enable, or what facilities
you have to build around the model, entirely change. Every six to
nine months, the gaps that you had to fill, because the
previous generation of models couldn't just one-shot the thing that you're trying to do,
or couldn't just find the right tool,
those constraints no longer exist.
And the constraints that you have today,
in another six to nine months, won't exist either.
And that has been the pattern since the release
of ChatGPT.
So like we've seen a progression
of these step function models.
There was GPT-3.5, GPT-4, then Claude, Claude 3.5 Sonnet,
and now 3.7 Sonnet.
I think each of these has moved the needle in a way
where a lot of the prior assumptions and constraints
no longer hold.
And so when we look at the ideal editor UX today,
looking forward, we actually think
there's a very clean way to integrate AI-generated code
into every editor.
That doesn't involve a lot of extra UX chrome or things.
It's mostly just relying on the model reasoning capabilities,
providing it the right set of tools
in the form of both local tools and tools that call out
to code base-wide context fetching.
And then having a constrained set of hooks into the editor.
How do you render a proposed diff into the editor buffer?
How do you render an edit suggestion into the editor?
These sorts of things, which, you know, look,
you have to build those integrations so they work well,
but it's a very narrow set.
It's a narrow API that we think we can implement really well.
And I think the thing that we'd like to build
is kind of like an AI kernel
that you can port to every editor
and maybe even to the command line.
Cause like, it's a lot of the same usage patterns
that you see there where it's like,
you wanna describe the change that you want.
You wanna see what edits are proposed.
You wanna accept those.
And then you want it to continue doing what it's doing,
but keep you in the loop as to what it's doing
and how that affects the rest of the code base.
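Read literally, that narrow, portable surface might look something like the sketch below. The interface and method names are invented for illustration; this is not an actual Cody or editor API:

```typescript
// Hypothetical "AI kernel" boundary: the model-plus-tools core stays the same,
// and each editor (or the command line) implements only this small surface.
interface EditorHooks {
  // Render a proposed diff in place and wait for the user's decision.
  showProposedDiff(file: string, unifiedDiff: string): Promise<"accept" | "reject">;
  // Render an inline edit suggestion at a given location.
  showEditSuggestion(file: string, line: number, suggestion: string): Promise<void>;
  // Keep the user in the loop on what the agent is doing.
  reportProgress(message: string): void;
}

// A trivial command-line frontend: same kernel, different surface.
const cliHooks: EditorHooks = {
  async showProposedDiff(file, unifiedDiff) {
    console.log(`Proposed change to ${file}:\n${unifiedDiff}`);
    return "accept"; // a real CLI would prompt the user here
  },
  async showEditSuggestion(file, line, suggestion) {
    console.log(`${file}:${line}: suggestion: ${suggestion}`);
  },
  reportProgress(message) {
    console.error(`[agent] ${message}`);
  },
};
```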
Yeah, so a certain framework that happens
no matter what; there are certain interfaces you expect.
Yes.
No matter if it's your Zed editor,
because I'm telling you to acquire it,
or suggesting that you should,
or every editor out there that developers use,
no matter where it's at, or even legacy ones,
or I suppose lesser used, I should not say legacy,
like Eclipse, you know, there's still lots of folks
using Eclipse for good reasons, I'm sure.
Yeah, yeah. For good reasons.
If you think about, like there's another analog here,
which is like, a couple years back,
a big innovation in the editor space
was the introduction of the Language Server Protocol, LSP.
And that showed that, hey, you didn't have to build
bespoke code navigation for every single editor.
Before that, it's like, oh, I wanna use this editor
because some hacker out there
spent, you know, night and day over the course of
weeks or months hacking into
this compiler to rip at the internals.
And now that editor supports really good
go to definition for my language of choice.
LSP came along and basically said like, look,
fundamentally, what do you need to do?
You need to go to definition, you need to find references, and then there's kind of like a short list of other functionality that's really useful to have.
We can abstract that away and make that feel native in every editor so that you don't have to reinvent the wheel.
And I think something like that is possible with LLM-driven code generation.
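For reference, LSP's contract really is that narrow: a short list of JSON-RPC methods. This is what a go-to-definition request looks like on the wire; the method name and parameter shape come from the LSP spec, while the file URI and position are made up:

```typescript
// "Where is the symbol at this position defined?" as an LSP JSON-RPC request.
const definitionRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "textDocument/definition",
  params: {
    textDocument: { uri: "file:///example/src/server.go" }, // hypothetical file
    position: { line: 41, character: 17 },                  // zero-based, per the spec
  },
};
```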
Well friends, I'm here with Samar Abbas co-founder and CEO of Temporal. Temporal is the platform developers use to build invincible
applications. So Samar, I want you to tell me the story of Snapchat. I know they're one of your big customers,
well known, obviously operating at scale,
but how do they find you?
Did they start with open source, then move to cloud?
What's their story?
Yes, Snapchat has a very interesting story.
So first of all, the thing which attracted them
to the platform was the awesome developer experience
it brings in for building reliable applications.
One of the use cases for Snap was Snap Discover team,
where every time you post a Snap story,
there is a lot of background processing that needs to happen
before that story starts showing up
in other people's timelines.
And all of that architecture was built by composing
queues, databases, timers, and all sorts of other glue that people deal with
while building these large-scale asynchronous applications. With
Temporal, the developer model, the programming model, is what attracted them
to the technology. So they started using our open source first, but then
eventually started running into issues, because you can imagine how many Snap stories are being posted
every second, especially, let's say, on New Year's Eve. So this is where Temporal Cloud
was a differentiated place for them to power those core mission-critical workloads, which
have very, very high scalability needs. They started with open source, but then very quickly moved to Temporal Cloud
and then started leveraging our cloud platform.
And they've been running on top of Temporal Cloud
for the last two, three years,
and they're a pretty happy customer.
Okay, so maybe your application doesn't require
the scale and resilience that Snapchat requires,
but there are certain things
that your application may need,
and that's where Temporal can come in.
So if you're ready to leave the 90s
and develop like it's 2025,
and you're ready to learn why companies like Netflix,
DoorDash, and Stripe trust Temporal
as their secure scalable way to build
invincible applications, go to Temporal.io.
Once again, Temporal.io.
You can try their cloud for free
or get started with open source.
It all starts at temporal.io.
Let's dig into, I think something
that you have a unique perspective and vantage point on,
which is the frontier models, what is being used,
what's being deployed.
You obviously give access to all the frontier models, from Claude to Sonnet, you know, all the different ones.
What is most useful to developers? Do you offer them all because they all have unique perspectives on code generation?
Is there one you particularly like best?
What are you hoping for in terms of frontier models, open source, not open source?
Are you a DeepSeek user?
I don't see it in your list.
Yeah.
That's a swath of things.
Let's go from there.
Yeah.
So the landscape today, you know, March 5th, 2025 is-
Could change tomorrow.
Could change tomorrow.
By the time this comes out, maybe it will be different.
But that's why I say the date because, you know, I want to make sure that I'm pegging
the statement to a specific point in time.
The preferred model for our user base
is the Claude Sonnet family of models.
So we have a lot of users still using 3.5
and a lot of users using 3.7.
I think there's been some interesting discussion
around like 3.5 versus 3.7.
For instance, I saw a bunch of Cursor users tweeting that 3.7 didn't feel like
a big improvement. In some cases, it's worse than 3.5. For the Cody users, it's actually been the
opposite. And I think it's actually because Cursor has done more legwork around optimizing
their prompts around 3.5. We've done less of that, because we've been more model agnostic in our approach.
And what we found is 3.7 just has certain capabilities,
especially around tool use, that have gotten substantially
better.
And so the future of Cody
is really going to be built around the set of capabilities
that are now unlocked with 3.7 Sonnet.
We also see a lot of usage
of reasoning-oriented models
for trickier problems, or problems more
around codebase understanding,
where a non-reasoning model might
make a hallucination or an incorrect interpretation,
like a shallow interpretation of some set of context.
So we do see some usage of the reasoning-based models,
like o3-mini and things like that.
DeepSeek is really exciting.
I think, I actually think it is available now
in the model dropdown.
Yeah.
Marketing says no, but I'm not inside the application.
I'm on sourcegraph.com slash Cody and it's not there.
So that's why.
Oh yeah, yeah, yeah. You probably need to be inside the editor.
DeepSeek is also really exciting because it's, I think, at the frontier
of the open source models. And the open source models are also
very interesting to us because, one,
we always like supporting open source,
especially at the model layer; that makes it attractive from a variety
of different points of view. I think Llama is also an awesome family of models that is available through Cody
and that we also see a lot of future potential around. The second reason why DeepSeek and Llama
are very exciting is their fine-tunability. So there's a bunch of use cases inside the editor,
like suggesting the next edit or applying a proposed change
into the editor, where these open source models provide
a very good foundation for tuning a very fast model that's
specifically targeted at those use cases, where the use case
is very latency sensitive.
So we want to preserve the context awareness
and preserve the intelligence as much as we can.
But since it's constrained to the specific use case,
it doesn't have to have as much of the general intelligence
that like a frontier model has.
But because we have relaxed that constraint,
we can push the latency and cost down to the point
where it feels more instant.
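A hedged sketch of that trade-off: route the latency-sensitive, narrow tasks to a small fine-tuned open model, and the open-ended ones to a frontier model. The task labels and model names below are placeholders, not Cody's actual routing:

```typescript
type Task = "next-edit-suggestion" | "apply-edit" | "chat" | "agentic-refactor";

// Hypothetical routing: fast, narrow, fine-tuned model where latency matters;
// slower frontier model where general reasoning matters.
function pickModel(task: Task): string {
  switch (task) {
    case "next-edit-suggestion":
    case "apply-edit":
      return "fine-tuned-llama-small";   // placeholder name
    case "chat":
    case "agentic-refactor":
      return "frontier-reasoning-model"; // placeholder name
  }
}
```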
Can you explain to me how, or is this proprietary,
how you interact with the various models?
You mentioned you're agnostic, so I imagine just like you
want to go where developers are, which means
whatever editor they're using.
So that means that you've got an agnostic approach
to how you gain your context via Sourcegraph
and apply that to the agent or the model
that's gonna give back the generation
or give back the reasoning.
Can you explain to me how that works,
from prompt to Sourcegraph context to the model
and back again?
Yeah, so model choice was something that we prioritized
as a key original design constraint around Cody.
We basically made the bet that the model landscape
would shift and evolve.
I think that prediction has panned out.
And so we never wanted to be overly constrained
to one specific model or the current generation of models.
And so we architected the code in such a way
that we could easily introduce new models
and also customize the prompts that we use for each model
so that we can use the best prompt structure
given each new model that comes along. And so because of that, one, it's advantageous if you're
a user. If you want to play around with the latest model on the day it drops, most of these models we
make available within 24 to 48 hours for users of Cody Pro. And then if you're an enterprise user,
this is also helpful because sometimes you have constraints
around which model you can use.
Like there's a bunch of enterprises, for instance,
that don't wanna use DeepSeek, because
the provenance of that model just rules it out.
For whatever reason.
Yeah, for whatever reason.
And so they don't have to use DeepSeek in that case.
Some people cannot use Anthropic or OpenAI.
Maybe they're tied to Azure OpenAI or AWS Bedrock.
So they're constrained to the models that
are available on that platform.
And we can easily make the functionality available
given their model constraints can the person or persons over the org
Sourcegraph org I suppose can they delimit or remove the opportunity to even select a model they can't use
Yes, that's cool. You can do that at the admin level. That makes sense. Yeah
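Putting those pieces together, a minimal sketch of per-model prompt customization plus an org-level allowlist might look like this; the configuration shape here is invented for illustration:

```typescript
// Hypothetical per-model configuration: each model gets its own prompt
// template, and an org-level allowlist controls which models are selectable.
interface ModelConfig {
  id: string;
  promptTemplate: (context: string, question: string) => string;
}

const models: ModelConfig[] = [
  {
    id: "claude-3-7-sonnet",
    promptTemplate: (ctx, q) => `<context>\n${ctx}\n</context>\n\n${q}`,
  },
  {
    id: "gpt-4o",
    promptTemplate: (ctx, q) => `Context:\n${ctx}\n\nQuestion: ${q}`,
  },
];

// Admins constrain the menu; users only ever see allowed models.
const orgAllowlist = new Set(["claude-3-7-sonnet"]);
const selectable = models.filter((m) => orgAllowlist.has(m.id));
```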
That's the way to do it, man. Yeah. Any plans, is there any more on that front?
There are a lot of exciting things coming down the pipeline. And so we're kind of building our future
in-editor experience with an eye towards
where we see the puck going in terms of model capabilities,
which is very exciting for the future.
I think the thing that I'll say here is that
AI has already gone through two to four
mini disruption cycles.
We typically think of disruption cycles as being on the order of years, at least, right? But the shift, the step function improvement in model capabilities that we see every six to nine months or so,
essentially resets the game for what the ideal UX is, at that cadence, every six to nine months.
So, you know, rewind to the ancient age of...
One year ago.
One year ago. 2023, right?
2023, Copilot was king. They seemed unassailable. Everyone wanted autocomplete.
No one even cared about chat, or context awareness in that chat, right?
Because autocomplete was where it's at.
You want it to be fast.
You want it to be snappy.
You want it to be like a little bit context aware, but people just wanted it to be fast,
right?
And that was like the UX paradigm that was dominant at the time.
And then in 2024, the paradigm completely shifted.
I think it first shifted with GPT-4, where that was a model that had a step function improvement
in its ability to one-shot applications.
So now it's like, why would I sit there like auto-completing stuff when I could literally
just generate an entire mini-app from scratch?
It also got a lot better at incorporating context.
So now you can do context-constrained code
generation, which we deployed.
And it had massive success in the enterprise.
But then there were still certain things
that it couldn't do well.
The code was sometimes broken or didn't quite compile.
And then when Sonnet rolled out, that
was another step function.
Now all of a sudden, these things that were kind of just beyond the capability
are now firmly within it. I can trust this thing to reliably emit JSON, for instance.
That's a solved problem now.
People used to write entire thought pieces around, how do you constrain the model output
to produce valid JSON?
These days, it's like, you don't even think about that because the model has been trained
to treat that as a solved problem.
And I think now we're seeing it in terms of in-editor agents.
That is the future and that's what we're building towards.
A year ago, I think, if you were pushing agents, there were certain companies that were like,
we're going to build the agent that eliminates the developer entirely.
I think now people recognize that they were kind of selling beyond their existing capabilities at the time.
It made for a great marketing moment. They got their name in the headlines,
but there was disappointment in the actual product experience.
Now, with the newest set of models, we're seeing this approach become kind of
the new paradigm. That's what we're building for. We have kind of a wait list of people
who want to get access to the thing that we're currently building in collaboration with a
lot of these frontier shops.
These step function improvements in model capabilities, it's just been really exciting to see that
and sort of like ride the wave of the development
and maturation of this technology.
This wait list, is it a secret?
The details behind it, can you enumerate at all?
When is this episode going to go out?
Next Wednesday.
Oh, next Wednesday.
Oh yeah, it's closed right now,
but I don't know, like DM me on Twitter.
We're playing this very ad hoc right now,
just because, I don't know, what we've learned over time
is that new Skunkworks projects like this,
it's never good if you make them overly formal too quickly.
So we like it to be scrappy.
We like it to be just a handful of devs kind of like locked in a garage-like environment
of some sort, just like hacking on stuff.
But you know, I think we'll have more to share here soon.
You've already long bet on the UX inside the editor, so that's not changing.
It's an improvement, or maybe something like that,
with a particular model or model developer.
The way we think about it is,
what I'd recommend other organizations do,
is you want to just assume
that the reasoning capabilities and the latency
will continue to improve at approximately the same clip
as they have been.
So what do you do?
To a certain extent, you have to hack around existing
limitations, but you don't want to invest too much in those
because another six months rolls around,
and those are kind of obviated.
I think where you do want to invest right now is in very high quality tools, tools and
capabilities that can be composed with the rapidly advancing reasoning capabilities of
frontier models.
And so for us, it's kind of nice, because we've essentially
been building these tools the entire time at Sourcegraph.
It's like we're building tools to help human brains explore
the code base effectively and build up a working
understanding of it and write effective code, pattern
matching against the existing patterns, and validate that
that's correct
and consistent within the style of your organization, the rules in your organization.
Now increasingly we're seeing the LLM brain being swappable in for the human brain in
a growing number of tasks.
Every kind of advancement in reasoning capability or context awareness or pattern matching capability
advances the frontier of where an LLM serving as the coordinator of these tools, the orchestrator of these tools
can now automate something that previously a human brain would have had to attend to. And so the way we view it is, we are really investing in
the tooling ecosystem and infrastructure that we build around the model with the assumption that
like all the frontier labs are going to be pouring, it seems like all the money in the world, right,
like billions if not trillions of dollars, into pushing
the advancement of this reasoning technology as far and fast as they can.
So like, that's the big wave right now.
And the way we're designing our platform is to just like provide the best tools for
accelerating and automating the software development lifecycle under the assumption that there's
going to be,
you know, maybe Moore's law is not the right analogy here,
but I think you can assume some pace of advancement
in reasoning capabilities for at least the next,
you know, couple of years.
Yeah.
Well, I'd imagine that you're getting this access,
and you can call these folks partners
and work at this level, because Sourcegraph is a deployable target for all of them, essentially.
They can't assume that I go to the Claude website or to the OpenAI website;
Sourcegraph's Cody is just one more way to deploy and distribute those LLMs, and those
reasonings, so to speak, to developers where they're at. I'd imagine that's probably
true. Can you speak to how saturated your user base is, like, of the
developers who are, in quotes, developers to Sourcegraph? I don't know what you call your people, so to speak,
your user count.
I don't know, I'm trying to get your lexicon in my brain.
Do they all use Cody?
You know, is everyone using Cody?
Is Cody one of your most successful products?
I would say Cody has been, I think by far and away,
the most successful product that we've shipped,
with Code Search being kind of a close second,
and very, very valuable for large, messy code bases. But in terms of overall
user count, developer count,
I think Cody has been extremely successful.
There's not a hundred percent overlap yet.
We do still have some people using Code Search
who don't use Cody, or some people using Cody
that don't use Code Search,
but we see an increasing number of people
flow between generating code with AI and looking up context themselves, to make sure they understand
what the code actually does and what APIs it's using, et cetera, et cetera.
And yeah, I would say we are seeing an increasing number of people
starting at one point and getting pulled into the other.
And for that reason, we've also made this AI
question answering interface available
in the web application, where you can go to any repository
in the Sourcegraph UI and ask questions
when you're viewing the code
in the Sourcegraph web application.
So just trying to connect the dots, right?
We kind of developed Cody as a separate, independent thing,
but increasingly our user base is really pulling us
to integrate it into the unified Sourcegraph experience.
Yeah.
It's hard to see from the outside how Cody is independent,
because you can go to, you know, slash Cody, which is what it is,
but the headline does not say, buy Cody.
You know, the headline, I don't even know what it says,
but it doesn't say that, you know?
I think it's like, the enterprise AI code assistant,
you know?
And then somewhere down the line,
it's like, with Cody, you can do X, you know?
There's not this named thing, where it seems like
it's a standalone product that also is deeply integrated.
Yeah. I'd say, candidly, our marketing messaging hasn't been the greatest.
I think there's some confusion over what the overlap is.
Very much, yeah.
And that's something that we're trying to address.
I think one of the challenges for us is, we built so much,
so many capabilities around accelerating
and automating the SDLC over the years,
that there isn't a succinct way
of describing what the whole platform does
other than it accelerates and automates your SDLC,
especially if you're in a large code base.
But that's not as punchy as-
What is SDLC?
Software Development Lifecycle.
I thought so, but I just wanna make sure. Yeah, yeah, this was something on the inside. I'm like,
okay. Yeah.
What's left unsaid? I know that we've talked a lot, you've got a short amount of time, you've got somewhere else to be.
What can we say in closing? What's on the horizon that no one knows,
or knows less of, that can be mentioned in the closing?
I think, you know, for us, it's really mapping back to these two loops,
the inner and the outer loop.
So expect a huge overhaul
and improvement to the in-editor experience
that we provide, around what the AI can do when you're coding,
when you're vibe coding,
or when you're writing code in your editor,
and expect an increasing amount of automation
around that long and fat tail
of the outer loop of the SDLC.
And I think philosophically, the way
we're trying to build this is we think
the in-editor should be as open and adaptable as possible.
So open source, cross-editor, these
are the things that we value as developers.
And I think pre-AI, like every developer would say,
they valued that.
I think the first generation of AI coding assistants
has kind of muddied that a bit, because a lot of folks
have kept their solution closed source.
But we think the end game here is going to be open.
And we aim to build in the open here.
And then in the outer loop,
I think the way we approach the automation here
is that we're not trying to anticipate
all the different ways in which we can compose
the different building blocks.
We want to build the building blocks
and use those to provide a handful of first-party
automation tools like the Code Review Agent.
But at the end of the day,
we wanna hand this platform over to our users,
who are developers after all,
to go and combine these building blocks into the things
that tackle what eats up your time
and produces toil for you today.
So that is our philosophy.
We've been doing this for over a decade now.
So we've fully internalized the challenges
and what a solution needs to be in order to work
in the context of a large, messy code base,
as well as how to deploy that solution
into your environment.
And like I said at the very beginning,
we chose this domain because we felt we could easily spend
our entire lives working on this.
This is where our passion is.
And I see a long and bright future ahead for us
as a company, essentially bringing
all the biggest, most important code bases in the world to the point
where they actually can achieve these industrial
economies of scale.
Industrial economies of scale.
I love that.
I mean, I think it sounds cool.
Yeah.
I'm excited about, you know, just the economies of scale
really at that point, because like, it's generally been harder to add more developers
to a team and get better.
You might go faster but not necessarily always better
and it's not their fault necessarily
because of willpower, you know, right?
Or just like human nature, you know, like I don't like you
or you don't like me.
It's hard to work together.
Yeah. Or I had a bad day or my weekend sucked.
Yep. Whatever makes you less effective or productive.
Absolutely. And you should absolutely go read the Mythical Man Month,
because what you just said there is kind of the core thesis of the book.
Adding more people to a software development project makes it take longer,
which is a truth that has held since the dawn of software development.
It is the fundamental challenge that we aim to solve.
And it's frankly like the thing that's holding back
the quality of most software in the world today.
Very cool.
Thank you, Beyang, for sharing the depths of Sourcegraph
and your journey.
It's been fun.
Thank you.
Absolutely.
Thanks for having me on again, Adam.
When I first thought about the phrase,
when I first said it out loud,
industrial economies of scale, it sounds kind of scary.
Sounds kind of, you know, bureaucratic corporate buzzword, potentially.
But as I thought about it more and more, the ability for us to not just scale our teams
to slow down, but to scale our teams to go faster, to achieve the mission, to create better
software, to really work on this inner loop
and this outer loop that Beyang talked about.
You know, I have the luxury of marinating on this podcast for a week, because we recorded
last week and we shipped this week, and that's how it works, you know, so you get it firsthand
right now.
But I've kind of been sitting on this for a week.
And I'm telling you, when I think about achieving industrial economies of scale for software
teams, I kind of get excited, because every time in my career I've thought about adding
more people, or adding more things, or whatever it might be, to my teams.
Now mind you, I don't operate in high performance teams with large code bases, but I do think
deeply about software and I do think deeply about software with many people on this show.
And so I think about this by proxy. I think about, how would I accomplish the mission?
How would I accomplish solving customer problems?
How would I help organize and maintain morale to do the things that we love doing most as
software individuals, as software teams?
And this is resonating with me.
I don't know about you, I'd love to hear your thoughts.
Zulip is where it's at.
Go to changelog.com slash community and join us in Zulip.
That is our Slack alternative, real time threaded conversations for teams.
Hey, we're a team, we're a community, and this is a place you can call home.
So changelog.com slash community.
It's free.
And I want to see you in Zulip.
Okay.
Big thank you to our friends over at Retool, Retool.com, and big thanks to our friends over at
AugmentCode.com, and big thank you to our friends at Temporal, Temporal.io. Yes, also big thanks
to our friends at Fly.io, the home of changelog.com. And to the beat freak in residence, Breakmaster Cylinder, those beats are banging.
Bringing new beats. Love those new beats.
Okay, there is a bonus for Plus Plus subscribers. Learn more at changelog.com slash plus plus.
It is better.
Yes, it is better.
Changelog Plus Plus: bonus content, closer to the metal, stickers in your inbox, and a free coupon for a hug IRL.
Stick around if you're a Plus Plus subscriber, or go to changelog.com slash plus plus and subscribe.
Okay, that is it. This show is done. We will see you on Friday. I had forgotten that you open sourced Cody.
And I feel deeply, like, how do I not know that?
But there's so much that happens in our industry
and I can't remember it all.
Yeah.
But I think what made me not realize this is,
to get, it seems like to get into Cody,
I have to have a Sourcegraph account.
Yeah.
And then I don't know how to have a Sourcegraph account
because I tried to have a Sourcegraph account,
and it says that email doesn't exist.
I'm trying to sign up, even. Yeah, I was doing it during the show, and I was like, what's happening here?
So I don't know how that works or what's going on there, but share with me this wart that I've exposed.
Yes, we are fixing this. So here's the historical context. A couple of years back,
we made the really bad decision
to eliminate any self-servability in our platform.
And at the time, it was just a matter of focus.
We were bandwidth constrained,
and a lot of the pull that we saw in the market
was for enterprise code bases.
Because those are the people for which the pain
that we saw becomes existential, right?
It's like, you feel the pain of context gathering in a small code base, but you know,
you're still small.
The code base fits inside your editor.
It's not like the end of the world.
Once you reach a certain scale, then it's like, holy crap, we can't get anything done.
This is existential.
So that's where the pull in the market for us.