The Changelog: Software Development, Open Source - Building Zed's agentic editing (Interview)
Episode Date: May 7, 2025
Nathan Sobo is back talking about the next big thing for Zed—agentic editing! You now have a full-blown AI-native editor to play with. Collaborate with agents at 120fps in a natively multiplayer IDE....
Transcript
What's up friends?
This is your favorite podcast, the changelog.
Yes, back again.
This time with Nathan Sobo talking about Zed, but not Zed by itself.
Zed, agentic, Zed AI, the AI native editor for all developers.
So you know Jared's a daily Zed driver.
I used it too, but Jared uses it literally every single day.
And we had to get Nathan back on for the latest.
The announcement, this launch,
Agentic is so cool.
I tried it out live on the podcast, you'll hear it.
And it was awesome.
And if you haven't yet, go to zed.dev slash agentic.
A massive thank you to our friends
and our partners over at fly.io.
That is the home of changelog.com.
You can learn more at fly.io.
Okay, let's talk Zed.
Well friends, it's all about faster builds.
Teams with faster builds ship faster and win over the competition.
It's just science.
And I'm here with Kyle Galbraith, co-founder and CEO of Depot.
Okay, so Kyle, based on the premise that most teams want faster builds, that's probably
a truth.
If they're using a CI provider with its stock configuration, like GitHub Actions, are they wrong? Are they not getting the fastest builds
possible? I would take it a step further and say if you're using any CI provider with just the basic
things that they give you, which is if you think about a CI provider, it is in essence a lowest
common denominator generic VM.
And then you're left to your own devices to essentially configure that VM
and configure your build pipeline.
Effectively pushing down to you, the developer,
the responsibility of optimizing and making those builds fast.
Making them fast, making them secure, making them cost effective,
all pushed down to you.
The problem with modern day CI providers is there's still a set of features, a
set of capabilities that a CI provider could give a developer that makes their
builds more performant out of the box, makes the builds more cost effective out
of the box and more secure out of the box.
I think a lot of folks adopt GitHub Actions for its ease of implementation
and being close to where their source code already lives
inside of GitHub.
And they do care about build performance
and they do put in the work to optimize those builds.
But fundamentally, CI providers today
don't prioritize performance.
Performance is not a top level entity
inside of generic CI providers.
Yes, okay friends, save your time, get faster
builds with Depot: Docker builds, faster GitHub Actions runners and distributed remote caching for
Bazel, Go, Gradle, Turborepo and more. Depot is on a mission to give you back your dev time and help
you get faster build times with a one line code change. Learn more at depot.dev. Get started with a seven-day free trial.
No credit card required. Again, depot.dev.
And we'll be right back.
I really looked forward to this all morning. It's always a joy talking with you, especially because you're in the trenches.
I mean, you are building something,
assembling the plane in the air.
I'm a user.
I've been hooked on Zed since,
probably since our last show.
Daily user.
And enjoy it.
And yet it's like crazy times, I think, probably,
to be building software tooling.
I mean, it's probably never been easier to fork VS code
and raise a bunch of money and compete with you, right?
Yeah, yeah, definitely.
It's like every other week.
It's gotta be scary, right?
But I view it as a great thing though.
I mean, for me, like in a world where everybody wants
to fork another editor, having your own feels like a good spot to be.
Like, I like that.
Yeah, because I've been attracted by some of these new ones, Cursor,
being the most attractive. There's also Devin. Now there's also Codeium, which is now Windsurf, which is probably being acquired by OpenAI. So there's lots going on there. But Cursor in particular,
being a Zed user, I'm like, actually being a VS code fork
is unattractive to me.
I just don't want to use that.
But compelling features,
so I guess I'll try it anyways.
I mean, you'd think these people would be forking Zed,
don't you think?
Like shouldn't they just fork Zed?
It's a little harder for them to fork Zed
because we're GPL'd, I think.
So if you want to build a closed source
derivative work of Zed, you can't do it. So you need to stay open source.
I still think you, you know, anyway, I don't want to encourage it, honestly.
I wonder if it'd be harder to raise the money if you were a fork of Zed and therefore open source, right?
Right. You have to stay open source. And we're doing everything open source,
including all the AI stuff we're doing.
But I think we're kind of doing it the hard way.
And we have been since before AI was even a big deal, right?
Of engineering our own system from first principles
from the ground up from scratch.
And I think that's made us a little slow off the line with some of this AI tech.
Because of course, you can focus on that
if you pick a platform that everyone's already using
and just zero in.
Where for us, we're maintaining the underlying platform.
But my hope is that over time and with this launch
of the agent panel, we can potentially
prove some of that out
and then we'll keep going.
That kind of really understanding and controlling
and owning, in a deep way,
the editor from top to bottom
will open up a lot of opportunities
to do things a little bit differently,
have a differentiated offering
that's more deeply integrated across the board,
but including AI, obviously,
because it's this very hot thing, right?
Right, I was gonna ask you that,
because I remember last year,
you were kind of excited about it already anyways,
using it as a developer.
Yeah.
But I wonder as a product person, as a business owner,
do you feel like your hand is forced?
Like you couldn't work on something else really,
right now and stay competitive?
I mean, we are working on other things to be honest.
Like we just shipped Git integration.
True.
And we have a debugger that's I think
going into private beta, you know,
at the time this will be aired,
like I think we'll be in private beta.
So you're multi-threaded.
Yeah, we have to be I think because we're,
but yeah, there's a huge opportunity here.
I mean, like there's a lot for exactly the reason you brought up like, Oh, I'm
a happy Zed user because I don't want to be writing code in a webpage dressed up
as a desktop app, but there's something compelling about the ability to have a
natural language conversation with a genius level golden retriever on acid and have it write
code for you, you know. Hold on. Let me digest that. A genius level golden retriever on acid. Okay.
Oh my gosh. Yeah, I can probably skip the acid part. Well, sometimes you can't help it. It's
going to go on acid anyways. Like you don't get a chance. Here's a tab for you.
Right.
I like golden retriever also because I do,
I was just in our last episode,
I was kind of confessing that I treat it like a little buddy,
which feels weird if it was like a full grown adult person,
but like as a domesticated helper,
you know, like a dog, like your best friend,
I don't feel so bad being like, you know,
good job little guy, but.
Yeah.
So I like that description.
You've been saying good job little guy?
I mean, mostly when I talk to you about it.
I'm like, yeah, I don't like actually type that in.
Well, it's just, yeah, I use the word golden retriever
just, I don't know.
I mean, I shouldn't disrespect it because.
Yeah.
It's incredibly powerful tech,
but I was just in there like, you know,
trying to get Claude to produce old text, new text,
and it's writing XML, old text, open tag,
and then the closed tag is new text.
And I'm just like, Claude, come on now.
It's like, come on.
Why are you doing that?
Why are you doing that to me?
Why are you making me be so liberal
in what I accept or whatever, right?
Like, but it's fine, you know?
Like, but that's what makes it interesting.
The other reason you don't wanna disrespect it
is because in the robot uprising,
and Adam has made this point many times,
like they wanna be like,
this is the guy that called us a golden retriever,
you know?
Yeah, exactly. On acid.
And I'll have the record.
He did not respect us, and now he will respect us.
You did say genius level though,
so you'd be complimented.
That's true.
Genius level golden retriever on acid.
It is such a mixed bag though, isn't it?
It's, yeah.
And that's what makes it really fun and mysterious
and frustrating and all of the above
to like build product around so far, you know?
And I think we're definitely like, yeah,
a team that's taken a certain kind of like algorithmic product
development discipline, if that makes sense.
Sort of we're the most extreme and deterministic, right?
I think I've told you in a previous episode maybe about our tests where we like lock down
the networking layer and drive it with a random number generator so that we can simulate every possible permutation
of how packets pass each other.
Right.
We've been obsessed with determinism
to the degree of building a reliable,
algorithmic, traditional system.
And then along comes this technology that's like,
I'm not gonna do that at all.
It's inherently stochastic,
kind of inherently unpredictable.
And that's just been like an interesting edge, I think, for us to explore as a team.
We're traversing the learning curve.
I'm proud of us.
And yeah, I think the cool thing is we'll have something unique to offer because of
our deep understanding of the underlying tech.
So how do you test a genius level
golden retriever on acid?
Well, yeah, how do you test a system
that integrates as a fundamental component,
this like inherently stochastic process?
Yeah.
And we're like in the middle of traversing
that learning curve.
But I think for me, it was
this realization that like, at the end of the day, it's still kind of just a test, right? Like,
and so we, there were different approaches that were kind of operating on the product at a
bigger distance, done by certain members of the team. But really to do it right, we've got to get
right in there into the test suite, I think.
It's sort of the same mentality you're in when you're writing.
This is where I'm at at least.
And you start small, like you're starting with unit tests.
But the main difference is it's a test that doesn't necessarily pass or fail consistently.
It's like a flaky test, basically, an inherently flaky test.
And so we've got this, yeah, right now,
I think we'll ship something interesting,
like standalone or we should.
We wanna move this into a procedural macro in Rust,
but right now it's just a function that's just like eval
and run this test a hundred freaking times, right?
We run it kind of once up to a certain spot
and snapshot, and then, in parallel, run it all.
And then we get a score, like how many of them passed,
how many of them failed.
It's sampling.
And there's a threshold, like if it's 80% passed,
then we're good or?
Yeah, yeah.
We're playing with like setting thresholds.
You can commit a threshold that would potentially
fail the build. We're being pretty conservative with those right now because failing builds
are annoying. And I think we're still traversing the learning curve on all of this, quite honestly.
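[To make that concrete, here is a minimal Rust sketch of the kind of eval harness described here: run a flaky, agent-driven test many times and score the pass rate against a committed threshold. The names run_eval and agent_edit_test and the 80% threshold are illustrative assumptions, not Zed's actual API, and the snapshot-then-run-in-parallel machinery is left out.]

```rust
// Minimal sketch of a "run it a hundred times and score it" eval harness
// for tests whose outcome is inherently stochastic. Hypothetical names.

/// One stochastic test case: drive the agent against a fixture and report
/// whether the resulting edits look right. Stubbed out here.
fn agent_edit_test() -> bool {
    true // placeholder for real agent-driven assertions
}

/// Run the test `iterations` times and return the pass rate (0.0..=1.0).
fn run_eval(iterations: usize, test: impl Fn() -> bool) -> f64 {
    let passed = (0..iterations).filter(|_| test()).count();
    passed as f64 / iterations as f64
}

fn main() {
    // Instead of a single pass/fail, sample the test and report a score.
    let score = run_eval(100, agent_edit_test);
    println!("pass rate: {:.0}%", score * 100.0);

    // A committed threshold can fail the build; kept conservative,
    // because flaky build failures are annoying.
    const THRESHOLD: f64 = 0.8;
    assert!(score >= THRESHOLD, "eval score {score} below threshold");
}
```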
But what we have is this new, yeah, a new experience where you can define tools,
either pull them in via MCP,
or we've got a set of core tools that the agent can use.
And it's an environment that's more optimized
for just like having the standard conversation
with an agent where it's able to reach out and do things.
And so, you know, like getting that UI in place,
getting it nice, you know, the different,
the checkpointing for which the CRDT stuff
has been dope to have, you know,
the review experience for which the multi-buffers
that we've invested in are really great, right?
Like seeing everywhere that the agent did the editing
and kind of the overall like Chrome around the experience,
like we've got that in place
and then used it to start collecting data.
Having collected the data, that is driving into,
all right, how do we make sure that we can prompt this thing
and define the tool prompts and define the system prompt
in such a way that things are gonna be effective,
make sure our tools are really unit tested.
Like, I mean, all of that is like
traditional software engineering to a T
other than this like, other than the golden retriever.
Right.
Yeah.
Yeah.
It reminds me in a small way of Gary Bernhardt's
functional core, imperative shell,
where it's like, and maybe it's the other way around,
where it's like, whatever it is, shell,
and then like, you know, golden retriever core.
Like you're wrapping all of it,
like you're like blackboxing this thing
that we don't know what it's gonna do,
but as long as we black box it right, you know,
and we prod it correctly,
then we feel like we're doing our job.
Yeah, yeah.
And so like this thing we were just working on
and like this, you know, it will be shipped
by the time we, this airs, right?
But we were working on earlier is, you know,
streaming edits out of the LLM.
And so like, you know, we're using tool calls,
but the problem with tool calls,
at least with the 3.7 API as it stands,
is like the key value pairs will stream,
but the values will not stream, right?
And so we had started with a little thing where it's like
old texts, blah, new text, blah,
like stream me these little old texts, new texts,
find, replace operations to give me them.
But the problem is, if I'm waiting on the value pair, then the user is just sitting there waiting
or whatever. And so breaking that down into the model performs a tool call, but that tool call
just like describes an edit and then looping back around to the model and saying, okay, hey,
you need to perform this edit. Will you go ahead and do that and just do that
like in your normal?
Don't do that as a tool call.
Just like stream that out to me, right?
So we're streaming that out with the old text, new text,
like XML.
Yeah, and then we hit this freaking issue
where like the old, the new text XML tag,
open tag matches with old text, you know?
And it like panics our, or it didn't panic our parser,
but the parser errored, right?
And like, but it only happened, I don't know,
it happened like 40% of the time or something like that.
Then we went into the prompt, and it was like,
XML tags must be properly matched.
Yeah.
And we run the eval again.
It doesn't work.
It didn't even work.
Right. And then Antonio was like, no, no, no, Nathan, make it simple.
And I was like, okay, how do we do that?
And he said, I think it was sort of after, after saying open tag, you must say closed tag
before you open another tag.
After saying old text, you must say slash old text.
After new text, you must say slash new text.
It went from 60% to 95% on the eval.
And then we're like, okay, I guess we gotta go
into the parser and just make it tolerant
of this last 5% for a second.
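[For readers following along, a rough sketch of the kind of tolerant parsing being described: pulling old_text/new_text pairs out of streamed model output and treating a missing close tag as implicitly closed by the next open tag. The function names and tag handling are hypothetical, not Zed's actual parser.]

```rust
// Hypothetical sketch: extract (old_text, new_text) find/replace pairs from a
// model's streamed output, tolerating a missing close tag before the next open tag.

fn parse_edits(response: &str) -> Vec<(String, String)> {
    let mut edits = Vec::new();
    let mut rest = response;
    while let Some(start) = rest.find("<old_text>") {
        let after_old = &rest[start + "<old_text>".len()..];
        // Old text ends at </old_text> if present, otherwise at the next <new_text>.
        let (old, after) = split_until(after_old, "</old_text>", "<new_text>");
        let Some(new_start) = after.find("<new_text>") else { break };
        let after_new = &after[new_start + "<new_text>".len()..];
        // New text ends at </new_text> if present, otherwise at the next <old_text>.
        let (new, remaining) = split_until(after_new, "</new_text>", "<old_text>");
        edits.push((old.trim().to_string(), new.trim().to_string()));
        rest = remaining;
    }
    edits
}

/// Split at the proper closing tag if it exists; otherwise fall back to the
/// next opening tag, and finally to the end of the input.
fn split_until<'a>(s: &'a str, close: &str, next_open: &str) -> (&'a str, &'a str) {
    if let Some(i) = s.find(close) {
        (&s[..i], &s[i + close.len()..])
    } else if let Some(i) = s.find(next_open) {
        (&s[..i], &s[i..])
    } else {
        (s, "")
    }
}

fn main() {
    // A stream where the model forgot to close the first old_text tag.
    let streamed = "<old_text>foo()<new_text>bar()</new_text><old_text>a = 1<new_text>a = 2";
    for (old, new) in parse_edits(streamed) {
        println!("replace {old:?} with {new:?}");
    }
}
```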
Anyway, that's just like a new little world for me.
I'm used to just like having so much more control,
but I'm enjoying it.
It's like a fun challenge.
It's a new type of product development, I guess.
It's almost like network latency, you know?
So you can't control the network.
You have no control over whether the call goes through
or not.
I mean, you can with the, you know, the response code
that comes back, but it's kind of like,
it's almost kind of like that,
you can't predict whether the network will be there or not
or how fast it will be.
So it's like, well.
It's a good comparison, but the prompting is the weird part.
Like you can't just tell the network, hey now,
when you send the packet,
you're literally like putting this little,
we're back to magic incantations, right?
Like deep down inside of Zed's agentic coding,
there's like something that Nathan and your partner,
your teammate came up with to like tell it how to
spit out the right thing.
It's so weird that it's down in there.
And the cool thing is those rules and stuff,
because we're open source, it's like, they're all online.
Like we have like, you know, zed.dev slash leaked prompts
is the URL I want to put it on.
I'm like, yeah, there's our leaked system prompts.
We leaked them ourselves because we're open source.
Yeah, get some journalists who don't know
what they're talking about to write about you.
You know, like, look at this.
Get some free press out of it.
These people leak their own system prompts.
What are they doing?
Well, it turns out either you leak them yourself
or someone else leaks them for you.
Those things seem like they are unable to be held.
What I will say about the networking thing is yes,
but it's even harder because we have simulated
random network latencies, right?
Like as I was describing, you can actually in Rust, like build your own custom scheduler
that you drive with a random number generator
and every single async part of your entire app
can be scheduled by that.
And you can randomize the order that things happen in.
That does not help you with what a one-token
difference is gonna do to the behavior
of this crazy LLM system.
Like it's fundamentally different.
I don't know.
We can always sample.
I could use a seeded random number generator
to sample off the logits on the back of the LLM,
but is that even meaningful?
You know what I mean?
The whole point is to be able to change the prompt
and that you can change the prompt
and get such a diversity of different outputs
is kind of the point of it, I guess.
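[A toy illustration of the seeded, deterministic scheduling described above: queued work runs in a pseudo-random but reproducible order chosen by a seeded RNG, so a failing interleaving can be replayed from its seed. The DeterministicScheduler type and the tiny xorshift RNG are invented for this sketch; Zed's real executor randomizes async scheduling and is far more involved.]

```rust
// Toy sketch of seeded, deterministic scheduling: queued tasks run in a
// pseudo-random but reproducible order, so one seed reproduces one interleaving.

struct DeterministicScheduler {
    state: u64,                    // RNG state; seed must be nonzero
    tasks: Vec<Box<dyn FnOnce()>>, // pending units of work
}

impl DeterministicScheduler {
    fn new(seed: u64) -> Self {
        Self { state: seed, tasks: Vec::new() }
    }

    fn spawn(&mut self, task: impl FnOnce() + 'static) {
        self.tasks.push(Box::new(task));
    }

    /// Tiny xorshift RNG so runs are reproducible from the seed alone.
    fn next_rand(&mut self) -> u64 {
        self.state ^= self.state << 13;
        self.state ^= self.state >> 7;
        self.state ^= self.state << 17;
        self.state
    }

    /// Run every queued task, picking the next one at random.
    fn run(&mut self) {
        while !self.tasks.is_empty() {
            let i = (self.next_rand() as usize) % self.tasks.len();
            let task = self.tasks.swap_remove(i);
            task();
        }
    }
}

fn main() {
    // Same seed => same order, so a failing interleaving can be replayed exactly.
    let mut scheduler = DeterministicScheduler::new(42);
    for n in 0..5 {
        scheduler.spawn(move || println!("task {n} ran"));
    }
    scheduler.run();
}
```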
Anyway. I've always felt like, especially web development,
but software development in general,
is a house of cards that is held together
with silly putty and duct tape.
But I feel like at this point,
we've completely lost the plot and we just got insane
and just like amped up the level of complexity,
obscurity, randomness.
So it's just, it's getting crazy.
I mean, I can't, I mean, I think it's always been true,
but maybe more true than ever that like,
I really don't think you can make it be a house of cards.
Like that was kind of a lot of the,
the premise of Zed in a lot of ways is like,
how about we don't build this one like a house of cards?
I mean, no software is perfect.
And there's plenty of things that like,
I would like to improve about Zed.
But a big part of our philosophy is like,
let's get the primitives correct and nice.
And let's use those primitives to compose bigger components.
And then when we realized that it's designed wrong,
not constantly, cause you have to ship, but to some extent, stop the line, fix it, and then keep moving forward.
Doing it right and also just having things tested to death and having rigor, because I think now we've got the golden retriever.
If your system's already not reliable to a T, good luck.
Now you're unleashing this random process in the middle of it.
I don't know.
Again, you can maybe find those things without rigorous design and good testing in a system
where it's less stochastic in nature,
but how are you really going to explore the surface area of that? So all the algorithmic
stuff I think has to be like dialed in. And then I think beyond that, then there's this new frontier
of like, okay, once that's all working immaculately, now what about this LLM? And how do we get that? But that can't be perfect ever, I think. Like that can only be like making the score as high as it can be. So anyway,
I really don't want, I definitely don't want Zed held together with baling wire. But I
think the opportunity, like the opportunity I'm writing a blog post about this is like,
what if we, there's so many times like in every system,
I'm sure you've been here because I have many times where you're like, it would be better if
we did this. It would be better if we changed the design, but I just can't justify it right now.
Like there's my, I have a timeline that's too tight and you make that decision time after time
after time until you end up kind of in a legacy code base honestly.
And I think the exciting promise of AI for me
is it's less about like how many billion lines of code
we generate a day or whatever as a human species,
but more like how many well-tuned,
not held together with baling wire,
really high quality software systems
are we maintaining as the human race.
And when, you know, like I did this pull request
over the holidays, it was like 30,000 lines
and nearly killed me.
And then it inspired like something that I hope
we're gonna be shipping in a couple of weeks
doing, you know, an AI feature basically.
But yeah, the effort that has traditionally
been involved in any code base that meets the real world
at scale has been too much, it's been too much
to really keep things clean, you know, anyway.
No, I think that's a really good way of looking at it
because my entire software career,
I have been selling the idea of slowing down to go faster
and spending the extra time and the extra money
to design it right, to test it right,
to do all of the rigorous things that slow you down
and cost more money and it's a very hard sell.
However, what if the cost of that was approaching zero,
right?
What if the cost of your big rewrite or your big refactor
was like 15 minutes?
Like why wouldn't you just go fix it?
You don't have to talk to anybody, just do it.
I spent like two freaking, you know, two weeks on and off
and like a very intensive week,
kind of right around New Year's.
Oh my God. I pulled an all-nighter around New Year's Day because I told myself like, I'm not going
to fuck, yeah, I'm not going to, oops, I'm not going to blow my time box on this, right?
Oh, we almost got it.
Yeah.
Yeah.
I told myself I'm not going to blow my time box on this.
And yeah, and I got it compiled in at least, right?
This like change to this, you know, core part of the GPUI API.
You know, it's just called like thousands and thousands
of times all over the system.
And it was like wrong, you know, it wasn't right.
But I hadn't realized it wasn't right
when we upgraded from GPUI 1 to GPUI 2.
But it's like, okay, now that I've seen it though,
I have two choices.
I can accept it.
And it's in the code base forever.
Or I can fix it and like work my tail off fixing.
And it's cool now that I think like, you know,
and I think we're still getting there to be real.
We're still getting there. But like the promise of AI,
I think for me is having this third choice of like,
I fix it and the horrible part of fixing it
is actually fast.
Right.
And there's something that has happened with access
to all the things.
I'm not gonna name them all.
We all know what they're called,
is that the trivial yet mundane things we would do
are somehow made a little easier,
like an email response.
I did this thing, I was like,
I almost procrastinated on two different emails today,
and this is not coding,
this is just very simple, everyday things.
And I'm like, I have in my brain
a version of what I want to say
But I don't have it framed right, I don't have it framed well. Like, this is the facts, here's bullet points,
here's what they said, here's what I want to say, make it happen. And then out it comes. I'm like, that's awesome.
Cut and paste, move along, instead of
procrastinating, doing my thing, coming back an hour later, right? You know what I mean?
Like it's these little tiny things that we,
that are now solved so simply that we used to just not do because they were
like either arduous or just daunting or cognitive.
Time consuming, right? Yeah. Even, yeah, time consuming,
but even like from a cognitive overload standpoint.
Right, right.
I get you on both fronts.
Yeah, I mean, that's it for me is like,
I would in one scenario too,
where I would like normally procrastinate,
I've now forced myself to say,
let me consult my
genius golden retriever
on acid to help me, because it's there waiting, willing and ready. And provided it can do a mostly good job,
I massage it on the final end, but like it's done the 80%.
I've done the 20% and I've moved along.
You rinse and repeat that across a simple email response, to a code refactor, to a new
vibe coded idea, and you've now continued to move.
That's probably my biggest fear is that the world won't respect what we respect, which
is purity and greatness in software development.
And this rigor that you guys talk about, and the stability in our software. It's that the
world almost wants more software faster
now, because that's progress,
and at some point we'll have maybe global legacy crap,
I guess, not even code at that point.
I'm kind of concerned about that a little bit.
Sludge.
Yeah, I think it's, I mean, I don't know.
To me it's like fine, like the more,
it's good to have more software on the whole and it's good to have
more people creating software that meets their needs on the whole and my sense and we'll see
because like I don't know I can't predict the future maybe the singularities tomorrow but like
my sense is that as a piece of software that maybe you started you know maybe it was your third vibe
code result, like you churned ten dollars' worth of tokens on three different LLM providers or whatever.
And then vibe coded out the starting point.
And you're like, great.
I've now chanced upon an amazing idea.
You start iterating on it sooner or later, like for that,
like if it's meeting real world demand
and you're going to try to change it,
my sense is sooner or later,
you're gonna wanna understand the behavior of that system.
And then you're right back to the same place
we've always been in software, which is like the constraints
on what we can build with software is not,
you know, they're not analog constraints.
That's certainly true, right?
Like, and they're also kind of not like even digital
constraints, in the sense
that we could generate a ton of code now as
well. But the constraints are I think still about controlling
complexity, controlling the complexity of the system we're
building, understanding the implications of what we've
created. I don't know. I mean, at some point, if AGI comes and
I can like literally go to a web form and prompt it and say, write me a fast collaborative code editor that supports AI
really well and like out pops the binary. Right. And cool. I guess we don't need to
understand our software, but until then, right. Sadness, man. Sadness. Zed would be dead at that point.
It would be like a hobbyist tool, like a Ford, like a Ford Model T or something.
Yeah. Or people working on their old muscle cars or something.
Yeah. I'm along the lines of this idea.
Like when you're in sales, you always think it's a numbers game. Or anything,
really, like if you're trying to grow a startup, it's the numbers game.
The more opportunities you have, the more possibilities you get for a yes.
Obviously you get some nos too, but I feel like this is the ultimate
numbers game increaser for everyone, whereas you may have only had ten results or ten opportunities before.
It's not that you have infinite, but you can iterate faster through
ideation or things that may or may not be good ideas for the world, you know, much, much faster. You can test and experiment.
You can iterate and fail sooner too. Like don't waste so much time, or find the
thing that's actually got a thread to pull and you pull that thread and you do
what you've got to do to take that next step.
I feel like that's what's happening is like the ultimate numbers game increaser
for everybody. Yeah. But overall, like, I don't know,
my challenge to myself
and everybody on my team and everybody is like,
the bar should be higher now.
Like if we have this new technology
that can automate tedious BS,
then we should have fewer excuses.
In a world where I can generate tests,
okay, not writing tests,
not having a feedback loop on your software,
like, come on now, really?
You don't have time to like generate the tests, right?
So I think like the bar for all engineers needs to go up
in terms of the quality of what we're building, yeah.
Yeah, the trade off has always been the value.
Let's just take testing as one example of rigor.
Like the value of those tests divided by the cost
of those tests, right?
Like that's where you decide like,
should I do this or not?
Right, right.
And it's been a hard sell even beyond the engineers,
but like to the product owners or the boss or whoever,
that it's worth that trade off to them
because all they see is like, we want progress,
not toiling away. Sustainable progress.
Yeah, like not just like doggie paddling,
not going anywhere.
But as the cost approaches zero,
which is what's gonna happen
if we continue to make progress like this,
like the cost is going down towards zero,
it's not there yet.
But if it gets down near zero,
then the value to cost calculation is like,
it's a no-brainer. Like you don't really have any reason
not to just add the test suite
because you didn't have to spend the time writing it.
Maybe you had a glance at it
and make sure it's not on acid and that's about it.
And in a world where you're spending,
I don't know, 50 cents worth of compute,
you know, choking on some error in your algorithmic code,
the cost is going up of getting it wrong as well, you know?
Mm.
I mean, not that I'm saying this by any means.
I'm saying it.
On the main role point.
But Cursor was founded in 2022.
And then I think they got some seed funding
and like officially came out early 23.
And they came out focused on AI, right? They sped through the process. Whereas you
were aware of and knew where some things were going, but you focused on the core editor.
Do you feel like, I mean, because when I go to cursor.com, for example, it says
the AI code editor. And when I go to Zed, it says the editor for what's next. It does mention AI, but it's the last two characters of your subtitle,
which is not even the main title.
It doesn't speak to what, I would say, maybe
a lesser known modern code editor might be suggesting.
Do you feel like you were late?
Do you feel like you missed something?
Do you feel like?
I don't, yeah, I don't feel late because I think we're all early. Yeah, I mean, I definitely didn't
set out to build an AI code editor when I started Zed, because it wasn't a thing at that point. Yeah, exactly.
I set out to build the future of software development. Like, people laugh at me less when
I say that or whatever now because we've proven out more
of our ability to just, you know, get close.
But it's always been this vision of a fundamentally better experience for the developer.
Yeah, and then AI emerged into this world, right?
So we are hardcore, like, tool builders, editor people, for sure. And yeah, as this is all, I mean, I was just looking at
and admiring, I think I did a good job on it.
It was a long time ago that I wrote the code
that integrated with all of the API endpoints.
And we've had all kinds of infrastructure in place
for experimenting and playing with the LLMs,
but we also just landed Git integration.
We're shipping a debugger.
So are we late?
I think if you view things at a snapshot in time at maybe this exact moment or up until we launch this next agentic editing experience,
you might say in some ways we're behind, but I think
the way you're going to measure progress is kind of a vector.
And ultimately, I want to measure progress from sort of where do we stand in five years.
So what I'm excited about is like, OK, having done it the hard way, owning the underlying tech, what can we do with that?
How do we build the future of software development?
How do we build for the present moment in technology
of what people are wanting
and what the possibilities are?
And I think the exciting thing
about owning our full stack, right?
It's getting close to 600,000 lines of Rust
that basically every member of our team,
you know, every line of that code was kind of,
almost all of it is written by somebody
that's still on staff with our team.
We understand the system deeply.
What does it mean to take?
Yeah, so I think we have the opportunity
to build the first AI native editor.
You can say you're an AI code editor,
but if you're adapting a code editor that
was developed for the pre-AI era by a much larger different team
at a different company, you know, my bet is that over time
that may hamstring you.
And so hopefully going forward,
we'll have the opportunity to leverage our, you know,
deep technical understanding to do more.
It has to feel good, like you said earlier,
to be able to use your primitives that you built up
as you add this, you add this major new functionality
to building on top of those things.
Let me ask you a similar question but a different one.
So Adam mentions Cursor, I'll mention Windsurf,
which honestly I had never even heard of Windsurf.
Now, turns out that's a rename of Codeium,
which I had heard of and it has been around for a while.
So they rebranded it at some point,
I missed the boat on that.
In talks to be bought by OpenAI.
For a process.
I heard this rumor.
Yes, this is not yet announced.
The deal is not finalized yet and could change.
So they're in negotiations.
This is according to CNBC.
OpenAI in advanced talks to acquire Windsurf for approximately $3 billion. So my question is like, would you take that deal?
I don't think it's responsible.
It's crazy.
That's definitely crazy.
That's a lot of money.
I mean, to me, that deal is a sign that we're onto something. I'll put it that way.
That the actual technology that connects this revolutionary LLM technology to the end user
that covers the last mile, that puts the developer at the center of that experience is incredibly
valuable.
I've always believed that and I've always had this collaborative approach,
of we're going to integrate.
I think we still have a ways to go there, to be fair,
although the way our team works,
I think is unlike any I've ever been on using Zed,
being as collaborative as we are.
But when you now have a new kind of
collaborator that will literally respond immediately,
anytime you say anything,
sometimes with some more or less on acid things,
but often with exactly what you want.
Right.
Yeah, it becomes even more important, I think,
to have a higher fidelity,
faster tooling around
the management of change, coordination of change, collaborating basically in software.
So like, you know, I was envisioning, you know, my version of a multi-agent system that
I was dreaming of before I understood the power of LLMs, you know, when we were first
starting Zed was this notion of like, mob coding, where you have all these
live branches and people are off on their own live branch and
you're kind of pulling their changes in dynamically.
And like, and that was, that was a cool idea, but like getting all
those people in that particular configuration is kind of a
special thing to imagine.
But now they're there, like there are people, the people are there to collaborate.
The people are there, you're right.
Yeah, the agents are there and waiting
to do things for you.
So I'm excited to play that forward, I guess,
further even from where we're at now.
Yeah, 100%.
I think nobody really knows where that's gonna take us
or how that's going to actually manifest.
I think we do know that it's going to be,
at least in the short and medium term,
very expensive in terms of compute
to have me plus 50 agents doing stuff all the time.
Well, especially if they're not doing the right thing.
So, coordinating that well,
to me seems incredibly valuable.
And so, in a world where you can kind of open a pull request and, you know, using a tool
that was essentially designed for like the kernel mailing list, emailing each other patches,
when your colleagues just pushed a pull request and like they went home for the day, it's
fine that you're kind of doing that in a web form, like not directly integrated into the tool you're using.
But like, I think in a world where, you know,
it's an agent running off somewhere or whatever,
like you need to give it feedback now.
Like it just seems like all that tooling needs to level up.
And I've always been excited about, you know,
it's kind of a self-serving narrative for me,
or maybe I'm drunk on my own wine here or whatever,
but I'm excited about the prospect of leveling up
how we manage change in the software world.
And I think there's a compelling impetus to do so now.
Do you think tools like Git become less relevant
in a world like that?
I mean, I think Git is here for a very, very, very long time.
So this is not to disparage Git,
and I don't know, I worked at GitHub, I love Git.
So again, there's so many things tied to Git
and so many ways in which Git is deeply embedded
in what we're doing,
but I think there's an opportunity to augment Git, I think,
with finer-grained understanding of
what's going on, finer-grained tools. To me, that's the role of the editor, right?
It's more this vertically integrated, or you know, something that drops out of the
editor to become more universal over time or whatever, but the idea of a
vertically integrated, yeah, an authoring experience that treats the process of
change and distributed change as like a first class concern
by design.
Yeah, interesting.
Yeah.
Well friends, I'm here with Terence Lee talking about
what's coming for the next generation of Heroku.
They're calling this next gen Fir.
Terence, one of the biggest moves for Fir
in this next generation of Heroku:
It's being built on open standards and cloud native.
What can you share about this journey?
If you look at the last half a decade or so,
like there's been a lot that's changed in the industry.
A lot of the 12 factorisms that have been popularized
and are well accepted even outside the Ruby community are things that are,
I think, table stakes for building modern applications, right?
And so being able to take all those things
from kind of 10, 14 years ago,
being able to revisit and be like, okay,
we helped popularize a lot of these things.
We now don't need to be our own island of this stuff.
And it's just better to be part of the broader ecosystem.
Like you said, since Heroku's existence, there's been people who've been trying to rebuild
Heroku.
I feel like there's a good Kelsey quote, when are we going to stop trying to rebuild Heroku?
It's like people keep trying to build their own version of Heroku internally at their
own company, let alone the public offerings out there.
I mean, I feel like Heroku's been the gold standard.
Yeah, I mean, I think it's the gold standard because there's a thing that Heroku's hit
this piece of magic around developer experience,
but giving you enough flexibility and power
to do what you need to do.
Okay, so part of Fir and this next generation of Heroku
is adding support for .NET.
What can you share about that?
Why .NET and why now?
I think if you look at .NET over the last decade, it's changed a lot. .NET is known for being this
Windows-only platform. You have WinForms, use it to build Windows stuff, IIS,
and it's moved well beyond that over the last decade. You can build .NET on Linux, on Mac.
There's this whole cross-platform open source ecosystem and it's become this juggernaut
of an ecosystem around it.
And we've gotten this ask to support .NET for a long time.
It isn't a new ask.
And regardless of our support of it,
like people have been running .NET on Heroku
in production today.
There's been a mono build pack since the early days
when you couldn't run .NET on Linux.
And now with .NET Core, the fact that it's cross-platform,
there's a .NET Core build pack that people are using
to run their apps on Heroku.
The kind of shift now is to take it from that
to a first class citizen.
And so what that means for Heroku
is we have this languages team.
We're now staffing someone to basically live,
breathe and eat being a .NET person, right?
Someone from the community that we've plucked
to be this person to provide that day zero
support for the language and runtimes that you expect in, like we have for all of our
languages, right?
To answer your support and deal with all those things when you open support tickets on Heroku
and kind of all the documentation that you expect for having quality language support
in the platform.
In addition to that, one of the things that it means to be first class is that when we
are building out new features and things, it is now one of the languages
as part of this ecosystem that we're going to test and make sure run smoothly, right?
So you can get this kind of end-to-end experience. You can go to Dev Center, there's a .NET
icon to find all the .NET documentation, take your app, create a new Heroku app, run git
push heroku main, and you're off to the races. So with the coming release of Fir and this next generation of Heroku, .NET is officially
a first class language on the platform, dedicated support, dedicated documentation, all the
things. If you haven't yet, go to heroku.com slash changelog podcast and get excited about what's to come for Heroku. Once again, heroku.com
slash changelog podcast.
So last year we talked, I think where the AI stuff and Zed stood was,
you had this little sidebar that would come out
and you could plug open AI API into it.
And maybe like a couple other models.
I don't recall exactly.
I know that after our call, you helped me debug mine
because I couldn't get my API key to actually register.
You know, it was...
It was early.
You were baking it still.
Yeah, it was very early.
It worked once you helped.
You know, once the founder of the editor helped me,
it worked.
So first class support.
But it was very much like, here's maybe my file
or maybe even no context,
just chatting with it in the right hand sidebar, right?
Yep.
And talking about code with your agent.
Yeah, it was like,
the premise was like strap a text editor to an LLM
and hit go, you know?
And that was cool because it was configurable
and it's continuing to get more configurable.
I like how fast you guys pump out
when there's a new model that comes out,
whether it's open or proprietary,
like basically it's there
and you just select it from the dropdown
as long as you have your key in there, you know,
I'm talking with.
And that's a big part of our like philosophy, honestly,
is empower the developer.
If the developer wants to use Ollama or some other thing,
like again, that's a process that I'm not necessarily like,
again, am I bending over backwards to go cover
the last 2% of the market for you or whatever?
I don't know. I mean, we're prioritizing, but this overall mentality of like,
come on, come into the development environment and bring the tools that you want to bring.
We'll offer you some too, you know, because that's our duty, I think. But
not trying to be sharp elbowed about exactly how you configure your experience is a big
thing for us.
But anyway, continue.
Yeah.
So I just was saying I appreciate that you're like that because as a person who likes choice,
it's great to just not have to change my editor when I want to try a new model, just change
the model and continue to have the experience that I already appreciated.
But I would like you to explain to me,
since I've just now gotten the beta,
when this show comes out,
it's probably all gonna be out there in public,
you know, deadlines, right?
Software deadlines, who knows?
But I haven't really played with it very much.
Our audience hasn't played with it at all.
We're not gonna screen share and go like,
you know, point by point through it,
like what the agentic coding experience is like.
So if you could just like broad strokes,
describe what has gone from that,
which is like an LLM and your sidebar talking to you,
to what is gonna come out next,
or even if you can go beyond next,
where you're taking it.
Can you describe what the new is like?
Yeah, so, God, I'm very immersed in the now.
So let's start there.
Sure.
I mean, first, just to talk about what we had.
I mean, what we had was sort of,
I needed to explore the technology
and needed a good developer tool for doing that.
And so that's very much what shaped the old experience
of just like, I'm editing prompts
and manipulating what the input is to this thing.
And I wanna feel and see exactly what it's seeing
in order to have an understanding
there. But going from that, kind of chewing on it from that side of like strapping an editor to an
LLM, it was very optimized for writing. Like you could move your cursor anywhere in that document,
right? But then like, wasn't necessarily optimized for, I just want to have a chat with my editor,
or I would have a chat with an agent inside my editor and have that agent go take actions on my behalf and do things.
It was more of a low level tool.
And so what we're really offering now is more of that full agentic experience where first and foremost, the panel's optimized for sort of readability and clarity of that conversation
as sort of a conversation with a goal,
the goal of understanding and or editing code.
Whereas in the other world, it was very open ended,
raw text, you know, like the tool calling support
is integrated so the agent can actually reach out
and do things like grep your code base
or perform
streaming edits.
When that occurs, you're able to review the impact of what the agent did in a Zed multibuffer.
So the idea is you can kind of click down at the bottom and see, you know, there's like
a little diff of edits that have occurred so far in the thread.
You can review them.
That pulls them up in a multi-buffer.
Again, it's like, I think,
who knows, maybe great minds think alike.
Maybe some other products, I've seen some stuff like this.
But a Zed multi-buffer is a different experience,
I think, than just throwing a bunch of editors together.
We really model that in the product
as a single coherent virtual buffer.
It's as if that is one file that's curated together
out of content from these multiple different files.
And so that's a fairly deep core concept in the system
rather than a bunch of editors.
You could do multi-cursor edits across those.
Like, I don't know, there's a lot of just like
really nice UX
I think that comes out of a multi-buffer
really being abstracted in the system
as like Zed's buffer experience, period.
Yeah, I've noticed you've reused it in the Git stuff.
Whereas at first it was only in like search results
when I would see it and use it.
And I was, at first I was somewhat put off by it
if you recall our last call,
because as a Sublime Text user,
it was just a little bit different than I was used to.
And I felt like Sublime Text was a little bit lighter
and faster, but now that I've realized
how to use multi-buffers,
the way I was using Sublime is kind of obsolete
because I can just make the changes there in the multi-buffer
versus like clicking out to the different files.
And that's cool.
And so I've grown to like multi-buffers quite a bit.
And now that I see it in the Git,
it's basically your Git diff or whatever staged
or not staged, you can do your staging and stuff
in this multi-buffer anyways.
I might've cut you off,
but I just thought of a place where you've used it,
which is like, that's cool.
I'm used to that now.
When I saw it in the Git world, I was like, oh, I get this.
Yeah, so like in the Git world,
like we have a single multi-buffer,
which has excerpts
of every location in your project that has changes.
And then the Git panel, which has your sort of file level status, actually acts as almost
like a table of contents for that multi-buffer, scrolling you through it to distinct locations.
Yeah, and so we, like you said, we use them in the search results, we use them in diagnostics,
for example, and I think there's cool opportunities there for some like, a batch level fixing of diagnostics,
which would have made my life in December a little less miserable.
But of course, there's also this review experience that I'm describing and like, we're going
to experiment with this, I think we're going to get it in for the launch.
So I'll just talk about it. Like this idea of,
as you're going back and forth with an agent and
it's reading files and proposing edits and doing its thing.
It's building up this multi-buffer for you to review those changes.
You can obviously put your cursor in that multi-buffer and we do
a great job whether you're in the multi-buffer or in the buffer itself, right?
Like the multi-buffer is kind of this composite entity,
but if you were to kind of hit Alt Enter
and travel into one of those excerpts
to the actual origin of that content,
you would be able to edit that obviously.
And we use the CRDT to do a really good job
like disentangling your edits
from the edits that the LLM has done.
So it
makes it like a really, you know, a feeling like, oh, I can kind of collaborate with the LLM here a
little bit. Like it's not single threaded, like I can let it do its thing and go do my thing. And
this all like makes sense. I don't see edits I did as it's, et cetera. But then the thought is,
okay, maybe you make a tweak to what the LLM did and then accept it or whatever.
Right now, the first version of it,
we're just hiding it when you're done.
Okay, you're done. But I have this idea of,
what if we keep it there?
Then there's this secondary step to dismiss.
Then what's happening is this like
agentic interaction is almost building up
this curated editable subset of your code base, like just
the parts of the code that are relevant to the conversation at hand. Otherwise, why would
they even be referenced in this review experience? So it's almost like this multi buffer. I don't
know, this is all just really new thinking for me, but like, could have enduring value
as like, you know, I love this analogy of missiles versus guns, you know, like in World
War Two, all the dogfights, I think almost all of them were probably done with
guns.
And now, you know, we have like the F-35 or whatever and we like, if there ever is a dogfight,
hopefully there won't be one.
Yeah, it's like a missile that destroys the thing like over the horizon or whatever.
But then in Vietnam, there was like this time where, I guess it was sort of, there were some gun
battles and some missile battles.
Missiles weren't yet that reliable or whatever.
And so I think AI is kind of the missiles.
You've got this agentic process that you kick off, go get a coffee or whatever.
Then you come in, you've got your multi-buffer, you review it.
If it's great, then good, keep going.
If it's not, switch to the guns. And that's
where Zed really shines because it's just like, you know, but I think it shines across the board,
but like having a great editing experience and having everything that might be relevant
based on what you just did kind of presented there in one spot for you to work with.
Yeah. A multi-buffer is whenever we have multiple files open and they're all being edited, right?
Like this is not just like a multi-line in a single file.
It's multi-files being edited at once in the buffer of the AI.
Is that right?
Yeah.
It's like a single virtual buffer that composes excerpts from multiple files in one scrollable
editor.
Like it's treated as a virtual buffer, but it's not actually one buffer, it's composed,
it's composite.
Mm-hmm.
But it does present itself as a single buffer,
and as you are editing or looking at it or scrolling it,
it will show you where like different files start and stop,
but you can change them right there,
you don't have to go click through to that file
unless you want to, to make changes to those files.
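[To sketch the multi-buffer idea in code: a virtual buffer that owns no text of its own, just ordered excerpts pointing back into real file buffers, so an edit made in the composite view lands in the underlying file. The types here are invented for illustration; Zed's actual multi-buffer sits on top of its rope and CRDT machinery.]

```rust
// Illustrative sketch of a multi-buffer: one composite, editable view made of
// excerpts that each point back into a real file's buffer. Invented types.

use std::ops::Range;

struct Buffer {
    path: String,
    text: String,
}

/// A slice of one underlying buffer: which buffer, and what byte range.
struct Excerpt {
    buffer_index: usize,
    range: Range<usize>,
}

/// The virtual buffer: no text of its own, just ordered excerpts.
struct MultiBuffer {
    buffers: Vec<Buffer>,
    excerpts: Vec<Excerpt>,
}

impl MultiBuffer {
    /// Render the composite view, labeling where each file starts.
    fn render(&self) -> String {
        let mut out = String::new();
        for excerpt in &self.excerpts {
            let buffer = &self.buffers[excerpt.buffer_index];
            out.push_str(&format!("--- {} ---\n", buffer.path));
            out.push_str(&buffer.text[excerpt.range.clone()]);
            out.push('\n');
        }
        out
    }

    /// An edit made "in" the multi-buffer lands in the real file's buffer.
    /// (A real implementation would also shift other excerpts of that file;
    /// in Zed that bookkeeping is what the CRDT machinery handles.)
    fn edit(&mut self, excerpt_index: usize, new_text: &str) {
        let excerpt = &mut self.excerpts[excerpt_index];
        let buffer = &mut self.buffers[excerpt.buffer_index];
        buffer.text.replace_range(excerpt.range.clone(), new_text);
        excerpt.range = excerpt.range.start..excerpt.range.start + new_text.len();
    }
}

fn main() {
    let lib = Buffer {
        path: "src/lib.rs".into(),
        text: "fn add(a: i32, b: i32) -> i32 { a + b }".into(),
    };
    let main_rs = Buffer {
        path: "src/main.rs".into(),
        text: "fn main() { println!(\"{}\", add(1, 2)); }".into(),
    };
    let (lib_len, main_len) = (lib.text.len(), main_rs.text.len());
    let mut multi = MultiBuffer {
        buffers: vec![lib, main_rs],
        excerpts: vec![
            Excerpt { buffer_index: 0, range: 0..lib_len },
            Excerpt { buffer_index: 1, range: 0..main_len },
        ],
    };
    // Editing excerpt 0 edits src/lib.rs itself; no need to open the file.
    multi.edit(0, "fn add(a: i64, b: i64) -> i64 { a + b }");
    println!("{}", multi.render());
}
```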
And so yeah, it's just like, there's been interesting ways thus far in the history of
Zed to like populate one of these multi-buffers. You could do a project wide search.
You could get the diagnostics, you know, where are there problems? Now we have the Git diff.
But like a really interesting one for me is you can have a conversation with an agent.
And like that's so interesting that like it would leave behind as like an artifact of that,
this like potentially useful subset of the code
that's like ready to edit right there.
We'll see if it works and how that works.
Yeah.
Is there any value or reason to persist those somehow
and like be able to recall them as if they were?
Yeah, we haven't gotten that far, but maybe.
Yes.
And in order to do that, I mean,
I think the challenge starts to get in.
There's some things right now that are our CRDTs,
the conflict-free replicated data types,
which we've talked about in previous episodes, right?
But eventually consistent data structures
that track the edit history of your buffers in a fine-grained way.
Tracking at that level of granularity, I think, is required for presenting a really good
multi-buffer experience, and especially one that you'd like persists over time.
It's just that those right now are all in-memory structures. It's been a long-time kind of to-do to continue that work and pull them out of memory,
go further.
But that's kind of still all sort of in progress, I think,
where you have some like early design work
on how to get that stuff more persistent
kind of in the oven, early, early bake,
preheating the oven.
For some reason you triggered the Easy-Bake Oven in my head, but this does not sound like easy bake at all.
Early bake, but not easy bake.
Okay.
Can I share here in the moment some experience with this?
Like, I'm not paying attention, I'm listening,
I'm letting y'all talk, I'm paying attention.
You're doing it all now.
But I'm playing.
Oh, you're playing.
I'm playing with this agent.
Nice.
And it's the coolest thing ever.
I think it's just revolutionary magic.
Would never expect this to be possible ever, ever, ever.
But it is.
Put that on your homepage, Nathan.
Right?
I'm using Zed.
I am just in my simple adamstacoviak.com Jekyll website.
And I was like, OK, what's the best way to test this?
I open up the agent part of it, which I believe is Command R.
What is it?
How do you get to the agent?
Something like that.
I don't know.
It's down the bottom.
Command R will toggle the right bar,
but command question mark will like.
Command question mark, yes.
And you have to hold shift,
so that makes it a little awkward.
But the idea was like, it's easy to remember.
I have a question, like I wanna interact with you,
like help me out. I like that.
You know? I like that.
I learned something, I didn't know that, did that, okay.
I would actually, I got a different idea,
but don't worry about it.
I won't fork my thought process here.
So I'm here in a very common file
in a Jekyll project
that everyone has.
It's the config file and it's a YAML file.
And so that means you can just make this thing a mess
if you want to.
You can forget things, you can add spaces,
it could be jacked up, you'd never know it.
And I just said to this thing, I said,
are there clear improvements I can make to this file?
And I've been over here making improvements
Okay, nice. They've made a bunch of suggestions. I reviewed them all, like, these are amazing.
It took me to a whole new branch, it opened a markdown file and described the changes
it's making for me, and the whole time I feel like I'm just, like, just driving the... I
don't know how to describe what I'm doing here. I feel like I'm just like,
You're going for a ride.
I'm just here directing.
I know what the code should read.
I'm not writing the code.
It's writing.
I'm like, yeah, that looks good, cool.
Yeah, I like that.
You're collaborating in a way.
I think you kind of are.
It's like a collaboration.
It's very much like,
It's very much like that.
But nobody likes to be,
I mean, we built Zed because I don't want to collaborate on a screen share session
where I can't type.
I want to be engaged, an engaged participant.
I think it's my ethical duty to be an engaged participant,
depending on what software I'm working on.
So I'm not just like literally vomiting generated text
into the.
I mean, whatever, depending on the situation,
that may be totally reasonable.
But I like the idea of being an engaged participant. When I'm collaborating with a human, I'm always
more engaged when I have the opportunity to kind of grab the wheel. I can follow them
through what they're doing, but I also have the ability to intercede if I need to. And
so that's really, it's all about surfacing great primitives for you to stay engaged while
getting leverage
from this thing helping you.
Yeah, it's a very much a conversational scenario.
I'm like, I'll just, I won't go into it
because it's not worth it,
but there's just some things that stood out to me
as part of this is that when I began the process,
I didn't pay attention to the state of my branch.
So I was in a, I had some work I was working on.
I think I was doing something with tailwind config
and I was like midway through it
and I haven't touched it in months
because it wasn't important, I was just playing around.
And so I think there was some like dirty code there
like there was files created
and they weren't committed to the branch,
but I'm in a branch.
Well, it took me to a whole new branch.
It took that uncommitted dirty code basically with it,
told me to stash it, and helped
me stash it, which, who remembers the syntax to stash? I don't, ever. Or how to unstash,
forget that. Like that's, that's the LLM's job. True. Right. And so it stashes the code for
me. This is only after I'm like, I think we have some dirty code here. Like we couldn't
commit it. And then it helped me stash that code,
commit the code it created, and then go back to the original branch, unstash that
stuff, and commit it. And then, now, like, in minutes... like, I was
clearly procrastinating on this change anyways and didn't even know I could make
improvements to that config file, but here I am, just on this podcast,
doing it, and it's very collaborative.
It's like talking to a buddy.
Okay, let's do this.
Okay, let's do that.
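For anyone following along, the flow Adam is describing maps onto a pretty ordinary sequence of git commands; the agent is just driving them for him. As a rough sketch, with made-up branch names rather than the actual ones from his repo, it looks something like this:

    git checkout -b config-improvements   # new branch for the suggested edits
    git stash --include-untracked          # set aside the unrelated, half-finished work
    # ...the agent edits _config.yml here...
    git add _config.yml
    git commit -m "Improve Jekyll config"
    git checkout tailwind-experiment       # back to the original branch
    git stash pop                          # restore the stashed work
    git add -A
    git commit -m "WIP: tailwind tweaks"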
And it's asking me for permission.
It's saying allow, don't allow, allow once.
And because I have signed commits,
I have to use my fingerprint to sign the code.
So it's not like it can just commit
and sign the code for me.
I still need the Adam fingerprint in there.
I'm just like, I can't even believe this is possible, man.
This is just I'm in the future right now.
It's weird how fast we get used to living in the future.
It's like so funny, huh?
Well, it is amazing.
Thank you for making this.
This is possible.
Yeah, it is cool. Well, I mean, okay.
I view our job as just to build a fabulous UI
to what is ultimately, like, not my creation,
this, like, crazy technology of an LLM.
The idea that like,
hey, we're gonna train a freaking massive neural network
connected up a certain way on like trillions
of freaking tokens
and then synthesized data and God knows what.
Yeah. And I think, I'm assuming you're using Sonnet for this, right? Like Sonnet 3.7. Yeah. And so, like, I'm excited. What I'm happy about though
is like, yeah, just watching it, you know, from the course of 3.5 to 3.7, get better at using tools.
Yeah, just that.
I mean, ultimately, I want the user to have an amazing experience.
And so ultimately, we have to figure out how to make it magical.
But it really helps that the models are good at using tools and that we can build good
tools and give it to the model.
And it can use them and do useful things.
Like that is unreal.
But isn't it nice doing it at 120 frames a second?
Of course it is.
It is.
It's like, why wouldn't you want to be in first class?
You know?
Like isn't it nice up here?
First class?
Like yeah, of course it is.
Do you like that 120 frames per second?
That's just why when I was like,
these things are all like these cool new agentic things
and they're all VS Code forks.
It's like, I'm sorry, I just don't enjoy VS Code.
That's just me personally.
I feel like it's coach, like I'm flying coach.
So that's just me giving Zed more props.
But have you guys seen them?
I didn't bring up first class out of nowhere
because I was thinking about that Louis CK bit
where the guy on the airplane.
Oh, I love that one.
Have you seen that bit, Nathan?
I love that bit, man.
It's the best ever.
Yeah, it's-
Tell it, Jared, paraphrase it.
I'll paraphrase it.
It's on point because it's about how quickly
we take things for granted.
So basically Louis CK is on an airplane
and there's a guy sitting next to him
and the lady comes over the loudspeaker, she's like, I'm sorry, the Wi-Fi will not be working on this flight, or something, and the guy's like, this is BS, you know, and Louis CK is like, his whole point
was, it's amazing what you take for granted, something you didn't know existed 10
minutes ago, and he's like, you're sitting in a chair,
like 35,000 feet in the air,
flying at 700 miles an hour.
And he calls him a non-contributing zero.
Anyways, you should go watch it.
It's hilarious.
Yeah. He's like, you contribute nothing, you non-contributing zero.
It's weird how quickly we do.
We're just like... like I was complaining on a recent episode.
I was just complaining about how bad Gemini's function was that it wrote for me.
And I was just, and then I had this thought, like,
I'm just complaining about something that's like,
just deep dark magic that we have for free, you know?
I think the things that I wanna complain the loudest about
are the things that I have the most responsibility for
and or ability to have an impact on.
Yeah.
And then I want to phrase that in a positive way.
You can ask my team how effectively I do that.
All right.
Sometimes better than others, I think.
Cause I think we all get frustrated,
but it's because you're like trying to push the envelope.
I mean, that guy is trying to get on a wifi
just like wants to do his board deck
or whatever thing he can do.
Yeah, that's why it's so funny
because he feels entitled to something
he had nothing to do with any of the technologies
that came together, the amazing inventiveness
of the human race in order to put him in the sky,
flying and then connect him to the internet.
And he's just sitting there doing nothing.
But yeah, in a position to improve and create
is definitely the best place to critique, right?
Cause you actually can affect change there.
So you're in a good spot.
Jared and I just like to do it, you know, cause it's fun.
Even if we, and we can have the ear of the guy
who's driving the plane as you are.
Yeah, I still think about you guys telling me, you know,
how about doing 60 frames per second effects,
or 120 frames per second,
just really fluid effects and things of that nature
that we still have yet to do some of those.
I know, that's why I was getting that from the other part.
You're working on this AI stuff.
None of my sweet ideas are coming through yet,
but I also wanted to get support,
so I'm happy with get support.
I think eventually you'll get around to those
really cool effects and that's when you'll go super fast.
And we added remoting, I don't know if you've used that,
but like being able to, as a first class part of the product,
SSH to a remote machine and have the Git support.
Oh yeah, I haven't tried that yet.
This debugger that we're going into private beta with
will also work over the remote connection.
So, but it depends on what you're working on, I guess,
whether that's relevant, yeah.
Yeah, I haven't tried that out yet.
I did see, I do enjoy the weekly updates.
Is it like Wednesdays, I think?
Generally one time a week, there's like one big push
and then like a few will trickle out,
as I see obviously feedback and bugs.
You're paying attention to these weekly updates, Jared?
Well, yeah, because there's a little button
that says like click to restart Zed and update it, bam.
And then it's like, you're on a new version,
click a button, hit the release notes.
Might as well check it out.
Takes you into a buffer, tells you what's new.
I mean, it's just fun.
You know, you're like, oh.
It's just fun.
I mean, I've given the guy $0, you know?
I've given, Nathan, I've given you nothing.
And every week you're like dropping goodies in my lap.
You know what I'm saying?
You've given me a lot.
I mean, I think, I can't tell you the number of,
yeah, like times that people reference the podcasts
I've done with you guys over the years.
That's cool.
So don't underestimate your impact, I'd say.
Well, thank you.
What should we talk about next?
Yeah.
Good question. What are you thinking, Adam?
I'm just, I'm thinking now that I've like done this
and I see the workflow and the user experience,
like one, it's a really good user experience.
So this is new and I can't even tell it's new
cause it feels so polished.
And now, granted, my experience is with one YAML file and changes and some branching,
but I think that's pretty sophisticated, to know, you know, if I've got un, you
know, uncommitted changes, even files that are not even committed to the repo,
that they're not being tracked.
Like it's got all that and it's got that collaborative flow back and forth.
I feel like that's just, it feels smooth.
The always allow or allow process.
Cause like you can either click allow, which is allow once or always allow like, Hey, clearly
this golden retriever is on us and it's a good thing.
Okay.
Let it go.
Just always allow it.
Clearly it's good.
It's a good thing, okay. Always allow it.
But it's got this really good flow
and the UI and the way it's,
I'm gonna share some of this, I'm gonna pull some of this in for the show notes
or something like that, Jared, in the video or whatever,
but it's got, it's really, it's easy to read.
It's easy to read.
It's as if I'm sitting there with a buddy
and they're telling me what change they're done.
But I'm the director.
It's clear,
it's a clear pairing where I'm pairing with an LLM, clearly.
It's good.
That's not what I was gonna say though.
I just wanted to compliment you.
That was my compliment.
I was just saying whatever you wanna say
but that was my compliment.
I mean, I'm blown.
Yeah, I'm stoked to hear that you're having
such a good experience.
We're trying to make it good.
Yeah, we're trying.
We're all burning the candle at both ends,
trying to deliver a really fabulous experience.
And so it's great to hear that it's landing.
It's very good you are.
I mean, the streaming edits isn't even
in the build you have.
It'll be out, I'm pretty sure, by the time this airs.
But that's even nicer of just watching stuff stream,
having an experience that feels immediate and engaged.
That's the key goal.
Yeah.
One of the UXs I really enjoy, I don't know if this is similar,
but whenever you're working on a document with Claude,
for example, like inside the
actual Claude web app or their, you know, Mac app or whatever, you can see it create
a separate, so you have your chat kind of going on and you have the separate document
being created with whatever it is.
Artifact.
Yeah.
This artifact.
Sure.
Okay.
Artifact.
I like that.
And you can see it literally deleting lines and adding lines back in. I like the process because I can kind of see the process of the work. Exactly.
Yeah, all that to say, that was my comment. What I wanted to say was this: now I'm thinking, like, maybe it is...
Maybe you're in the best position because you have
the ultimate,
great foundation, the ultimate control of the underlying code editor
for developers to collaborate on
because you have such precision, such speed,
such control over all the performance things
that developers truly care about.
I feel like you might be in the better position though
now to come out and you might be able to get some tailwind
back with that because, you know,
while Cursor is like the name brand of a code,
an AI code editor, I feel like maybe you're better positioned
because you have better control of the foundation
you're building on.
Yeah.
And again, it's like, I don't know,
if you would have told me that, you know,
people were forking VS code left and right
because they were so excited to have
direct control over an editor UI and that some of these people were being purportedly
valued at billions of dollars of valuation.
Yeah.
So basically, if you would have just stripped the kind of comparative analysis of all of
it, when I started and said that was going to happen, sign me up.
I love that future, right? So I don't know. I always zoom out whenever I'm in doubt. My
goal has always been to kind of build the next generation in tech, like really advance
the state of the art for the developer experience, period, full stop.
For me, for a long time... I mean, once, I tried it,
we created Electron, this Atom Shell, right?
And then we tried it, we gave it a good try.
And yeah, could we have done it better in web tech?
Probably.
But ultimately, like, we had to leave that piece of it
behind to deliver a truly exceptional experience.
Otherwise, we could never be better than the browser.
Right?
Like that put a ceiling on how good the experience can be.
And so for me, I was never gonna do it any other way.
And so anybody else's success,
it's not really relevant right now,
other than the degree that it indicates
that there's value where we're going, I guess.
So it's never really, it doesn't bother me, I guess.
I don't feel behind or ahead.
I feel right where we need to be
because we're doing what we're doing with integrity
and we're gonna do it as fast as we freaking can
with integrity.
And I think over time,
that's gonna prove to be the best approach,
but we'll all see.
Or you can see right now.
Anybody listening can see right now what they think.
Yeah.
How much are you thinking about brand new apps,
brand new developers?
A lot of these, I'm thinking V0, Bolt, even cursor.
Right.
We're taking our projects, we're opening up ZED
on an existing code base,
whether it's a Jekyll blog or an Elixir app
or it's Zed itself.
I'm sure you're pointing it at itself all the time.
And we're like editing,
and we're coding alongside this agent.
But a lot of people aren't,
they don't wanna do that.
They wanna like put their app idea into a thing
and get it built.
Right.
Are you thinking about them? I'm excited about them because I think that's, you know,
so many more software engineers being born
than I could have ever imagined before.
We might be for those people like maybe at step two,
if that makes sense, like when you're ready to go
just a little bit deeper.
We're not gonna be like, you know, Vim or something
where you have to really get in
and worry about a terminal, right?
Whereas our desktop app, the goal is to be like approachable
and accessible and friendly.
But it's also true that like I'm building this tool for me.
You know, not only me, like I don't want people
to think I'm totally selfish, but like I still matter.
And so it needs to solve problems that are relevant to me.
And ironically enough, the problems that are relevant to me are problems relevant in a
580 something thousand line Rust code base.
It's a very serious situation.
And so yeah, I would love to have the biggest tent possible, but I do think that like we're
intended for software engineering.
You can vibe code in Zed, I think, right? All you
have to do is just never click review. Never look at it. Rather not look at what you're doing here.
If that is working for you, then go for it. But my sense is that like, you know, and maybe this,
the ceiling or whatever, like the point at which this will be required will change over time as
models get better. But my sense from my own experience, and people can
share their own, is that at some point you need to understand how the software you're
relying on works. And that code is a really important tool for understanding how software
works. That's why we created these formal languages in order to express ideas, first and foremost.
Right? We can read the assembly code, like the machine doesn't care.
It's the human beings that need to understand how the system behaves.
And I guess the LLMs as well.
So I think, yeah, that's where I'm at.
Like we're for software engineers that just want to get more leverage.
And I welcome new ones.
So imagine a software engineer who's an intermediate,
new to intermediate, and they launch Zed
and they think to themselves, I want a web app.
I wanna write a web app that takes in some input
and does some stuff.
And so can I just launch Zed, like no open files,
file new, I don't know, new project or something.
And then, like, open the agentic coder and say,
can you build me a Rails app or, you know, like, or whatever.
That would be cool.
Yeah, I think in the build that you have today,
we sort of scope what the agent does,
like in the folder you're in.
So I guess maybe that's a rough edge.
So, but yeah, so create a folder, open it in Zed,
and then yes.
And then yes.
Well, how-
And maybe we can smooth that out.
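In practice that's about two commands, assuming you've installed Zed's command-line tool; the folder name here is just an example:

    mkdir my-new-app && cd my-new-app
    zed .    # open the empty folder in Zed, then ask the agent to scaffold the app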
How many tools do the agents have?
Like, can they call command line tools for me?
Can they call Rails new or something
and do all that kind of stuff?
Yeah.
Yeah, we execute, and we've experimented
with a couple different approaches,
but what we landed on now is we actually
have the agent, like, run in your shell.
The assumption being that, like, whatever shell you're using, you've probably configured with your Python virtual environment.
I don't even pretend to understand that world. That's well played.
I need to understand it better than I do. People on our team have improved it a lot in that, but like there's a lot of
language-specific, situation-specific stuff in your shell.
So we just tell the LLM, like, here's the user shell.
We set it up to run and kind of that set up so that, you know,
whatever you've configured is there for it.
And then inside that shell, it can kind of, yeah,
it can run commands, it can do things.
So in some sense, you can do whatever.
Then there's some dedicated tools.
But they're not that many, honestly.
And we'll be adding to it over time, I think.
But just the ability to do glob matching on paths, grep.
And we've done some work with Tree-sitter around grep to make sure that when we find a match,
we're giving the LLM a coherent piece of text that stops at reasonable syntactic boundaries,
et cetera, and then editing.
But then, of course, you can bring model context servers.
And so as part of this launch, we put a lot of work into just like making that easy.
So there's a couple of different ways.
The most universal way is just like in
Zed settings you can configure a model context server,
say where it's listening and it'll work.
For sort of a curated set that we hope to grow
and the community can help us grow.
There's also like in the Zed extension store,
just a section for model context servers,
basically context servers
that speak the model context protocol.
And that, we're not another registry,
I guess we are, we're kind of a meta registry
in the sense of like,
this is how you like install this model context server
and connect to it from Zed basically.
It's just like a little recipe basically
that pulls it from wherever.
And so that's like, and then, you know,
we put some work into making them easy to configure,
et cetera, like, okay, you've installed it.
Does it need off?
Does it need some other environment variables
to be set for it or whatever, streamlining that.
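To make the shape of that concrete, configuring one of these by hand is roughly a settings.json entry like the sketch below. The key names are approximate, and the server name, package, and environment variable are made up for illustration, so check the current Zed docs for the exact schema:

    {
      "context_servers": {
        "postgres-schema": {
          "command": {
            "path": "npx",
            "args": ["-y", "@example/postgres-context-server"],
            "env": { "DATABASE_URL": "postgres://localhost/mydb" }
          }
        }
      }
    }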
And so that's a way to kind of build your own tools
or bring in, you know, there's like a tool that will pull in
Postgres schemas and there's all, yeah.
That's all kinds of stuff.
There's all kinds of stuff.
There's a tool that will like puppeteer the web browser,
right, called Puppeteer, that like literally opens up
a browser and lets the LLM like navigate around.
We're landing image support so it can, like, take screenshots
and like, yeah.
So ultimately this should feel like, you know,
and we'll be getting there incrementally.
Like I really wanted to embrace extensibility
and make this, make Zed feel like this work bench
for experimenting and playing with different models,
different agents, different tools
and have them all coming together in this environment that feels really fluid and luxurious.
First class.
First class.
Yeah.
I like what Jared's original question was, which was like Rails new and stuff like that.
Cause that to me, I mean, having been through this,
it's not even out yet.
You've given Jared and I access for this conversation
version of this agentic flow.
Like it would be so cool to do exactly that and say, okay,
let's create a new Rails app today.
Here are the things that I want that are like common gems
or things you would install go and it does it.
How's this look?
Do you wanna make this a Git repo and start tracking it?
Kind of thing like that whole flow,
like I can see that happening here so easily.
And back to that original thing I mentioned before,
like an email workflow where I would normally
procrastinate over something.
I know Rails new is not that hard to do,
but new is always hard to do, right? You can sometimes, actually new is not that hard to do, but new is always hard to do, right?
You can sometimes, actually new is not that hard.
It's finishing this hard.
It's all hard.
It is hard.
Well, you pick the person,
there's various versions of this, this hard.
You get over that initial hump so fast
because once you have, they always say,
it's all about momentum, right?
If you want to go somewhere,
you got to generate the momentum to get there.
You can't be stagnant.
So motion creates emotion.
Let's create some motion.
Rails new is already there, we're done.
I've described it.
Now we're vibe coding, baby.
That's right.
Now we're vibe coding.
Now we're collaborating.
But I think in that case, it's more, what'd you call it?
Is it still vibe coding?
Well, vibe coding literally is,
do I look at the code or not?
I mean, that's, I think,
I'm trying to distill it down,
like what makes it different.
And that really is the distinguishing factor
is I don't look at the code, then I'm vibe coding.
But if I'm looking at the code,
Then let's call it collab coding then.
Then I'm collab coding.
Right here today, collab coding.
I mean, I do think there's, I'm vibe coding.
I've vibe coded the dotted outline
shader code.
So like we didn't have in GPUI a dotted outline, you know, dashed outline, you know, CSS would
just be like whatever border dashed.
But with Zed it's like shader code that we're running.
We didn't support that.
So I threw our, whatever,
600 lines of Metal shader code into a couple different thinking LLMs. And I just said,
help me with this. Like help me figure out how to make this outline dash. And I got it about
80% of the way there. And then there's this guy that is on our team, Michael, who has all this sort of computational geometry background.
He spent time working on Bezier curve stuff at Adobe, et cetera.
And I was like, okay, Michael,
I don't have time to understand
this freaking fragment shader code.
It's almost there.
And then he was able to kind of push it over the line.
And I'm sure I could have,
if I'd sat there for two days with it, honestly.
But, but yeah, but, you know, in this particular domain of writing,
like, signed distance field code, which, like, I wrote all the signed distance
field code, and maybe like the core, like the basic stuff, but this is a really
hard one, to do dashed outlines, like that would have taken me a week, you know,
and to get even 80% and see like,
does this look kind of good?
Is this even kind of close to right?
Even gave me the idea of like, okay, cool.
Let me hand this off to Michael, you know?
So I don't know.
I say it's situation dependent.
Yeah, I'm not gonna like carefully examine the code of the shader anyway, before I run it, see what it looks like.
So I was vibe coding.
Yeah.
Well, friends, I'm here with a good friend of mine, David Shue, the founder and CEO of Retool.
So David, I know so many developers who use Retool to solve problems, but I'm curious.
Help me to understand the specific user, the particular developer who is just
loving Retool. Who's your ideal user?
Yeah, so for us, the ideal user of Retool is someone whose goal first and foremost is
to either deliver value to the business or to be effective. Where we candidly have a
little bit less success is with people
that are extremely opinionated about their tools. If for example you're like, hey, I
need to go use WebAssembly and if I'm not using WebAssembly, I'm quitting my job, you're
probably not the best Retool user, honestly. However, if you're like, hey, I see problems
in the business and I want to have an impact and I want to solve those problems, Retool
is right up your alley. And the reason for that is Retool allows you to have an impact so quickly.
You could go from an idea, you go from a meeting like, hey, you know, this is an app
that we need, to literally having the app built in 30 minutes, which is super, super
impactful in the business. So I think that's the kind of partnership or that's the kind
of impact that we'd like to see with our customers.
You know, from my perspective, my thought is that, well, Retool is well known. Retool is somewhat even saturated. I know a lot of people who know Retool,
but you've said this before. What makes you think that Retool is not that well
known? Retool today is really quite well known amongst a certain crowd. Like I
think if you had to poll, like, engineers in San Francisco or engineers in Silicon
Valley, even, I think it'd probably get like a 50, 60, 70% recognition of Retool.
I think where you're less likely to have heard of Retool is if you're a random
developer at a random company in a random location like the Midwest, for example,
or like a developer in Argentina, for example, you're probably less likely.
And the reason is, I think we have a lot of really strong word of mouth from a lot
of Silicon Valley companies like the Brexes, Coinbases, DoorDashes, Stripes, etc. of the world. There's a lot of chat,
Airbnb is another customer, Nvidia is another customer. So there's a lot of chatter about
Retool in the Valley. But I think outside of the Valley, I think we're not as well known. And that's
one goal of ours to go change that. Well, friends, now you know what Retool is, you know who they are,
you're aware that Retool exists.
And if you're trying to solve problems for your company, you're in a meeting, as David
mentioned, and someone mentions something where a problem exists and you can easily
go and solve that problem in 30 minutes, an hour, or some margin of time that is basically
a nominal amount of time.
And you go and use Retool to solve that problem.
That's amazing.
Go to Retool.com and get started for free or book a demo.
It is too easy to use Retool and now you know.
So go and try it once again.
Retool.com.
I'm also sitting here now in my settings.json file, which is the settings file for the, well, it's actually the settings file for Zed.
All of that, yeah.
But I'm thinking, I'm thinking like, can I, I can pass this file as context to it and
just like make my settings better, right?
I know we need to integrate more of that.
Or how I can like make settings more better?
For example, I didn't have, this is cool, man.
This is so cool.
I didn't have a mode set.
I didn't have a light or dark theme.
I didn't have that in the settings.
And I know you've got docs on it.
I can just copy and paste it.
What's that?
Oh, that was my feature request a couple of years ago
that Nathan knocked out.
Remember that was to have light and dark modes.
Yes.
Keep going anyways, I'm just bragging
that that got in there.
Oh, what I did was I just asked it.
I just passed the settings.json file as context
to this freaking golden retriever.
And I said, how do I dark mode this thing?
Cause it's not dark mode.
Boom.
Now it's dark mode.
My settings.json file is dark mode, just like that.
And that should be baked into the product.
I mean, you should be able to have a meta level
conversation with Zed about how it's configured.
And it ain't there yet,
but like it's right there for the taking for sure.
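For reference, the kind of change the agent made there is a few lines in settings.json, something along these lines; the theme names are just examples, and a value of "system" instead of "dark" follows the OS appearance:

    {
      "theme": {
        "mode": "dark",
        "light": "One Light",
        "dark": "One Dark"
      }
    }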
That brings me to like Zed AI.
Didn't you guys launch a model or something?
Or there was some sort of a, yeah, tell me about it
because I didn't understand it totally.
Yeah, so I mean, Zed AI was like last fall
and that was basically like, okay,
we're doing a bunch of stuff with AI.
You know, and the two features that launched then
was this like very sort of hacker chic,
write optimized kind of low-level strap a text editor to an LLM
and interact with it directly, which like, you know,
some engineers at Anthropic freaking love that feature,
right, because they're thinking about things
in those terms.
Was that, and it was inline assist,
just the idea of I can highlight some code,
hit Control-Enter, and transform it
and generate some code in place.
And then all the infra of, hey, we're using this.
We want you to be able to use it.
And so giving everybody some kind of free compute
within Reason of, yeah, here it is.
You can use these tools.
Then I think it was February we launched Edit Prediction. Edit Prediction,
we built an open source model. Not really. We fine-tuned an existing open source model
and kept it open source, and open-sourced the data set, to just predict what the next edit
is, which, you know, that's a feature that's been out in Copilot form or whatever for years now, but we needed it in Zed.
And some, I mean, we put some real love into how that feature was integrated.
It has this thing called subtle mode, which I really like, because one of my
problems with kind of that eager predictive editing is when I'm in flow
and it's suggesting something to me, it's sometimes annoying.
It's just distracting me from my flow,
depending on my mood, but often it's how I'm feeling.
But often I do, I might want it.
And so what we'll do in subtle mode,
which is not the default, is you can kind of just say,
I only want to see that a suggestion is available.
And then when I hold alt down, we preview the suggestion, and then I could hit tab to complete it. And then we always are in that mode when there's like a language server completion, right? Because like tab in the old days, right? When you only had completions from the language server, we need to be used to complete the LSP completion. And there's sort of things that I don't know, we're in this transitional period, right? Like maybe the ideal is this perfect unity of these two things.
But like at the moment, there's sort of this algorithmic source of suggestions, and then
there's the more model-based, like more creative version of it.
Right.
How do we multiplex these two things?
And so our decision was, when we're showing you language server completions, and there's one from the LLM, we'll show you both,
but we'll only show you that a completion from the LLM exists.
And then when you hold Alt,
we hide the completions from the language server.
So you can actually see what the hell
the thing the LLM is suggesting.
Otherwise you have the completions
like overlapping it or whatever.
And then you can tab in,
or if you let go of alt,
it pops back to how you had it,
where you have language server completions.
So it's just like, I don't know,
fairly standard feature at this point, I think,
just predicting the next edit.
But I think, again, even that has room
for like craftsmanship around it.
And then now this is the agentic editing.
So it's been an evolution, but yeah,
Zed AI is still a thing.
Got you.
Yeah, so the edit predictions feature,
you actually lost me, I turned it off,
not necessarily because it was bad,
but because I, like you said,
it was just like too in my face
and there was times where I wasn't wanting it,
I was just like pausing or something or you know.
And I was just like, I'm too old school and controlling,
like just don't, just get out of my face.
I actually had turned it off.
And then when I saw that subtle mode came out,
I switched to that.
And that's a pretty good balance I would say.
That you've struck there.
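For anyone who wants to try the same thing, subtle mode is a settings.json toggle along these lines; the exact key names may differ by version, so treat this as a sketch and check the Zed docs:

    {
      "edit_predictions": {
        "mode": "subtle"
      }
    }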
So we were able to kind of keep you in the game
and maybe you don't use them as much as some developers.
I kind of came back into the game to a certain degree,
you know, but yeah exactly.
And I do think like on all of this, there's, we need to refine it.
I want edit predictions to predict better edits and be better.
So you know, like we're collecting, you can opt in to share your data if you're editing
an open source project with edit prediction.
We don't, we won't take your data from that because it's such
a vacuum cleaner if you're not, you know? And we're collecting all this data. We're going to make it
open. We haven't had time yet to really get on that, but that's kind of like a big initiative
for us in terms of what's next is just like use the data that people have, again, opted in to
sharing with us to get better. That's
a big part of this as well. Become more extensible. Like, I would rather have people that want to go
fork VS Code, just, like, integrate with a nice API in Zed that lets them achieve the thing they're
trying to achieve. It's like that's a big part of it. All of the dreams about CRDB,
which is the eventually consistent
keystroke level database,
I think are still screamingly relevant
to the things that I wanna do.
Interacting with multiple agents in parallel,
like, okay, well, how are you coordinating change
among all those things?
Like, yeah, solving kind of getting an agent
running in a container in the cloud,
making that a really clean experience.
Also just like, yeah, I was so fired up about it.
I wanted to increase the scope for our launch, but I, you know, Conrad talked
reason into me by basically pairing on it with me for long enough to say,
like, don't blow out scope.
But just the study of taking a multi-buffer and like transforming it, like batch transformations,
simple things of that nature, like,
how do we get a little bit of intelligence a lot of places?
And yeah, but just also it doesn't have to be like bells
and whistles, it can just be dialing in,
making it more capable, making it make better decisions
more of the time.
Yeah.
Yeah.
So previously the business model was teams.
Is it still teams?
Is it agents?
Like how are you shifting now?
Cause I mean, the landscape has moved quite a bit
since then.
Yeah, I mean, my...
I mean, the mission of that is to build the future of software development,
as I mentioned.
And the premise is that if you really do actually do that,
build the future of software development,
you'll have an opportunity to, in a non-annoying or coercive way,
sell developers services that integrate well with the fact that you've done,
you've built this incredible development environment
where they're like, hey, I'll happily pay for that.
One of the services we could sell developers
is AI related services, which is largely compute.
But I think over time, there'll be more indexing,
more things we're doing on the back end.
We're running the open source model for you, for edit predictions, right?
Like that costs money to run.
So I think, sure.
And we are actually going to be taking our, regardless of what our long-term plans are,
we're going to be taking revenue for the first time,
really real revenue with this launch,
because it's expensive to offer all this AI stuff.
So it's like, it's kind of like we have to charge for it.
But again, if you want to go bring your own API keys
or use some other thing,
or literally take our code and fork it
and do whatever you want,
like I'm not going to try to be sharp elbowed about that.
If that makes sense.
Like I want you to have control over your stack,
but I also want it to be convenient for you.
If you don't want to go mess with all that and you want kind of a stock experience,
like just let me do what I do with another of these AI editors,
put down a credit card, it's 20 bucks a month.
The goal is to not, we're not trying to be cheaper or more expensive than anybody else,
just offer a reasonable service at a reasonable price.
But it's not like the entire premise
of all of Zed's business, no.
The entire business, the premise of Zed's business
is still like selling teams and individual services
that integrate with their dev flow.
I was going to say, this is like the first thing you've
been able to, I assume, been able to charge for.
Like you're going to launch this.
You're not going to make it free, right?
Like people will pay for this.
Yeah, there's going to be a trial.
But we're now at a point where, yeah, we're going to start
charging a subscription for it.
And again, it's an open source editor. So like you have a lot of freedom, but I think it's cool
to offer a service. This is the first kind of test of that, right? It's one test of that. We
can offer you a service. We're not going to like use our control of the platform to be like super
sharp elbowed about it, but like here it is. And, you know, our goal is to make it a great service, you know,
one that feels worth paying for.
So I'm excited that you get to make some money.
I mean, do you want to make me too?
I feel like, you know, you really do.
I really shouldn't make any money.
You shouldn't even intend to be profitable.
I want to make like this.
Yeah, because the only way any of this.
No, no, I want to make money.
It was a joke. It was a joke.
Yeah, I mean, but it's I'm not doing it for the money.
I like it again.
I've never done any of this.
It's never been about making money.
It's been about, you know, building the best code editor.
I don't know. It's like it's an end in itself.
And so I mean, like, you know, you're a business,
you know, you took investment.
So you kind of do have to make some money at some point.
Exactly. Right.
And like that is in reality.
You can't just go
off into some ethereal realm and like build software and divorce
from the realities of scarcity in the world. Right. Like the
sign that we've built something valuable is that people will be
willing to pay for it. Right. Ultimately. But I'm very
patient, though. I've been working on this for a long time.
But yeah, all that said, I'm excited to take some real revenue, even though the margins
won't be that great because like AI is expensive.
I mean, it's compute intensive.
So the feature you just gave us for free will someday, very soon, be paid.
The first taste is always free, man.
The first taste is always free. You know, I wonder though, do you mind riffing on how you'll pay, how you'll charge for the
AI features?
Do you mind riffing on that?
Is that?
No, no, no.
Yeah, yeah.
Yeah.
What I'm thinking is that I'm getting kind of personally fatigued with the places I can
pay for AI additives.
So basically you take your existing tool set
and you say, okay, now they all have some version
of an AI additive, it costs more.
Which I'm fine with because it adds value, I get that.
But what if I'm not a daily developer
where if I'm charging monthly for this thing,
what if I only use it every once in a while,
like every couple months or something like that?
Is there a metered charge for it or something like that?
Does it have to be a monthly charge?
We're gonna have a free tier, I think,
so that if you're not using it a ton, that should be fine.
I think, it's not great.
I don't remember exactly the details of exact numbers,
but the goal is to let people experience it
and not lose our shirts in the
process. That makes sense in terms of, you know, to do it
profitably, basically. But I think, you know, so if you're an
intensive user, then you'll probably have to pay. And if
you're an occasional user, hopefully you won't. And the
goal is to give people as much value as we can,
you know, over time, but like the stuff is.
I'm not trying to be cheap in there. I'm just trying to, like, think it through, cause I just feel a little fatigued on, like, where subscriptions live in my life.
We just had a podcast yesterday. I feel like this is like an extension of that,
is that like kind of everywhere I go,
I'm renting something or there's a service or an addition.
And I get it, those things are adding me value.
I've opted into it.
So I'm not like this freeloader trying to be a freeloader,
but at some point I've run out of cash
or resources to spend places.
And I gotta choose if I wanna eat tonight
or code up my real app with my collaborative code version
of Zed.
Right, yeah, so my answer for Zed, I mean, again, it's like,
if that's really an issue for you
and it matters more than convenience,
then go grab an API key or go fire up Ollama.
Okay.
That's interesting.
I don't know about like,
if the tool calling is gonna be as good
with the Ollama models and stuff.
Like, again, that's all going to improve over time.
I think the official models, the more
frontier models that you pay for by the sip,
it's going to work better with right now.
Are you saying that if I have Ollama on the same dev system I'm working on,
I have a model running, I can API call from Zed to Ollama and let it rip?
You should see it in there even right now.
I don't even, I don't have Ollama on this machine.
I have it on a different machine.
So I have it.
Yeah, I use it.
I have a dedicated AI machine basically that.
So Zed should just like pick it up in my experience.
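For the record, the bring-your-own-model path Nathan is describing is also a settings.json affair. Zed looks for a local Ollama server by default, and pointing it at another machine is roughly a snippet like this, with the host address made up for illustration and the exact key names subject to the current docs:

    {
      "language_models": {
        "ollama": {
          "api_url": "http://192.168.1.50:11434"
        }
      }
    }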
The challenge with Ollama is Ollama
just seems to be falling behind a little bit, you know?
And I'm already frustrated with the status quo
or with the frontier that like the slightly less frontier today just bothers me.
I think maybe five, 10 years from now that'll go away.
Perhaps or they'll just keep frontiering.
I'm not sure where this plateaus, you know.
Yeah, I mean, I'm just, the main message is like,
I'm not here to shake down developers
for a subscription on AI.
I'm here to make it convenient for them to access,
give them a lot of control.
And like, in order to be convenient, we got to charge for it because it's just not a,
you know, well, if not, you're right.
Compute budget, right?
If not, you're footing the bill, right?
Like I know on slash AI, the AI page that, uh, that you mentioned, I don't know if that's
accurate to this conversation.
I think it is, but it's probably all gonna change when we launch, honestly,
and we should have gotten it to you, but yeah.
That's fine.
Well, is Anthropic still part of your,
I don't know if it's a partnership,
but it says Zed Industries and Anthropic
in the A new hosted service section.
Is any of that?
Yeah, I think we'll definitely have Anthropic.
We may have other providers as well, like at launch or soon thereafter.
I've heard good things about Gemini, so we're putting some effort there.
So our hope is to have the support for that.
But yeah, we're offering that via Anthropic right now.
I've had really great experience with their model, but I still think I have more to experiment with
with Google's as well from what I've heard.
Is there ever a point where getting into that game
becomes worth it for you?
Oh, model development?
Yeah, because now your margins go up, right?
You can fine tune and customize and make it maybe better.
I love fine-tuning.
I love, I don't know, for me, who knows?
I mean, I love technology and I love a challenge and who knows what the future will bring.
But what I will say is like, seems like a pretty saturated area of the market.
People competing in this model creation space.
Yeah. Seems very capital intensive.
Overall, I'd rather just focus on being a great tool,
like focus more on the developer and putting the developer
in a great connection to whoever has the best model,
which may vary opinion-wise from developer
to developer even, right?
Yes.
So that's kind of where my head's at.
I guess, you know, we'll see as time goes on
what makes sense.
I think that's a fair assessment today.
I think that as these get commoditized,
that price just goes down and it gets to a point
where maybe it becomes feasible for a smaller org
like yourself to actually take one on
and then have ownership of that aspect of it.
And be able to- I mean, one thing about me is like,
I like to take ownership, right?
Like at a certain level, like that's what Zed's all about.
It's like, we're not gonna build on a browser.
We're gonna take ownership of the underlying foundation
because if we don't, it won't be a great experience.
And so to the degree that we need to take ownership
to deliver a great experience, you know,
we're gonna keep doing that. It's just experience, you know, we're going to keep doing that.
It's just like, you know, as a growing team, but still small,
you know, we're trying to pick our battles of like, what is our highest leverage
right now to deliver a great experience?
And so far, it's more about like leveraging our CRDT based text editor
primitives and leveraging our control of the graphics stack
and multi-buffers, blah, blah, blah.
Like, let's cover that side of it
and make that freaking fantastic.
Right.
And then if we have to keep going, then we will.
Well said. Adam, anything left on the table that you wanna pick up
and throw at Nathan before we...
I think, would you mind revisiting
the business model a little bit? I think we kind of been there a little bit,
but I think one thing you said before is our focus, and you,
you kind of
stammered temporarily, like, what it is in terms of, like, enterprise and teams, is it? And I don't mean that negatively,
I really don't, because I know that you're moving so fast and you can only hold so many things in
context. Right. Right. You know, you are a human after all.
Thank you.
Gotta love the grace there.
That's generous. You're saying I'm not a golden retriever,
but my hair is getting long,
but how confident do you feel in the business model you're trying to get to?
Like, do you feel that is truly still the business model? Where you're, you're, you're doing all the
things you're doing because you have a passion for creating the next revolutionary thing
for developers, this editor, all that good stuff. But do you feel like enterprises
and teams is the way you get to where you need to be as a business?
I mean, I can't guarantee I have no way of perfectly predicting the future,
but I still feel really strongly about that.
That it's this, of course includes AI, right?
And ultimately like we're still focusing
on building a great experience, mostly for individuals
and also individuals interacting with AI.
We haven't really invested heavily
into the bigger vision in a while.
A lot is there, which we use every day. But I envision this kind of vertically integrated,
multi-human, multi-agent collaborative environment, where you don't need to leave the editor to
do a lot of the types of things you do when
you're interacting with your team.
That conversations, whether they're with humans or AI about code should be happening in the
place where you're writing your code or watching the assistant write your code, one way or
another.
There's opportunity for this vertical integration of a really great software
dev experience. And yeah, I still want to kind of book, you know, like professional teams and
also companies, like, sell that experience to them. I guess I'm stammering
a bit because it's like, it's still a vision.
We're in the motion.
I mean, and again, I didn't really mean to use that word
in a pejorative manner, but I kind of did.
Oh, it's fine.
I mean, I didn't take it that way.
I didn't mean any offense.
I just like talking out loud about this,
because I feel like I'm not steeped in Zed like you are, I'm not thinking about your vision
on the daily, you know, and you are.
And I'm just thinking, this feature that you've just,
that you're about to release, that you have released as of this podcast's release,
is the cracked-open door
into the enterprises, into the teams,
because those teams and those enterprises for sure
are being told, allowed to, or maybe even demanded to use AI
in ways to propel the business forward.
And thus far you haven't had a great solution
like it is now, it's amazing.
I feel like you're there. Like this is the
Thank you.
the open door for you. It's a great editor, but it was missing some modern components.
Yeah, exactly. And you know, the funny thing is that's always been true, right? It's always been a great editor but missing a few things, you know, since it was a little baby and I couldn't even edit Zed itself in it, and then I edited some markdown in it because it didn't have syntax highlighting.
But I remember the first time I edited actual freaking Rust code in it,
and then I remember when I got some diagnostic showing up.
It's all been an evolution of filling in this pie.
The pie has gotten bigger really fast
with this new tech.
But it feels like a real opportunity.
But I'm glad you feel that way.
I think I'm very hopeful for you.
I think, especially as I took a moment
to play with it during the call, which I'm glad you didn't mind
me being silent for a moment or two just to get sort of enamored
by this revolutionary device that you've
just given me to play with.
So cool.
I mean, I'm really excited for this release for you
and this next step,
because I feel like it's the unlock.
It's the unlock for you.
Yeah, I have a sense that that could be true.
We'll see what happens.
All right.
I'm excited for it too.
Great note to end on.
Where should folks go?
If they're hearing this podcast,
they listen the whole way and they're like,
okay, finally, where in the world do I go to learn more?
What is the best URL you can give us?
Well, zed.dev is the simplest one
and there'll be a banner right at the top
pointing to agentic.
Okay.
Our audience is smart.
At this point, if they're still listening,
they know how to get there.
They'll find it.
They're already there?
Oh yeah, they're all over it.
You think so, Jared?
I think so.
Zed.dev slash agentic.
He just picked a URL live on the show.
I did.
I did.
And I hope I don't have to make them change it,
but it's a simple enough URL.
Yeah, zed.dev slash agentic.
And if you wanna see our prompts,
it's zed.dev slash leaked-prompts.
Oh man.
Zed.dev slash leaked-prompts.
I'm gonna put that in changelog news as if it's.
Oh my gosh.
Can you believe this?
Zed, leaked-prompts, you know?
Let's do it.
It's gonna be huge.
Do it.
I gotta do it before this show goes out,
otherwise, I'll be able to get the joke.
I really think, yeah, it's gonna... it's gonna... make sure the prompts are there.
When are the prompts gonna be there, Nathan? They'll be there by May 7th. Okay, perfect. They better be there. All right, man.
That's cool. I like the live make-up-a-URL, and the reaction when I asked you where to go.
Priceless, just priceless. Let's not edit that out, Jason, leave that in there. That's good. That's good. Let's leave it in there.
It just shows you, I think it just shows you the raw nature of what it takes to innovate.
I mean, you can't have all the questions answered. You know, and you're not an idiot for not having that answer. I mean, there's nothing wrong with you for not having that plan. You're moving at the speed of innovation.
I mean, like, we can't expect you to have it all together.
And that's why we're here.
We're here to help you.
But yeah, I don't know.
It's a very nonlinear experience with me.
I spike on certain dimensions of this vector more than others.
And being like always prepped with the right URL before the podcast or whatever
isn't always my strong suit.
What I lack in any other dimension I have in like,
I really care.
I'll put it that way.
I really care about delivering a good experience.
So I hope people have a good one
and we're gonna keep making it better.
Well said.
Zed is not dead, alive forever.
Now with AI,
zed.dev slash agentic and also slash leaked-prompts.
Have fun.
Hell yeah.
Have fun.
Have fun.
Thanks, Nathan.
It's been awesome.
Always be catching up with you.
Anything else?
Anything left unsaid that we haven't asked you that we can make sure we include before
we close out?
Probably, but I don't remember it.
So it's okay.
We'll get you next time.
Just keep it in the flow.
Good deal.
Stay cool, man.
Thank you. Thank you.
Thank you.
Thanks, guys.
Well, I don't know about you, but I'm excited because live on the air during this podcast,
I fixed some stuff I don't even know I can fix.
I know I had issues in my YAML file for my Jekyll blog. But hey, Zed's agentic agent helped me out
step by step, committing code, changing branches, stashing code, all the things.
It was fun and it did it good. And I was excited. I merged that code when I was all done.
And I am just so happy I could do it while podcasting. That's how cool it is.
But the URL to go to is zed.dev slash agentic.
Check it out. Try it out and let us know if you're impressed. A big thank you to our friends over at
Depot, depot.dev. Of course, our friends at Heroku, heroku.com slash changelog podcast,
the new Heroku. So exciting. And of course, our friends over at Retool, Retool.com, and to the mysterious Breakmaster Cylinder, those beats are banging.
We thank you for those beats, Breakmaster.
Those beats are the best beats in the biz.
Thank you so much.
That's it.
This show's done.
We'll see you on Friday. Thanks for watching!