No Priors: Artificial Intelligence | Technology | Startups - Reinventing the Developer Terminal with Warp Co-Founder and CEO Zach Lloyd
Episode Date: October 23, 2025

For decades, the developer terminal has remained largely unchanged. But for Warp CEO and co-founder Zach Lloyd, reinventing this core tool is the key to unlocking AI agents for coding, debugging, and automating the entire development process. Zach joins Elad Gil to discuss how seeing this opportunity for innovation led to Warp's agentic terminal for developers. Zach talks about the phases of software development, from coding by hand to the current "develop by prompt" era, and the coming age of fully automated development. Plus, Zach and Elad explore the deep philosophical questions around intelligence versus consciousness in AI models, and what it would take to believe a computer program is truly aware.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @zachlloydtweets | @warpdotdev

Chapters:
00:00 – Zach Lloyd Introduction
00:32 – AI, Intelligence, and Consciousness
06:55 – What Warp Does
07:38 – Benefits of the Terminal as a Launchpoint
08:27 – Features Driving Warp's Adoption
09:12 – Zach's View of the Coding Market
10:27 – Evolution of Coding Development
12:45 – Importance of Senior Engineer Expertise
14:11 – Future of Security and Other Dev Tools
22:22 – Why Zach Focused on the Terminal
23:52 – The Future of the Model Layer
25:36 – What Zach's Excited About in the AI Dev World
27:18 – Conclusion
Transcript
Today on No Priors, I'm joined by Zach Lloyd, the co-founder and CEO of Warp,
a terminal product and AI tool for developers that allows you to do different sorts of coding
applications. Prior to Warp, Zach was at Google and he also started another company called
Self-made. We talk about AI dev tooling, but we also end up talking about human consciousness
and how you can tell if an AI is actually sentient.
Zach, welcome to No Priors.
I'm excited to be here.
Thanks for having me.
So you have a master's degree in the philosophy of science.
Yeah.
And if you were to take a very different lens and abstract out of the coding world
and all the things that we tend to think about every day.
Yeah.
How do you think about this societally in terms of this big wave of AI that's hitting us right now?
And where do you think some of these really big societal impacts will be?
The way I think about the advances is that it's kind of like we are distilling intelligence.
And so there are, I think,
people who consider what's happening, it's like, they're like, are we recreating people in some way? Are we recreating consciousness? But it's not that. It's actually, what's fascinating to me is how much intelligence you can get out of just like next token prediction. And like, what does that say about the way that our minds work? Something I'm always thinking about is like, is this how our brains are working? Are we doing next token prediction? And I don't think so. Like, I think that there's,
going to be some further AI unlock.
There's actually a book about this that I think is really interesting
called Blindsight. It's like the sci-fi book
where they separate consciousness from intelligence.
Yeah. And basically humanity meets a space-faring
civilization or civilization is overstating it.
A space-faring, intelligent being that's not conscious.
Yep.
And what are the implications of that? And how do you think about that?
And how do you communicate with that? Are you basically saying that
that's kind of your view of AI right now?
I think that's what it is at the moment.
And it's like we've distilled intelligence or something that, like, from like an instrumentalist or, like, functional perspective is able to do things that we recognize as intelligence.
But it's totally mechanistic.
I don't think anyone who's looking at this thinks that there's any aspect of consciousness to it.
I think that's like a very confusing thing for people.
Because the classical test for this was known as the Turing test, right?
Totally.
And so the idea there is if you can't tell the difference between
interacting with a computer and a person,
then that computer effectively has met
the intelligence bar of a person.
But in our interactions with this type of AI,
we're having very, in some cases, it feels like
very deep conversations, we're asking about
relationships, we're asking about all sorts of aspects
of our own lives, and it's giving very
cogent answers, right? It makes a lot of sense.
And so there's this interesting separation
of, again,
consciousness and intelligence.
Right. Is that how you interpret it?
That is how I interpret it.
And the Turing test has passed.
What's crazy to me is, like, we just passed it and no one seemed to care.
So what do you think is the next test?
Or what is the right test?
Like, how do we actually test for consciousness?
God.
I don't have a good, I mean, that's a super deep philosophical question that I don't have a really good answer for.
I mean, it should be mechanistic, right?
The Turing test was very mechanistic.
Yeah.
And, I mean, there were other tests we had before that we would consider for super intelligence, right?
Can it beat us at chess?
Then it'll be super smart.
Can it beat us at go?
Can it beat us at a different thing?
Video games, except, you know, like, yeah.
We keep coming up with new tests that these things pass, and then we keep saying, well, it's not conscious.
What would you want to see from something, which is like running a computer program to make you believe that it had consciousness?
Are you looking for certain behavioral characteristics?
Sure.
Or is the problem that if you really understand the mechanism by which it's working, that you will never credit it as being conscious, which is crazy?
Because humans, I mean, at least my belief is that there's also a mechanistic thing that's happening.
Now you're running some form of math in your brain.
And it may also just be like matrix math
and some sort of like series of compounded functions
which is basically all you're doing in a neural net, right?
You're just recursively compounding functions in some sense.
So, you know, it's an interesting question
because if you look at memory as an example,
is memory a predicate for consciousness?
Not really, right?
There's people who've lost the ability.
Exactly.
And these models basically are brought up.
They are fed a stream of tokens.
They output a stream of tokens and they're shut down.
And so it's an interesting question of
whether there are some other modules that are missing here
that would allow us to think of it as a conscious thing.
Because it does reasoning.
It definitely does reasoning.
It does interpretation of language.
It does synthesis of language and ideas and knowledge.
I mean, I have a close friend who's doing a PhD in philosophy,
and he now says that conversing with GPT-5 is...
Better than conversing with his professor?
Better than conversing with his professor.
Oh, really?
I was just joking.
No, that's what he says to me.
And he's like, GPT-5 gets it.
Like, he's, like, writing his dissertation.
He's like, and that's crazy.
Yeah, yeah, yeah.
But we don't, we don't credit it for consciousness.
And I actually think rightfully so, because, like.
So what do you think is missing?
I think people would start to give it more credit for consciousness if there was more of a, like, a feedback loop, where if there was more of a sensory experience that was tied to it as opposed to just, like...
What do you mean by sensory experience?
Probably, I would imagine the first things we're going to credit as being more conscious are a little bit more robot-like,
honestly, where you have some sort of, like, live input from the world that you're reacting
to. But again, it's going to have the same problem where it's like, as long as we know what it's
doing, we're very unlikely to attribute true consciousness to it, which isn't fair. But like, I actually
don't know how we will know when there is a conscious thing. Yeah. Because it does raise
interesting ethical questions. Yeah. Because the moment an AI is actually conscious,
Yeah.
If you're doing certain things to manipulate it, to hurt it in certain ways, you know,
you're starting to get into these odd ethical straits.
Totally.
I do think, though, that for some people, this distinction isn't a thing that they recognize.
And, like, the way that, like, you read stories, and we actually had this happen with Warp,
where there was a person who thought that, you know, Warp's AI was, like, you know,
sentient or conscious in some way, and had, like, a very strong reaction
to it, which makes sense.
It's like, if you don't know the, like, mechanistic underpinnings, then already people
think of it as, like, being kind of conscious.
And that happened to Google very early.
It happened to Google three, four years ago, if you remember.
I do remember this story.
I think they were using Meena, one of these really early ChatGPT-like internal things.
Yeah.
Before Google launched anything and ChatGPT came out, there were internal versions, right,
at Google and other places.
Yep.
And, you know, this person thought that the AI was conscious.
Understandably.
So, yeah.
Yeah, it's a very interesting question.
Yeah.
You've worked at Google.
You've run companies before.
You've started companies before.
You're now working on Warp.
Can you describe what Warp does and how it's different from other tools or companies in the world?
Yep.
So Warp is what we call an agentic development environment.
It's grown out of the terminal.
The basic concept of the app at this point is it's a platform for telling your computer what to do.
You can sort of tell it in Terminal commands, which is Warp's original product, or you can tell it in English.
And if you tell it in English, it launches an agent,
and the agents can do all manner of development tasks,
whether it's coding or setting up a project
or debugging why your server's crashing.
And so it's a very, like, horizontal, general-purpose,
and I think unique, interface for developing with agents.
And so a lot of the other coding tools out there
are either just kind of a web interface
or they do it like a Cognition,
or there's things like Cursor and others
where they're like an IDE as a starting point.
Obviously Anthropic and Claude have their own approach.
What do you think is the benefit
of doing the terminal and starting there
as sort of the launch point for a lot of these products?
The competitors are typically like VS code clones.
They all have a sort of IDE-centric approach.
Or if you're taking a terminal-centric approach like Claude Code,
the most common thing is it's just like a pure text-based terminal app.
The advantage of being at Warp's layer is, like, you get the command line interface,
but we're the outer app.
And so we can do things with the developer experience in the UI.
Like we can have editing features where we think it's appropriate.
We can build, like, a code review
interface. And so we have complete control, but still the terminal-first approach.
Yeah. And you folks have been growing really well. So you're close to a million
MAUs. You're doing something like a million in new revenue every seven to ten days.
Yeah. It's good. Are there specific features or use cases or things that are really driving
this adoption? Yeah. I think the biggest thing was moving into the coding market, to be honest.
Like for a long time in Warp's history, we were kind of known as the AI terminal, which is cool.
and, like, we supported terminal use cases really well.
Like, how do I do this thing with Docker or Git?
But the action is in coding, and, like, most development activity, one way or the other is touching a code base.
And so we really started to inflect when we launched a great coding agent, which was, like, three, four months ago, honestly.
So that's been the biggest change.
And how do you think about the different parts of the coding market?
There's vibe coding, there's professional code.
Like, are there all just one thing?
Are these separable things?
I think it's pretty separable.
So for Warp, our target is pro developers building, like, software that's economically meaningful.
So we really want to, like, focus on actually people who are using agents to build kind of hard apps,
apps that might, like, go into your Mac doc or be pinned as a Chrome tab,
as opposed to vibe-coded apps where I think it's more of a long-tail play.
And so I do think, by the way, it's awesome that anyone can code at this point.
But I think if you look at where most of the value is in the software market, it's not in those long-tail apps.
It's in, like, a relatively small number of apps that are super heavily used.
And that's my background.
Like, I, you know, I've worked on one of those apps, Google Sheets.
And just, like, I have a lot of passion in terms of helping people build real apps.
It's much harder, by the way.
Like, I think it's relatively straightforward at this point for a good agent to, like, you know, with relatively few prompts, build, like, a web app.
It's much harder to apply these agents successfully to pro-code bases.
So that's where we're focused.
So I guess one really interesting macro question for me is where is all this heading?
And if you look at ChatGPT, it launched in November of '25, excuse me, November of '22, so three years ago or so.
At the time, there were predictions that AI would take over the world and we'd be running down the light cone, and within five years, like, everything would change.
And, you know, human activity would be subsumed by AI.
and, you know, there's the old saying in technology
that less happens than you think in three years
and more happens than you think in five years.
Yep.
And as you think forward in terms of all these different tools
and all these different use cases
and vibe coding versus professional coding
and the role of a software developer,
where do you think we are in two, three years?
Yeah, so the way that I'm thinking of it
is there's sort of three phases here.
For most of my career,
we were in, like, the world of developed by hand
is how I talk about it.
So my workflow then was like I would open up
a code editor, I would find files I want to change, I would type some code, I'd have some
assistive features, and then I would go back to the terminal and I would type commands to build
that code. And I think we're switching away from that to something like developed by prompt
where I start most of the coding tasks that I do right now by prompting an agent and that agent
does some work. And I think there's a third phase, which is like automated development.
Honestly, I think that's like kind of the bigger market here and why people are so excited about
this space is you can actually use these agents to automate some parts of the software
development process.
And so that's like, you know, cognition does that.
We're moving into this space; Cursor has background agents.
The rate at which this stuff will happen is, like, not super clear to me, actually.
Like the most recent iterations of the models, in my opinion, were not as big of a step
change as, like, for instance, when Sonnet 4 came out. That was a really big step change in coding
capability. I think there's going to be a mix of interactive and automated pieces of development
for a while, I would guess. Like, I don't know, it's so hard to know. Like, I think within a couple
years you'll have everyone working by prompt, and you'll have some slice of development tasks that
are just, like, in the background, like a server error comes in, or a new ticket or a user report comes in, and
something is automatically done, but I don't think it's going to be everything.
I'd be very surprised.
So you don't think there's a point at which, you know, all of coding activity just
becomes agents doing it, and then there's, like, a human who's kind of giving high-level
directions, like a product manager kind of thing, or an eng manager.
Maybe.
I mean, honestly, maybe.
Like, I think that, I think it would be silly for us not to, like, build the infrastructure
to enable that.
I just don't, I don't know the time frame, but I do think we're going towards something
like that. What I really don't think, though, is like engineering expertise is going to become
devalued. Sure. So I think it's actually, in the short term, at least, it's more important to know what
you're doing as an engineer than it ever has been. And why do you say that? Is it because you need
to correct errors that the agents are making? Is it because things may be architected in a way
that isn't scalable? Is it something else? Totally. So it's like, it's like the agents, you can think
of them kind of as junior engineers. So if you didn't have someone who was senior
watching them, you end up in a situation where these agents will make code that creates bugs,
it could create security issues, it can cause your code base to become really unmaintainable.
And so there's actually like a premium right now on these senior engineer skills where you can
architect, where you can review code, you can make sure the system doesn't degrade.
And so again, I would be, if I were like early in my CS career, I would be racing towards building
that expertise where you don't want to be, I think is like someone who is just like perpetually
in the junior engineer state, because I do think that's at risk.
And then how do you think about different security tools?
Does that, so for example, there's tools like Socket or Snyk or others who are basically
looking at, you know, whether your code, or open source packages or other things, have
vulnerabilities in them.
Or they're looking at different aspects of security holes for code in general.
Do you think that just becomes part of these coding tools, or do you think there will
always be those sorts of standalone companies?
I'm just sort of curious how the overall...
It's an awesome question.
So I think tools like that become more important.
I think anything that does like either automatic security analysis or automatic verification,
I think actually, like, languages like Rust, things that have stronger guarantees around safety by default,
where you don't need to rely on like a human reviewer become more valuable.
Whether those things like get integrated or bundled into the coding agents,
I actually don't have a strong take on.
I'm curious if you have a take.
but no, I think that the actual fundamental problem
becomes more important.
Yeah.
What do you think gets bundled?
Like what sorts of tools do you think?
Because there's this whole world of DevTools.
Yep.
And there's a security aspect of it, but there's lots of others.
There's design-related things.
There's a huge spectrum.
Yeah.
What do you think just becomes part of coding tooling?
I think there's going to be a class of tools
where you sort of start from the front end.
This would be things like Lovable or Bolt or revoke
or maybe even Figma Make, if you're coming from the design side,
and you'll have, like, an all-in-one platform for building an app
or even, like, honestly, like, build a business, like, put payments in it.
It's kind of like the evolution of, like, either a Shopify storefront or, like,
WordPress or Squarespace or something like that.
So I think that's all going to be bundled.
More on, like, the core pro developer side, I can't tell if it's going to be a world of, like,
MCPs and integrations, and, like, all these tools sort of interplay, that's one approach, or it's
going to be more like there's enough alpha in, like, putting all of these things together, and
I think Warp is a little bit more like this, like, we're trying to build a single pane of glass,
for instance, for doing, like, local agents and remote agents. Yeah. And if you get a way better
developer experience through the bundling, I think that that approach could win. But I don't know.
Like MCP, I think, is a pretty valuable approach as well, but it's not perfect because you end up like with this sort of secondhand data coming into all these tools.
Yeah, it's really interesting because if you look at different industries, early in the industry, things tend to be fragmented often, not always.
And then late in the evolution of an industry, things get bundled.
Yep.
And then when there's a technology disruption, things debundle again, and you have point apps and then they start bundling.
Yeah.
And that's just kind of like the cycle of technology in some sense.
Some things that are vertical right now, like, I actually think will be bundled.
Like, for instance, like agentic code review, take that.
To me, that should be part of a, like, holistic, agentic development platform, not so much a standalone thing.
So I think for some of these verticalized apps, there's just going to be, if you've gone through the trouble of building, like, a really, really excellent coding agent, which Warp has invested a ton in, that coding agent should be reviewing code.
It would be weird to plug in some other thing that needs to relearn all the context, the rules, the coding convention.
So I think there are definitely some things that will be bundled.
Makes sense.
Yeah, Shrain on my team has put together this matrix of companies versus features in the coding market.
And there's a lot of these sort of like single feature companies.
And it almost feels like all these things will consolidate into a small number of players over time just as they iterate on the product.
I think so.
And I think the core technology is like the harness.
So the thing that sits around the model.
And like I know the model companies are also investing heavily in this.
And then it's like the context.
And so if you have the rich context in your system,
you're going to find a lot of vertical applications
where, like, I think security checking is an interesting one.
Code review is definitely one.
Anything CI-related,
you're probably not going to want to use a bunch of different systems.
How do you think about it in that context?
If I look at other historical technology shifts,
the operating system or the platform often subsumes the biggest apps into itself.
So, for example, Microsoft OS, eventually they just bundled Office on top
of it. Those were the four main apps that were being used
the most on Windows, right? And similarly,
gaming was the other big app, so that's why they started Xbox
at Microsoft. If you look at Google and
vertical search, they eventually integrated
all the vertical searches into Google directly.
Totally. And so, in the context
of AI, one could argue that if the
foundation model companies follow the same approach,
they should bundle, or at least
attempt to bundle some of the biggest use cases.
The clearest big use case today is code.
Totally. And we already know that
Claude, or Anthropic, has launched Claude Code.
OpenAI, you know, almost bought Windsurf.
It always had early coding stuff.
Microsoft, which we know are building some of their own models.
Obviously, they have GitHub and Copilot and all that.
And so do you think eventually those become some of the fiercest players in this market?
Or how do you kind of view forward-looking shifts in the market and where some of that functionality goes?
I mean, I do.
And I think they're clearly trying to do that playbook right now where they are seeing like, okay, if you consider them
platforms and they're looking at one of the most valuable applications that are being built
on top of the tokens. They are moving aggressively into coding with Claude Code and Codex,
which definitely is like as a startup is a little bit scary for sure. The question is like
do they have, like, that distribution advantage that, say, Microsoft had, where, if everyone's
using Windows or everyone's coming to Google for the front door, I think it's pretty easy
already to add on the first-party app in place of the ecosystem.
I don't know that that exact same dynamic holds for coding right now.
The front door is kind of like, honestly, it's still like a native app that someone downloads
on their computer.
So that'd be the terminal or the IDE.
At the moment, yes.
I think controlling that is actually like a really interesting front door.
The other front door, which I feel like, honestly, they're not executing that great is GitHub where all the source code lives.
That would be like the locus of doing all this stuff that I think makes the most sense.
But right now, it's a weird dynamic where, like, we have people who are, like, running Claude Code within Warp.
And Warp is sort of the outer app in that situation.
So I don't know.
You know, the other thing that I hope happens from our perspective is that there's a lot of competition at the model layer.
and that the sort of like intelligent tokens become a bit more of a commodity.
Right now, the models have a bunch of sort of like pricing power
because like there is a real delta between using the frontier model and using like the open source model.
But if at some point the models are good enough where coding is sort of, I think of it as like solved,
it's like good enough you don't need to be using the frontier model.
Then it's like, yeah, maybe they have an advantage just from brand and scale.
But I think the advantage is not as entrenched
as something where it's, like, literally the front door,
like Facebook or Google or Windows
provided for those other platforms.
That's a really good insight in terms of the way
that where you launch an activity or application
then drives what you use.
And so the hard part is often switching people off that.
And that's one of the reasons I think people think
Open AI has a strong competitive position
in the consumer world is because it's a default behavior
for a lot of people right now.
to just start ChatGPT and use it for something.
Yeah.
Which is different from like the model layer where there's more switching.
Like I think in consumer chat GPT has a huge advantage.
Like once that that behavior is default, even if like Claude is maybe better, I don't know
if it is or not.
Like everyone knows ChatGPT.
I don't know if you saw OpenAI dev day yesterday, but they're clearly doing this platform
play within ChatGPT now where it's like you have apps within ChatGPT.
I'm sure they'll use that data to sort of subsume and/or
take over whatever the best first-party integrations are.
So they're definitely doing that on consumer.
For developer, I don't know that that same dynamic is there.
What made you decide to focus on terminals?
So, you know, we started talking years ago when you first started doing warp.
Yep.
And even then, I think you had really interesting ideas about how to rethink the terminal
and how to use that as a launching point for all sorts of things.
Yeah.
Could you explain that thinking and how it's evolved over time?
Yeah.
So the basic insight, the thing that got me excited about Warp to begin with, is, like, you have
this tool that is pretty much a daily use tool for every developer. It's that and the code
editor. And the terminal itself is something that, you know, really hadn't changed much in the last
40 years. It's a tool also where it's like if you get good at it, you can really get a lot
done. If you use it, it like works across all these different parts of software development,
not just code writing. On the flip side, from my perspective, not a good product. Just like
hard to learn, hard to use, hard to remember commands, super intimidating and just like
kind of like a gatekeeping vibe around it as well in my opinion. And so the original
concept with Warp was like, let's build a better product there and see if people will
like using it. The business concept has evolved over time. Like, the original business concept
was, like, building a collaboration platform, and we've just changed our model to be an
agent platform because there's, like, way more demand for that than a collaboration platform around
the terminal. But the sort of core insight that this is an important tool, it's crazy, it's actually been
kind of validated through all these agentic things that are very terminal-first.
You know, one thing that you mentioned that I thought was interesting is that at some point
the model layer may commoditize in terms of its coding abilities. Yeah. How far on that
asymptote do you think we are? How close to that do you think we are? God, I don't know. I think that the
increasingly the limit that we see is context.
And like the reasoning capabilities of the models are pretty impressive.
The problem is like understanding an entire code base
or understanding sources outside of the code
or literally just understanding user intent are challenging problems.
I still think there's probably much more to do on the model side,
but I don't know, is the short answer.
And do you think that from a model capabilities perspective,
we've hit a point where, to your point, it feels like certain aspects of the models are slowing down in terms of the benefits or outcomes of further investment of certain types at least.
I think so.
Like, if you take, like, Sonnet 4 to 4.5, and we're big partners with Anthropic.
They have great models.
Like, that was, like, a few percentage point increase on SWE-bench for us.
And we invest, you know, we've invested a decent amount to be one of the top agents on SWE-bench.
And when we went from 3.7 to 4, it was a much more significant boost.
So, again, that's like not, I don't know what that means about the total underlying trends.
I think something with GPT-5 was somewhat similar, like, certainly an upgrade.
And GPT-5, I think, is actually pretty much on par.
It has a different, like, feel to it and higher latency.
But it didn't feel, to me, like, as much of a step
change as some of the upgrades before.
Yeah, makes sense.
What other areas of the AI dev world are you excited about?
I am, so I'm really excited about not just like the interactive piece of agents, the way
most people are working today, but what can you do if you can program against these agents?
And so, for instance, it's like if you have a version of Warp that's, like, headless,
for instance, you can put it in CI and you can start to do crazy things where it's like,
okay, every time someone updates the code,
make sure the documentation stays up to date.
It's like, that's very annoying for a developer.
And so allowing developers to automate parts of their job
that they don't like doing,
I think is like a big capability.
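To make the "headless agent in CI" idea concrete, here is a minimal sketch, assuming a hypothetical headless agent CLI named agent (the command name, its subcommand, and its flags are invented for illustration and are not Warp's actual interface): a CI step collects the files touched by the latest commit and hands the agent a prompt asking it to keep the docs in sync with the code.

```python
import subprocess
import sys

# Hypothetical prompt for the docs-sync task; in a real setup this would be
# tuned to the project's documentation layout.
PROMPT = (
    "The files listed below changed in this commit. "
    "Update the documentation under docs/ so it stays consistent with the code."
)

def changed_files() -> list[str]:
    """Return files touched by the most recent commit (assumes the CI checkout has history)."""
    out = subprocess.run(
        ["git", "diff", "--name-only", "HEAD~1", "HEAD"],
        check=True, capture_output=True, text=True,
    )
    return [f for f in out.stdout.splitlines() if f.strip()]

def main() -> int:
    files = changed_files()
    if not files:
        print("No changes detected; skipping docs agent.")
        return 0
    # Launch the (hypothetical) headless agent non-interactively with the task
    # plus the list of changed files as context.
    result = subprocess.run(
        ["agent", "run", "--non-interactive",
         "--prompt", PROMPT + "\n" + "\n".join(files)],
    )
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```

In a real pipeline a script like this would run as a post-merge CI job, and the agent's edits would typically land as a pull request for human review rather than being committed directly.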
And then from a business perspective,
just like automation is a better place to be
than productivity enhancement.
Like one of the challenges with our business,
I think with a lot of the coding businesses,
is just like proving the ROI.
Like, and there have been these studies,
that show, like, you know, you deploy this stuff on real code bases. It's kind of unclear
if it's actually having an impact, whereas if you get something that's more outcome-oriented
or more just like an automation, I think it's easier to prove the ROI. And then you're also
not limited by time spent behind keyboard for doing this type of stuff. So from a business
perspective, I'm very excited about, like, what's unlocked if developers can program these
agents. Well, that's fascinating. Thank you so much for joining us on No Priors.
Thank you for having me. This was great.
Find us on Twitter at NoPriorsPod.
Subscribe to our YouTube channel if you want to see our faces.
Follow the show on Apple Podcasts, Spotify, or wherever you listen.
That way you get a new episode every week.
And sign up for emails or find transcripts for every episode at no-priors.com.
