Screaming in the Cloud - Five Slot Machines at Once: Chris Weichel on the Future of Software Development
Episode Date: October 2, 2025

On this episode of Screaming in the Cloud, Corey welcomes back Chris Weichel, CTO of Ona (formerly Gitpod). Chris explains the rebrand and why Ona is building for a future where coding agents, not just humans, write software. They discuss what changes when agents spin up environments, why multi-agent workflows feel addictive, and how Ona is solving the scaling and safety challenges behind it. If you're curious about the next wave of software engineering and how AI will reshape developer tools, this episode is for you.

About Chris: Chris Weichel is the Chief Technology Officer at Ona (formerly Gitpod), where he leads the engineering team behind the company's cloud-native development platform. With more than two decades of experience spanning software engineering and human–computer interaction, Chris brings a rare combination of technical depth and user-centered perspective to the systems he helps design and build. He is passionate about creating technology that empowers people and tackling complex engineering challenges. His expertise in cloud-native architecture, programming, and digital fabrication has earned him multiple publications, patents, and industry awards. Chris is continually exploring new opportunities to apply his broad skill set and enthusiasm for building transformative technology in both commercial and research settings.

Show Highlights:
(00:00) Introduction to Modern Software Interfaces
(00:55) Welcome to Screaming in the Cloud
(01:02) Introducing Chris Weichel and Ona
(02:23) The Evolution from Gitpod to Ona
(03:26) Challenges and Insights on Company Renaming
(05:16) The Changing Landscape of Software Engineering
(05:54) The Role of AI in Code Generation
(12:04) The Importance of Development Environments
(15:44) The Future of Software Development with Ona
(21:31) Practical Applications and Challenges of AI Agents
(30:01) The Economics of AI in Software Development
(38:11) The Future Vision for Ona
(39:41) Conclusion and Contact Information

Links:
Christian Weichel LinkedIn: https://www.linkedin.com/in/christian-weichel-740b4224/?originalSubdomain=de
Ona: https://ona.com/
Personal site: https://csweichel.de/

Sponsor: Ona: https://ona.com/
Transcript
Fundamentally, the interfaces that we as software engineers use today aren't built for this.
They're built to do one thing very deeply at a time, writing code, right?
Now, our interfaces need to change, and the environments in which we do that work need to change.
My laptop is built to do one thing at a time.
I mean, anyone who's tried to run different Python versions on one machine knows what I'm talking about.
And so are my IDEs,
so my environments and my interfaces
need to change to get the productivity out of these agents.
And that's very fundamentally what Ona does,
is it gives you as many of these environments as you need,
perfectly set up for the task at hand.
And it gives you an interface that helps you find flow and joy
in doing more things in parallel.
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
and my guest today has been on the show before.
Chris Weichel is the CTO of Ona,
which we have not spoken about on the show before,
because once upon a time, until recently,
they were known as Gitpod.
Chris, thank you for returning.
Thank you for having me again.
This episode is brought to you by Ona, formerly Gitpod.
Are you tired of coding agents pushing your S3 bucket
to the wrong AWS account
or having them commit your entire downloads folder
because they thought your tax documents were part of the codebase,
all while using privacy policies that basically say
that your customer data deserves a nice vacation on random cloud servers?
Introducing Ona, where your coding agents run in isolated sandboxes
securely within your own VPC.
Ona lets you run agents at scale with workflows that bulk-open pull requests
to finally tackle that Java migration that you started in 2019,
or automatically fix CVEs when your scans find them.
Ona also supports private AI models through Amazon Bedrock that your corporate overlords might even approve of.
Head to ona.com and get $200 of free credits using the code "screaming in the cloud," because your laptop wasn't designed to babysit overcaffeinated rogue coding agents with root access.
As you might have picked up from that intro, I have a leading question I would like to begin with.
Gitpod was an interesting name for a company, because it was, it was like GitHub, no, actually.
And you sort of got to a point of understanding what it was.
And now, all that work we did to teach you what that word was, and that it was pronounced with a hard G instead of a soft one.
Well, now we're changing it again to something else.
Why?
There's a number of reasons.
One is that Gitpod as a name really doesn't make sense
anymore. One, we famously left Kubernetes, so the pod part is out, and Git isn't at the center
of it all. It's a very important piece of technology, for sure, but so much of what you can do with
Ona now isn't centered around Git anymore. So the name really has become a bit of a
misnomer. And to be frank, the amount of times we've been confused for GitHub or GitLab,
or spelled with a capital P for no apparent reason, I'm just very glad
we can leave all that behind us. So hence the rename. I am always a little bit leery of company
renames. And that is in many ways unfair to you. The one that sticks out in my mind was Mesosphere
after they renamed to D2iQ. And even now, I had to look that up to make sure I was getting those
letters correct. And it turns out that the company has since been acquired by Nutanix.
So, okay, brand equity is super freaking hard. It is. It
is. It takes a long time to teach people things, and, okay, we're going to be changing our name,
our logos, et cetera, is hard. I saw that Facebook was able to do that with Meta, and I would have
bet anything that that wouldn't have worked, because it's been how many years since Google did that
with Alphabet? And every time, in a newspaper article to this day, that we see anything about it,
it is Alphabet, parentheses, Google's parent company, close paren. It's one of those things
where sometimes it sticks,
but usually it feels like
it's going to have that parenthetical forever.
What's your sense on this one?
My take on this is, as a company,
if you want to rename,
if you're small enough,
it doesn't matter because no one knows you.
If you're big enough,
everyone's going to hear about it.
So, you know, it's fine if you do.
And then there's sort of the trough in the middle
where it's a bit hit and miss.
I think for us,
the main reason we did that is
because we're really at the precipice
of a pretty fundamental change
on how software is written.
With that, like,
Ona isn't just a rename.
It's really a refounding of what it is that we do.
It isn't a pivot.
You know, it's not like we're doing something else.
But it marks a new chapter on this trajectory that we've been on since the inception of the company.
And with that, we also want to be known for leading where we're going as software engineering as a whole.
And so the new name signifies that ambition.
Normally, I would discount this, to be direct, as: oh, well, everything is
changing about software engineering. Is it, though? But I've been beating code into submission for
longer than is appropriate, given how terrible my code still is. And I think that it is
difficult to make the straight-faced assertion that nothing is different about writing code
in 2025 than it was back in 2020. The world has foundationally changed. You can debate where
AI is making inroads versus not, but one area in which it has excelled has been in
code generation.
Absolutely.
And the way we think about this is, really, we've gone through three waves of how we
write code.
And the very first one is where we've essentially artistically handcrafted every single
line, save for code generation and autocomplete.
And this is how we've been writing code for really the longest time, certainly since I
can remember.
And a few years ago, when AI first entered the scene, we started to have code editors like
copilot and the likes that gave us better autocomplete.
And they, you know, reduced the time that it took to write code, but they didn't fundamentally change the pattern.
You know, it was still a human sitting there typing stuff and hitting tab-tab every once in a while to get better code, or, I don't know, maybe not better, but more, at least.
And not too long ago, essentially agents entered the scene, and they very fundamentally changed the pattern.
Because now it's no longer humans writing code, but it's machines writing code.
And to what extent and how much and how well, that's all debatable.
Like, we haven't talked about that.
But certainly, the truth of the matter is that now we have these things that can write and modify code for us at a level of abstraction that's arguably a level up from the programming languages we've been using this far.
And so that's a very fundamental change in how software is being written, not unlike, you know, the changes from assembly code to higher-level languages, to C and the likes, to object-oriented languages, to now.
I mean, you know, it's almost a beaten saying that English is the new programming language.
I don't believe that to be true.
That's not the thing.
Because we're bad at that one too.
Yeah, exactly.
But certainly we, I mean, me too.
Under specification is a key problem.
And that is still so.
So I'm not saying this is a new language. It is a new abstraction, and it is a way we now communicate about code to a machine.
It's a very fundamentally different way in which we interact with code.
I keep observing that I don't know how to live in this current world that we're in
because we spent enough money and made the computers expensive and powerful enough
that they are simultaneously capable of doing what we mean instead of what we say
and are bad at math while they do it.
So it's this, I don't fully understand this world I find myself in.
And I'm starting to wonder, does this mean that I have finally lived too long?
And maybe other people would argue that I definitely have.
But it's, like, I have young children.
And they, like, how do I explain to them how computers work on a month-to-month basis?
It's shifting under me.
It certainly moves very, very quickly.
I mean, at the time that we're recording this,
Sonnet 4.5 literally just dropped.
Yeah, within the last hour of us whacking the record button.
So we have no idea whether it's good, whether it's bad,
who supports it. At the moment, it's just Anthropic out there alone.
I'm sure all the "me too, we support this now" folks
will come piling in as we literally speak.
But it is weird, because state of the art is still moving rapidly.
It's not the meteoric growth curve
it's been over the last couple of years.
Things have slowed down now, but it is definitely still showing the ability to surprise us.
Oh, absolutely.
And, you know, the half hour before this show, I literally had Ona add Sonnet 4.5 support to itself.
See, okay, the first product I've heard of supporting it is you.
Good work.
Your timing is excellent.
Now, I have to ask, and a bit of a confession of my own: we are in the process of renaming our company
from The Duckbill Group to simply Duckbill,
because as we expand into a software offering
as well as pure services,
the Group does not really carry the same weight.
And internally, it is hard for us to correct ourselves
after eight years of inertia of saying it the way that we have.
So my two questions for you are,
one, do you still find yourself referring to the company
as Gitpod internally?
And two, if I were to do a grep and a word count
of the term Gitpod in your codebase,
how many would I find?
Okay, do I still say Gitpod every once in a while?
I do, but surprisingly little.
I expected it to be a lot more.
The general save is "Gitpod, now Ona," and then you carry on.
In terms of the word count, if we looked at the ratio of Gitpod to Ona in our codebase,
it's orders of magnitude more Gitpod than Ona.
We have a, yeah, we had an
early working name of our product for the first three weeks we were building it.
And that legacy name is still in our code base, because those "yeah, I'll fix that later"
naming decisions become load-bearing.
We don't think anything is going to break if we just do a global find and replace at the same
time, but it might.
So that's a question of, okay, how much extra work to want to create for ourselves today?
We're going to keep kicking that can down the road.
Surely this problem won't get worse with time.
I mean, we have customers who obviously rely on our API, and we're not going to break them.
You know, our API contracts are holy to us.
We won't break them.
So we will have Gitpod in our codebase for all eternity.
The ratio is going to shift.
Yeah.
And it has to.
Has the product itself changed significantly?
That's the other question, because I find that shifting names is, if it's not exactly
an atomic operation, it's pretty close.
I mean, you only have one logo simultaneously in the upper left-hand corner,
but the product itself has to simultaneously serve the use case
that it has been sold to solve for before,
but also pivoting to embrace new things.
I will say I give you folks credit more so than I do most companies.
Everyone now has slapped AI above the fold on their landing page.
And like, we're an AI company and have been for years.
Funny, because I look back three years ago at your conference
talks, and I see no mention of it, but we'll let that slide. In your case, you have taken that deeper.
You have renamed the company. You have made a public declaration that this is what we're
about. And whether it is the right path or the wrong path, no one can deny that you're committing
to it. Yeah, the thing that we've been building for a very long time now is essentially
the automation of development environments. It's the ability to create a development environment
with a click of a button, something that is incredibly useful for humans because it removes
a lot of work and toil from setting up development environments and maintaining them,
five hours per week, as studies and data show.
And that's very helpful for humans and it's existential for agents.
Really, if you want an agent to scale beyond your machine and you want to run five of them
in parallel, or even just avoid that agent accidentally sending an email to your boss,
having some unkind words or accessing production,
because all of this happens to be available on the same laptop you run your terminal agent in,
if you want to avoid all that, you need to put them in isolated, readily set up development environments.
You are not wrong.
I have problems with Cursor constantly because I have set up my ZSH prompt
to reflect what I need as a human being editing the thing.
It uses some Powerline nonsense and some other stuff as well,
because I had, you know, an afternoon to kill.
And now, in most terminal environments, until they get set up, there are glyphs that don't render
properly.
It has fonts that aren't present.
And as a result, everything looks janky and broken in most of these tools because I have gotten
my shell working for me as a human.
Computers have not yet caught up to that.
Absolutely.
There's a reason why, you know, Claude calls it dangerously-skip-permissions
if you want to give it a blank check to do anything and everything.
I can't run that on my laptop.
I have client data there.
It is a hard stop.
So I give it its own dedicated EC2 instance, and for one side project, its own
unbounded AWS account via instance role.
So there's dangerous, and then there's whatever the hell this is, with basically an unbounded
blank check to go ahead and spin up NAT gateways to its heart's content.
There's no way this will wind up being a hilariously expensive joke at my expense.
Yeah, that's a brave choice.
Dare I say, a slightly more sensible choice is to have this in a controlled, guarded development environment setup.
And that's fundamentally what Ona is, and what we built at Gitpod for a long time and have now extended for agents.
So the heart of the product that is the environment remains.
We now speak of that as Ona environments.
And within these environments, we run an agent, Ona Agent,
that does its work, and it's subject to the same guardrails
that previously existed for these environments
plus specific agent guardrails.
So you can decide what it has access to.
If you want to, you can give it unbounded access to your AWS account.
I would not recommend that.
By default, obviously, it comes locked down with sane defaults.
But the key point here is we rename the company
because it signified the next step on the trajectory
we've been on all along.
It's not a pivot.
It's not a random addition.
Oh, shoot, we've got to do something with AI.
It so naturally followed that these development environments that we built for humans also
work very well for machines.
In fact, when we architected the platform, we thought of machine use cases, not necessarily
agents at the time, but it was clear that there would be more machine-to-machine use cases
becoming relevant that would also need development environments.
And agents fit the bill so perfectly now.
There's a lot to be said for the ability for systems to interface with each other well.
I would argue that MCP is potentially a revolution in its infancy, just because it goes beyond APIs.
These are things that self-describe to each other, in a parsable way, what the tool is, what this endpoint lets you do.
That has legs that extend far beyond a particular iteration of these things.
It's effectively, from my old-person perspective, the sense of: what if every time you connected to an endpoint, it gave you the equivalent of a man page that told you what it did, how it worked, what arguments it could take, and, for best results, do the following?
That is non-trivial.
I'm sort of annoyed
we didn't come up with that as a standard long before now.
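To make that "man page for an endpoint" idea concrete, here is a minimal Python sketch of the kind of tool descriptor an MCP server returns from a tools/list call: a name, a human-readable description, and a JSON Schema for the arguments. The `deploy_preview` tool itself is made up for illustration; the point is that a client (or an LLM) can read this structure and know how to call the tool with no out-of-band documentation.

```python
# A hypothetical tool descriptor in the shape MCP's tools/list returns:
# a name, a description, and a JSON Schema describing the inputs.
# The tool and its fields are illustrative, not a real Ona or MCP server's.
deploy_preview_tool = {
    "name": "deploy_preview",
    "description": "Deploy a branch to a temporary preview environment.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "branch": {"type": "string", "description": "Git branch to deploy"},
            "ttl_minutes": {"type": "integer", "description": "Tear down after N minutes"},
        },
        "required": ["branch"],
    },
}

def man_page(tool: dict) -> str:
    """Render a tool descriptor as the 'man page' Corey describes."""
    schema = tool["inputSchema"]
    required = set(schema.get("required", []))
    lines = [f"NAME: {tool['name']}", f"DESCRIPTION: {tool['description']}", "ARGUMENTS:"]
    for arg, spec in schema["properties"].items():
        flag = "required" if arg in required else "optional"
        lines.append(f"  {arg} ({spec['type']}, {flag}): {spec['description']}")
    return "\n".join(lines)

print(man_page(deploy_preview_tool))
```

Because the descriptor is plain data with a schema, the same rendering works for any tool the server advertises, which is the self-describing property being praised here.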
I mean, you know, at least you didn't try to push the Semantic Web for decades.
Like, I'm pretty sure there are some people who, you know, are even more annoyed at the success
of something as simple as MCP than you are.
It's, I think, part of the problem,
and the reason we're seeing it at work here,
is you cannot universally change
the way that humans interact with something.
Source: people will still be calling
you Gitpod 20 years from now
in some corners of the world.
But when you have a shift that's powered by LLMs,
suddenly there is that sort of global context
and Overton window that moves extraordinarily rapidly.
In fact, that's one of the challenges
I suspect you'll have is it's going to take some time
for LLMs themselves to get word of the name change.
I found that whenever I'm building something new
and just vibe-coding something shitposty,
it'll often park it on Vercel for a front-end.
Now, I don't have strong opinions about front-end.
I just know I'm bad at it globally.
But that's the one the LLM picks,
and, like, I'm going to correct the robot? Please.
Absolutely.
Like, the name Gitpod is essentially before the cutoff
of most models right now,
but then that too will change.
Obviously, there are new models.
I mean, you know, Sonnet 4.5 just dropped.
So that too will change and the models will adapt and learn a new thing.
That said, I actually like the idea that we are so well known that even 20 years from now,
someone is going to refer to us as Gitpod.
The question is whether that is because people are actively using it then
or someone is just so ornery and obstinate that they refuse to accept that anything after
2023 exists.
I'm starting to see the joys of
being a curmudgeon. So these days, now that people have to take a step back and ask the
question a little bit differently, since I imagine that the nuances of the answer are there:
what does Ona do? Very fundamentally, the thinking goes, we now have these machines that can do
work for us, you know, that we can give a task and to varying degrees of autonomy can do work for us.
A mental model that we found very helpful is time between disengagements. It's a mental model coming
from self-driving cars, and it describes the time between the moments where the car disengages and the human
has to take over.
It's a measure of autonomy.
And seconds is essentially lane assist, and minutes to hours is the backseat of a Waymo.
With agents, we're seeing the same thing.
You know, we're coming from this tap-tap auto-complete, lane assist, and we're moving to minutes,
hours of sensible autonomous work.
Claude Code, Codex, Ona Agent, all demonstrate that.
Now, the question then is, how do we turn this increasing autonomy into productivity?
Because that's obviously what we're after.
Fundamentally, software creation is an economic endeavor.
So, you know, it needs to be economical.
How can we turn this into more productivity?
And the only way we can really do that is by doing more things in parallel.
If I now need to sit there and watch the agent do its thing, I didn't gain much because
it's my time as a human that's expensive.
It's human attention that's expensive.
So how do we scale human
attention, fundamentally?
And again, the only way we can do this is by doing more things in parallel.
Fundamentally, the interfaces that we as software engineers use today aren't built for this.
They're built to do one thing very deeply at a time, writing code, right?
Now, our interfaces need to change, and the environments in which we do that work need to
change.
My laptop is built to do one thing at a time.
I mean, anyone who's tried to run different Python versions on one machine knows what I'm talking
about.
And so are my IDEs, so my environments and my interfaces need to change to get the productivity out of these agents.
And that's very fundamentally what Ona does, is it gives you as many of these environments as you need, perfectly set up for the task at hand.
And it gives you an interface that helps you find flow and joy in doing more things in parallel.
This episode is brought to you by Ona, formerly GitPod.
Are you tired of coding agents pushing your S3 bucket to the wrong AWS account or having
them commit your entire downloads folder because they thought your tax documents were part
of the code base, all while using privacy policies that basically say that your customer
data deserves a nice vacation on random cloud servers? Introducing Ona, where your coding agents
run in isolated sandboxes securely within your own VPC.
Ona lets you run agents at scale with workflows that bulk-open pull requests to finally tackle that
Java migration that you started in 2019, or automatically fix CVEs when your scans find them.
Ona also supports private AI models through Amazon Bedrock that your corporate overlords might
even approve of.
Head to ona.com and get $200 of free credits using the code "screaming in the cloud," because
your laptop wasn't designed to babysit overcaffeinated rogue coding agents with root access.
At some level, I'm starting to feel that my ADHD inattentiveness and pivoting from thing to thing to thing
has become something of an asset when you have agent-driven stuff.
I would like it a little bit more if there were a healthy medium somewhere between you have full access to
everything, go ahead and never ask for feedback versus, oh, am I allowed to read this file that I just
wrote?
There's a sliding scale of comfort here:
the things for which I wish to be interrupted and need to
give human input on. And conversely, there are times I see it doing things where I have to
see how fast I can hit Control C because no, no, no, no, no. I happen to know that
sort of thing very well and down that path lies madness. Absolutely. I think
there are two key elements that you brought up here. One is
globally, what is the thing allowed to do and what isn't it allowed to do? And right
now, you know, as an industry we're working with these reasonably simplistic
deny lists, you know, where you tell an agent, hey, you're not allowed to run AWS because I don't want
you to drop my production RDS instance. But the agent is going to get very, very clever and
doesn't care about compliance at all. You know, agents don't care about getting fired. So it's
going to try and still make it happen. I've worked with people like that. Please, continue.
Yeah, it's not only agents. So, you know, just denying, hey, you can't run the AWS command
isn't going to do much good. It needs to go deeper than that. And that's something
that we're exploring right now.
Like, how can we bake that into the environment?
How can we make these guardrails more sophisticated?
That's one.
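A quick sketch of why those simplistic deny lists fall short, in Python; the denied binaries and the bypasses are illustrative examples, not Ona's actual guardrail implementation:

```python
import shlex

# The simplistic deny-list guardrail Chris describes: block any command whose
# first word matches a denied binary. All names here are illustrative.
DENY = {"aws", "terraform"}

def allowed(command: str) -> bool:
    argv = shlex.split(command)
    return bool(argv) and argv[0] not in DENY

# It catches the obvious case...
assert not allowed("aws rds delete-db-instance --db-instance-identifier prod")

# ...but a "clever" agent has endless detours around a first-word check,
# which is why the guardrail needs to live in the environment, not the string:
assert allowed("sh -c 'aws rds delete-db-instance ...'")   # shelling out
assert allowed("/usr/bin/aws s3 rb s3://prod --force")     # absolute path
assert allowed("python -c 'import boto3; ...'")            # different tool entirely
```

The isolated-environment answer in the conversation sidesteps this whole class of evasion: if the sandbox has no production credentials, it doesn't matter how the agent spells the command.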
The other is, if you're doing five things in parallel,
how do you steer this agent?
How do you get good feedback?
How do you give good feedback?
And here we're, I think we've hit a very nice form factor
that lets you guide the agent as it does its work.
It's going to pick up your messages when it thinks it's the right time,
and we've worked hard on making sort of an emergency stop button.
Like you can hit escape and it's going to stop dead in its tracks
because it's really, really important for you to retain control
over what the thing is doing.
There's also this idea that it is, in some ways, forcing rigor,
that I am seeing people actually care about making things reproducible:
huh, I really will need a rollback strategy here,
instead of hand-waving my way around it,
because sometimes it'll do disastrous things.
And we've seen some public examples of it doing those sorts of things, where it becomes really clear that people have paid insufficient attention to a lot of these.
Like, hey, it just deleted my entire database.
What do I do about that?
Well, ideally, you make different slash better choices.
Absolutely.
Like, one interesting effect of this is I now raise PRs that I need to review myself.
So I have an agent write code, create a draft PR, and then I review that draft PR as though it was written by
someone else.
So code that has my name on it, you know, now I need to make sure that it's worthy
of having my name on it.
Like, it's still my reputation on the line here.
And so there is an interesting change in dynamic.
One other thing is it's actually incredibly addictive.
Like, for a long time, I was really worried about how are we going to find joy and flow
in this multitasking, ADHD-feeding, what sounds
like a nightmare, to be honest.
Like, you know, had someone told me two years ago that, hey, the thing you're really going
to do is you're going to work on five things at the same time.
I would have told that person to [insert expletive here], you know?
Yes.
Now, so for me, this has been a really interesting question: how can we find flow and joy in
this?
And it turns out that, one, it's an interface question, but also, as software engineers, arguably
we have a somewhat addictive, if that's the word,
mindset to begin with, because, you know: there's just one more change and then my tests are
going to pass. Just one more change and then it's going to work. How many nights have we spent doing
that? So arguably there's some addictive pattern here already. We're essentially playing a slot
machine, you know, just one more change and it's going to work. What agents have done is they've
made it incredibly cheap to play five slot machines at the same time. Yeah, that's a good way
of putting it. It's so addictive that I've contemplated adding parental controls for myself.
I've seen Git worktrees being used explicitly for this, where you can check out different branches into different directories and let these things run in parallel on different issues, or, all right, we're going to have a bake-off and see which one of you comes up with the best answer.
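For anyone who hasn't used worktrees, the setup Corey describes is a couple of git commands per agent: each branch gets its own directory backed by one shared object store, so parallel agents never stomp on each other's checkouts. A small sketch that just generates the commands (the repo, branch, and path names are made up):

```python
# Sketch: generate the `git worktree` commands that give each parallel
# agent its own checkout of its own branch. Names are examples only.
def worktree_commands(repo_dir: str, branches: list[str]) -> list[str]:
    cmds = []
    for branch in branches:
        # `worktree add -b <branch> <path>` creates the branch and checks it
        # out into a sibling directory; worktrees share one .git object store,
        # so each extra checkout is cheap.
        cmds.append(f"git -C {repo_dir} worktree add -b {branch} ../{branch}")
    return cmds

for cmd in worktree_commands("myrepo", ["fix-login-bug", "java-migration", "yaml-indent"]):
    print(cmd)
# Each agent then works in its own directory; after the bake-off winner is
# merged, `git worktree remove ../<branch>` cleans up the losers.
```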
What I'm waiting for is the agent now that supervises those things and makes those evaluations.
Like, I want some, I want like the project manager at this point.
Something that can say, this doesn't pass muster, or, okay, here's a whole bunch of tasks.
I'm trying to one-shot it.
We're going to break it down and pass it out to each of you in sequence.
I think this is a very interesting space.
Like, the sort of multi-agent interaction, I don't think anyone's cracked that yet.
There are very interesting ideas out there.
This is certainly something that will come.
Also, what we see right now is that a key skill is to really decompose and break down a problem into a chunk that works for an agent.
Like, agents are tools, and so you need to learn how to use them, how to prompt them, how to use them well, what size of problem they can attack.
You know, doing this decomposition is a very valuable skill right now that we'd obviously all
want to be able to outsource to yet another agent.
That is a constant problem we're all dealing with right now.
It's a universal problem where we are pushing the frontier bounds here and seeing what's
possible.
I think if you've only played with this stuff a few months ago and thought it was just okay,
it's time to reevaluate it.
This is one of those rapidly advancing areas.
And I generally want to call out hype when I see it:
yes, we are in a hype bubble here.
I think that is not particularly controversial.
But unlike the insane blockchain hype bubble,
there's clearly something of value here.
That is, this is not a solution in search of a problem in quite the same way.
This is something that is transforming the way some things are being done.
Now, maybe we're a little too eager to map those to everything else,
but there is some kernel here of this has staying power.
Absolutely.
And, you know, are agents
going to replace humans? I personally don't think so. They're going to augment humans. They're
going to make people more effective, but they're not going to replace them. Also, Jevons
paradox is very real. The moment we make something cheaper, we do more of it. So we're now making
software production cheaper, so we're going to do more of it. Simply, we're going to write more
software. We're going to write more software. Historically, that has been the anti-pattern. Think
about it: it used to be that, oh, we're going to solve our own custom problem in
house. We're going to write it ourselves.
I've worked in too many environments where there's such a strong not invented here syndrome that
everyone builds custom stuff that becomes a maintenance nightmare.
So it got to a point, in a lot of shops, my own included, where we historically have been
down this path of: we're going to build our own custom tooling.
For my newsletter, it is a rat's nest nightmare of different things bolted together to build
a production system.
And when someone asked me why I didn't use curated.co, my question was, wait, why didn't I use
what?
Because I didn't know it existed or I would have.
and it would have saved me so much effort.
But we're seeing that invert now,
where there's a bunch of little things
that I need to do throughout the course of my workday.
I am not going to hire developer to do these things,
and I'm not going to sit around and build all of these tools
or pay for these things.
But, hey, every week, I need to find my top 10 most engaged posts
on Blue Sky so I can put it in the hidden newsletter Easter egg
that's in every episode.
I can write a dumb script that does that, or rather,
I tell an agent to do it.
Now I go get a cup of coffee, and it's done by the time I get back.
Suddenly writing more software is the change.
For the first time, non-sarcastically, that'll fix it.
Because usually that's a sarcastic thing to say, oh, I'm going to write some more software.
Great, that'll fix it.
This will fix it because it's the glue between things.
Absolutely.
Also, we no longer need to excessively generalize, because the creation of software has become so cheap.
I can solve this one specific problem, and I don't need to solve it for these other three instances, because I can just ask an agent to solve it for those other three specific instances specifically. And so software becomes more plentiful and simpler that way.
It also changes the way that I think we view the costs of doing software. The pricing models
for all these agent things are very strange. I've seen the leaderboards for people who are
using the $200 a month Claude subscription and how much value they're getting out of it if they were
paying per inference. It's tens of thousands of dollars a month in some cases. It makes me
worry that, okay, is this as economically sustainable as I want it to be?
Because I'm not going back to writing JavaScript by hand.
I'm just not.
So I'm very interested in getting local inference to a point where it can at least do the fancy
tab complete style of thing, even if it's not as good as the frontier models.
There are many things I don't need it to reach out to the internet for.
I don't need the very latest and greatest Claude Sonnet 4.5 to go ahead and indent my YAML properly.
I feel like that's the sort of thing a model from three years ago can do.
There's that token short squeeze article that was all the hype on the orange website not too long ago. The key premise of it is that tokens get ever cheaper. So if you just look at GPT-4-level intelligence, around an LMSYS Elo of 1300, the price a year ago as compared to now dropped by a factor of 140. At the same time, we're using about 10,000x more tokens. So token usage grew by two orders of magnitude more than the price dropped.
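The back-of-envelope math behind that claim, using the figures from the conversation (assumed round numbers, not measured data):

```python
# If price per token fell ~140x while token consumption grew ~10,000x,
# total spend still went up, not down.
price_drop = 140        # GPT-4-level tokens got ~140x cheaper in a year
usage_growth = 10_000   # but token consumption grew ~10,000x

net_cost_multiplier = usage_growth / price_drop
print(round(net_cost_multiplier))  # ~71: spend grew roughly two orders of magnitude
```

Which is the Jevons paradox in miniature: the cost per unit collapses, and the total bill goes up anyway.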
Necessarily, we'll need to see two things.
One is, as you point out, more precise models that get that cost-intelligence trade-off to a point where it works.
This one-size-fits-all approach isn't going to scale.
The other is we'll need to recognize that AI doesn't make the creation of software free; it just changes the economics.
Scaling a model is much easier than scaling humans, and this is why we can produce more software, but that doesn't make it free.
And this time right now, where we live in VC-money-subsidized token land, will need to come to an end eventually.
So I think we're going to see a proliferation of different models that make that trade-off better, and we'll need to see, and we are already seeing, pricing models that are much more aligned with the value you're getting rather than a flat fee.
Yes and no, because we're not seeing outcome-based pricing on any of these things. It's not like, okay, I'll only charge you if the code works. That would be an interesting gamble, but I don't know anyone who'd want to take the other side.
That's a really tough one. Finding a way to make this one really work, I think, is extremely interesting, because it aligns incentives so, so well. The question is, what is the outcome?
You know, like "code works": an agent can show you that the code works.
Does it do the right thing?
I don't know.
Does it solve your business problem?
No idea.
So, you know, what is the outcome you're optimizing for?
Which is why I reckon most don't price this way yet, because it's an incredibly tough nut to crack.
Yeah.
I think that this is where some of the most interesting stuff is yet to come.
So I've been doing a lot of weird work lately on random shitposting things. And it's great watching it just get done and waiting for me. In some cases, it'll even ping me when it's ready if I hook it into the right notification service.
But I've been doing it hanging out on an EC2 instance, and it's doing that in a t-mox session.
A tmux session. There we go.
And that's great, but it's a colossal pain in the butt to do that from Blink. I can do it, but it's not pleasant and it makes me sad. Do you see a future where this gets easier to do on mobile devices as we're out walking around, not staring at the big screen, instead looking at the smaller, happier screen?
Actually, this is already a reality.
So with Ona spinning up development environments that aren't bound to the machine you're using Ona from, you can absolutely use it from your phone.
And in fact, we've optimized the web experience for mobile as well.
The way I speak about this now is, I'm three times more productive on my phone than I was six months ago on my laptop.
Let me make this very concrete.
At this point, I have a four-month-old son, and many evenings I'll sit with him on one arm as he's fallen asleep, but I don't dare put him down.
Oh, can't do that.
That restarts the cycle.
Exactly, exactly.
Then I have to, you know, shush him and try and put him to sleep again.
So clearly I can't use my laptop, but I can use my phone.
And so many ideas for prototypes or actual changes that before would have been mere notes now are actual prototypes. I put them into Ona, and by the time I wake up the next morning, the conversations I've had have turned into actionable code. And that's a fundamentally different way of working. So being able to do this from mobile is already a reality, and you don't have to use tmux or screen to do it.
Yeah, with the weird control characters and custom keyboards and the rest. Okay, you've convinced me to try it out. A question I have for you that I've encountered a fair bit here is the multimodal approach to these things.
for you that I've encountered a fair bit here is the multimodal approach to these things.
I can tell an agent to build a thing.
It can go vibe code its heart out.
Great.
To the point where I'll even find myself stuck in that paradigm for things I really shouldn't be.
Like, oh, go ahead and change this one string here because I want to change the capitalization
of something.
I should just be able to pop into vi or whatnot and edit that.
It feels like I have to pick a paradigm and stick with it, maybe past the point where it makes
logical sense.
How do you see that?
Yeah, a lot of agents really are built for a future that isn't here yet, and maybe never will be, where the agent goes 100% of the way.
And I guess the set of problems for which this is true is increasing as agents get more capable.
But there are some things that LLMs simply aren't good at, where source code just is the better way of specifying it.
So if I want that color to be green instead of red, it's much more likely that changing the hex value myself is faster than trying to describe that to an agent.
Ona is very much built around that idea
where you can engage with code at the right level.
You can choose to not engage with it at all directly
and simply be in the conversation,
or you can fold open a side panel
and there is VS code right there in the web
on the exact same environment.
And if that's not enough,
you can open a classic IDE, Emacs if you have 12 fingers, or vi, or VS Code, or Cursor if you want, and interact with that code more deeply.
So in the same environment, I very much believe that agents get you very far and they'll
go further and further, but there needs to be a way to engage with the code at that level.
Yeah, right now it just feels like that's the expensive context switch, almost as much as switching between entire projects, which I've gotten used to.
But the "ooh, different tool now" feeling: even the key bindings feel different, and I don't like it.
Absolutely.
You also want that conversation to be there.
You know, what you don't want is to now go and, say, open an editor and all of a sudden all your conversation, all that context, no pun intended, is gone.
You really want that continuity between these different levels of engagement.
Yeah.
And then there's the other problem, too, of, all right, when do I want to get rid of that context and start fresh on this codebase and have it take a different approach?
There's no right answer yet.
Absolutely.
I think this is really where it comes back to learning how to use this tool, and the tool making it easy for you to work with it.
So, for example, we essentially copied Claude Code's /clear command.
In Ona, you can also just type /clear and it's going to reset the conversation.
It's features like that, but also behavior like that that I think will change over time as agents become more capable and as we all learn what the right ergonomics are for these tools.
It is still an evolving space.
So I guess my closing question for you is, in that future, as we see this evolving, what place does Ona stand in?
Ona very fundamentally is the mission control, the platform for humans and agents writing software.
And that's where we stand.
And 99% of the software isn't written on weekends; it's written in enterprises.
It's written in large organizations, and that's who we serve.
Like, we want to be able to bring these technologies
and this way of working to everyone.
And if you're a weekend warrior, please go try Ona.
You know, go to ona.com, sign up, try it.
Use it if it works well for you.
If you work at an enterprise, use Ona.
And this is the thing that I find really exciting, that we can say this.
What we're really looking to do is to bring Ona environments and agents to folks in regulated industries and large organizations who right now really struggle to get these tools in-house.
You know, as an engineer, of course I want the latest tools.
Of course I do.
My CISO might not be so happy with me putting my company's source code, or this company's source code, into some arbitrary cloud or untraceable LLM.
And so where Ona stands is bringing these tools and capabilities to large organizations and individuals alike.
I like that.
I am curious to see how this story continues to evolve.
I really want to thank you for taking the time to speak with me.
If people want to learn more, where is the best place for them to find you?
Best place is to head over to ona.com.
Check out the product right there and then.
Also, of course, Twitter, LinkedIn, the usual places to reach out.
And thank you so much for having me.
No, and thank you.
Chris Weichel, co-founder and CTO at Ona. I'm cloud economist Corey Quinn, and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice,
whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice,
along with an angry, insulting comment that you don't even have to write.
You'll let the LLM do it for you, and don't worry, it'll probably turn out fine.
Thank you.