The Changelog: Software Development, Open Source - The mythical agent-month (News)
Episode Date: February 23, 2026. Wes McKinney on the mythical agent-month, install Peon Ping to employ a peon today, Andreas Kling explains why Ladybird is adopting Rust, Cloudflare has a new MCP server that's quite efficient, and Elliot Bonneville thinks the only moat left is money.
Transcript
What up, nerds?
I'm Jared, and this is ChangeLog News for the week of Monday, February 23rd, 2026.
Today we have some ChangeLog News news.
This is my final episode.
Starting next week, my good friend, Adam Stacoviak, will be taking care of you.
Yes, after 13 years, 1,042 podcasts, 452 newsletters, and countless friends made along the way, it is time for a change.
I will write more about this decision and the future on my blog, and we'll discuss it together
on Friday's Friends episode, which will also be my last.
Until then, thanks for logging with me and definitely don't unsubscribe.
I'm sure Adam's changelog news will be great.
Okay, let's get into this week's news.
The Mythical Agent Month.
Wes McKinney has been wondering what many of us have been wondering.
Quote, among my inner circle of engineering and data science friends,
there is a lot of discussion about how long our competitive edge as humans will last.
Will having good ideas, and lots of them, still matter as the agents begin having better ideas themselves?
End quote.
For now, Wes feels needed, but with things changing so rapidly, he wonders how much software
engineering's past will inform software engineering's future.
With that in mind, he decided to revisit one of his and my favorite books on the topic,
Fred Brooks's The Mythical Man Month.
In so doing, Wes discovered that the book's themes are still highly relevant in agentic software,
and that its follow-up essay, No Silver Bullet, predicts the exact problem he's having in his
agentic engineering.
Quote, the accidental complexity is no problem at all anymore, but what's left is the
essential complexity, which was always the hard part.
Agents can't reliably tell the difference.
End quote. Thought-provoking stuff, definitely worth a read.
Install Peon Ping to employ a peon today.
I clicked the copy-to-clipboard button on the brew install command approximately
0.3 seconds after landing on the Peon Ping website. What could possibly be so compelling?
Quote, game character voice lines the instant your AI agent finishes or needs permission.
It works with Claude Code, Codex, Cursor, OpenCode, Kiro, Windsurf, Antigravity, and more.
Never lose flow to a silent terminal again. End quote. With 95-plus sound packs
and counting, there's a game character voice in here for everyone.
I sampled a bunch, but I'm sticking with the default Warcraft 3 orc peon for now.
Ready to work.
Give it a try. There is a 0% chance this won't bring some joy to your work life.
Me, not that kind of orc.
Ladybird adopts Rust.
One popular segment of our Ladybird pod with Andreas Kling and Chris Wanstrath was when
Andreas told us they were leaning toward Swift as their C++ replacement.
Well, well, well.
How the turntables.
Quote, we previously explored Swift, but the C++ interop never quite got there,
and platform support outside the Apple ecosystem was limited.
Rust is a different story.
When we originally evaluated Rust back in 2024, we rejected it because it's not great
at C++-style object-oriented programming.
The web platform object model inherits a lot of 1990s OOP flavor,
with garbage collection, deep inheritance hierarchies, and so on.
Rust's ownership model is not a natural fit for that,
but after another year of treading water,
it's time to make the pragmatic choice.
Rust has the ecosystem and the safety guarantees we need.
Both Firefox and Chromium have already begun introducing Rust into their codebases,
and we think it's the right choice for Ladybird, too.
It's now time for sponsored news.
What spec-driven development gets wrong.
Spec-driven development has a decay problem.
Design docs go stale soon after they're written,
and nobody gets rewarded for keeping them current. That was annoying before, and now it's
getting dangerous. AI agents are following stale specs confidently, executing plans
misaligned with reality, without ever flagging
the drift. Here's what Amelia Wattenberger, product lead for Intent at Augment Code, says in this post,
quote, every documentation first initiative in software has failed for the same reason. It asked
developers to do continuous maintenance work that nobody sees and nobody rewards, end quote.
Augment's answer is bidirectional spec maintenance.
Agents don't just read the spec, they write back to it.
What happens when an agent discovers an existing auth context?
It wires into that and updates the plan.
If agents can write code, they can update the spec.
This only works when the agent actually understands your entire code base.
That's exactly what Augment's context engine is built to do.
It opens the door for specs that get more accurate over time, not less.
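To make the write-back idea concrete, here's a minimal sketch of what bidirectional spec maintenance might look like. Everything in it (the Spec shape, the discoverAuthContext helper, the step strings) is an invented stand-in for illustration, not Augment's actual product or API:

```typescript
// Hypothetical sketch: the agent treats the spec as writable state,
// not read-only input, so the spec gets more accurate over time.

interface Spec {
  steps: string[];
  notes: string[];
}

// Stand-in for a codebase scan: the agent discovers an existing auth
// context instead of the from-scratch work the stale spec assumed.
function discoverAuthContext(): string | null {
  return "src/auth/context.ts"; // pretend this module already exists
}

// Reconcile the spec with what the code actually contains, and write
// the correction back instead of silently executing the stale plan.
function reconcile(spec: Spec): Spec {
  const existing = discoverAuthContext();
  if (existing === null) return spec; // nothing to correct
  return {
    steps: spec.steps.map((s) =>
      s === "implement auth from scratch" ? `wire into existing ${existing}` : s
    ),
    notes: [...spec.notes, `updated: auth already exists at ${existing}`],
  };
}

const stale: Spec = {
  steps: ["implement auth from scratch", "add login page"],
  notes: [],
};
const updated = reconcile(stale);
console.log(updated.steps[0]); // the spec now reflects code reality
```

The point of the sketch is the direction of the arrow: the discovery flows back into the document, so the next agent (or human) reading it starts from a corrected plan.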
Learn more at Augmentcode.com or follow the link to the full blog post in the
companion newsletter.
Cloudflare's new code mode technique.
MCP is, for now at least, the standard
way AI agents use external tools, but it sure does fill up the model's context window with a lot
of cruft. To combat this, Cloudflare came up with code mode, which rhymes, so you know it's good.
Quote, code mode is a technique we first introduced for reducing context window usage during agent
tool use. Instead of describing every operation as a separate tool, let the model write code against a
typed SDK and execute the code safely in a dynamic worker loader.
The code acts as a compact plan.
The model can explore tool operations, compose multiple calls, and return just the data it needs,
end quote.
As a result of this, Cloudflare created a new MCP server for their entire API.
Quote, with just two tools, search and execute, the server is able to provide access to
the entire Cloudflare API over MCP while consuming only around 1,000 tokens.
The footprint stays fixed no matter how many API endpoints exist.
End quote.
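As a rough illustration of the pattern, here's a sketch assuming a toy typed SDK: the Zone shape, listZones, and purgeCache below are invented stand-ins, not Cloudflare's real API, and the execute function just runs the plan in-process where Cloudflare would use an isolated dynamic worker. The model-written script composes several calls and returns only a compact result:

```typescript
// Toy typed SDK the generated code would run against.
interface Zone {
  id: string;
  name: string;
  status: string;
}

const sdk = {
  async listZones(): Promise<Zone[]> {
    return [
      { id: "z1", name: "example.com", status: "active" },
      { id: "z2", name: "example.org", status: "paused" },
    ];
  },
  async purgeCache(zoneId: string): Promise<{ ok: boolean }> {
    return { ok: zoneId.length > 0 };
  },
};

// The "execute" tool: run model-written code and return only its
// result, instead of streaming every tool call through the context.
async function execute<T>(plan: (api: typeof sdk) => Promise<T>): Promise<T> {
  return plan(sdk);
}

// A plan the model might write: explore, compose two operations,
// and hand back just the data it needs (names of active zones).
async function main(): Promise<string[]> {
  return execute(async (api) => {
    const zones = await api.listZones();
    const active = zones.filter((z) => z.status === "active");
    await Promise.all(active.map((z) => api.purgeCache(z.id)));
    return active.map((z) => z.name);
  });
}

main().then((names) => console.log(names));
```

The context-window win comes from the last line of the plan: however many SDK calls the script makes internally, only the small returned array re-enters the model's context.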
This is a good example of what I've been talking about a lot recently:
the models are getting marginally better, but the traditional software engineering around the models
squeezes out huge wins by equipping them better and better.
The only moat left is money.
Here's Elliot Bonneville.
Quote, every morning a few thousand people wake up and ship something.
A tool, a SaaS, a newsletter, an app that does what the other app does, but slightly differently.
They post it on Hacker News.
Nobody clicks.
This is not new.
What's new is the scale.
An AI can wake up, or whatever it does at 3 a.m.,
and ship 12 of these before breakfast.
Creation used to be the scarce thing, the filter.
Now attention is.
Most of us are on the wrong side of that trade.
End quote.
Elliot says the effort it takes to build something
is trending down, but the time we collectively have on this earth is fixed.
In a world where attention is at a premium and slop abounds,
attracting attention to your creation
means you'd better have a head start, or a lot of money, or both.
Quote, the uncomfortable version.
If you're not already moving, you might never take off.
The cost of acting like this is true when it isn't:
you move fast and spend money you didn't need to spend.
The cost of acting like this isn't true when it is: that's permanent.
End quote.
Elliot's take is more doomery than I believe is warranted today,
but I can't blame him.
He shipped something new last week, and he's trying to attract attention to it.
That's the news for now, but I'd like to take a
moment to sincerely thank each and every one of you for reading, listening, submitting links,
commenting, and just supporting my work all these years. It's been something special. Have yourself
a great week. Reach out and stay connected with me if you like my work. And I'll talk to you again
someday.
