The Changelog: Software Development, Open Source - Agentic infra changes everything (Interview)
Episode Date: October 30, 2025. Adam Jacob joins us to discuss how agentic systems for building and managing infrastructure have fundamentally altered how he thinks about everything, including the last six years of his life. Along the way, he opines on the recent AWS outage, debates whether we're in an AI-induced bubble, quells any concerns of AGI and a robot uprising, eats some humble pie, and more.
Transcript
Welcome, everyone. I'm Jared and you are listening to The Changelog,
where each week, Adam and I interview the hackers, the leaders, and the innovators of the software world.
We pick their brains, we learn from their failures, we get inspired by their accomplishments,
and we try to have a little fun along the way.
On this episode, it's Adam Jacob, long-time open-source community member,
founder of Chef and now System Initiative. Adam joins us to discuss how agentic systems for building
and managing infrastructure have fundamentally altered how he thinks about everything, including the
last six years of his life. Along the way, Adam opines on the recent AWS outage. He debates whether
or not we are in an AI-induced bubble. He quells any concerns of AGI and a robot uprising. He eats
some humble pie and more. But first, a big thank you to our partners.
at fly.io, the public cloud built for developers who ship.
We love fly, you might too.
Learn more at fly.io.
Okay, Adam Jacob, back on the changelog.
Let's do it.
Well, friends, I don't know about you, but something bothers me about GitHub Actions.
I love the fact that it's there.
I love the fact that it's so ubiquitous.
I love the fact that the agents that do my coding for me
believe that my CI/CD workflow begins with drafting YAML files for GitHub Actions.
That's great.
It's all great.
Until, yes, until your builds start moving like molasses. GitHub Actions is slow.
It's just the way it is.
That's how it works.
I'm sorry, but I'm not sorry because our friends at Namespace, they fix that.
Yes, we use Namespace
to do all of our builds so much faster.
Namespace is like GitHub Actions, but faster. I mean, like, way faster.
It caches everything smartly.
It caches your dependencies, your Docker layers, your build artifacts, so your CI can run super fast.
You get shorter feedback loops, happier developers (because we love our time), and you get fewer
"I'll be back after this coffee and my build finishes" moments.
That's not cool.
The best part is it's drop-in.
It works right alongside your existing GitHub Actions with almost
zero config. It's a one-line change. So you can speed up your builds, you can delight your team,
and you can finally stop pretending that build time is focus time. It's not. Learn more: go to
namespace.so. That's namespace.so, just like it sounds. Go there,
check them out. We use them. We love them. And you should too. Namespace.so.
We're back with our good friend, Adam Jacob.
And Adam, there's always something, some sort of outage around our conversations, some sort of big event.
There's something in open source.
There's a debacle.
There's an outage, in the case of AWS recently.
And I actually was at my son's ninja training last night
and overheard a parent discuss the AWS outage.
So it permeated to normal folks.
Safe to say everyone was hit by this.
Yeah.
What's the juice?
What are your thoughts on this?
Is it just one machine in Ashburn, Virginia,
that's running this thing?
What's the state of our cloud?
Yeah, that's a good question.
This was an interesting one to watch as an old guy.
Okay.
You know, like, I feel like an old guy in this. Like, I helped build the early internet.
And then, you know, like I went through these waves.
Right.
And we spent a long time in the sort of early part of the DevOps movement,
trying to figure out like how people should react when these outages happen
and sort of like the HugOps movement, and trying to be like,
hey, you should have some empathy, you know?
And like, there was just none of that that I saw this time. It
went straight to brutality, you know. It was like, someone must die. Yeah, it was like we were
playing Mortal Kombat, man. You know, like, finish him. Yeah, totally. Corey Quinn wrote this article that
was, like, basically: this happened because all the smart people left. That was my TLDR of his
article. And I was like, hey, I couldn't imagine writing that article. All love to Corey as a person or
whatever, but I was just like, if I worked at AWS, and it was like, all the smart people left...
And so now the outages are coming.
That one hurts.
You're like, oh, oh, brutal.
Also, who do you think built those old systems that are failing?
It was the smart people that you're lamenting having left.
You know, it's not the new guy.
The new guy didn't put that together, you know?
It was the old timers who put that together.
And that's what failed you now.
Yeah, they're just trying to keep it going, you know?
The other take was, you know, they're cutting over to their AI SREs or whatever.
Sure.
It's AI's fault was the other take.
It was all AI slop or whatever.
And like, I don't
know. All of that feels crazy to me. Like, obviously we have no idea. Not really, right? I mean,
they've said some things. I'm sure it was DNS. You know, I'm sure it was DNS. We're all pretty sure
it was DNS. That feels like an easy bet, you know. And so I was just struck by
a couple of things. One was the brutality, which, like, you know,
I think we could bring a little more of the empathy and humility back into the equation. Just,
like, you know, if today you weren't affected by the outage... We saw a lot of people who were like,
well, we were smart, so we weren't in us-east-1, because that's where all the outages happen.
We used GCP.
And I'm like, well, what about all the tiny outages for the global backplane in Google that keep happening to you all the time?
And, you know, they don't talk about those, because it's our moment to bag on the dummies.
Right.
Who are whatever, people who didn't make the choices I made.
And like, I don't know, my experience in all of these things is that even when you really try hard to design for resilience, you can build systems that are very resilient to failure.
Right.
and it is hard to do
and that's awesome
and even when you do that
they will fail
in ways that you did not expect
and they will by definition
be difficult to deal with
because otherwise you would have expected them
I had a conversation with someone who sort of got mad at me,
because I said, you should have some empathy, because your day is coming.
You know, like, if today wasn't your outage day, congratulations.
Tomorrow will be your outage day. So get ready for that one.
And they were like, you know, we're good engineers.
And I'm like, man, you cannot good-engineer your way out of this.
Is that your best Kermit the Frog impression?
Is that what that was Kermit?
I don't think so.
I can do Kermit the Frog.
Yeah.
Your Outage Day will come.
You will not enjoy what happens when your sight goes down.
What you will need is rainbows and hugs.
Oh, that was... we do best-worst at dinner.
And that's my best today.
It's not your best.
Adam Jacob impersonated Kermit the Frog.
That was the best.
Yeah, I once did an entire, like, a scene from Romeo and Juliet as Kermit the Frog.
Oh, wow.
Yeah.
I was on video.
The next time you have a company-wide announcement, pull it out.
I'll do it as Kermit.
I'll do it as Kermit the Frog.
There you go.
No.
Psychoanalyzing, if we will, the public response to this:
I was trying to figure out why it was so gnarly this time around.
And I feel like maybe it's because at this point, AWS is kind of the man.
You know, like, they're just...
They're like the Darth Vader of cloud services.
I mean, maybe not that evil, but, like, that important and scary. And, you know, when Darth Vader loses, you're like, awesome.
Yeah, it's easy.
Like, when somebody shows up and goes, you should have empathy for Amazon, for AWS, you know?
It's easy to be like, yeah.
You know?
Yeah, exactly.
It's not like AWS has empathy for me, you know?
Right.
Like AWS is a shark.
They're out there eating to live, man.
I don't know.
I appreciate that about AWS, actually.
I appreciate the part.
they're very transparent about the fact that what they're in it for is the dollar bills.
You know, like, I know where I stand.
You know what they're all about.
Sure.
Yeah, no one's pretending that we're doing something we're not, you know?
And they're so good at it.
I mean, there's a reason what they're in the big dog.
It's because they are very good at it.
And it's not because they're bad at it.
And yeah.
And so, yeah, I think some of it is just that it's AWS, for sure.
I think some of it is that everything does actually move in cycles.
And, you know,
the cycle that brought us HugOps, and the cycle that brought us a lot of the first generation of root cause analysis and more empathy for outages, all those sorts of things: that generation came out of an era where it was really hard to build these sorts of systems and stay resilient on the internet. When we were telling you those stories, it was like, how do you build a data center that doesn't go down? How do you think about scale? Those were all new concepts. Now they're not, and people have been, you know, trying
to implement them and trying to do those things and trying to put those good practices into
place. And it's become a very corporate sort of piece of the of the story. And so now you've
got people who are second, third, fourth generation, maybe even of trying to build these
scalable, resilient systems. And so they don't, they don't remember what it was like before
those things happened. Like, they don't actually have empathy for the guy who built the system
kind of wrong, you know, because either they're new and they've just never experienced it or
because they came up in a different way and they were just told, this is how you do it.
You know, so like if you started your career being like, well, you can always deploy to multiple
availability zones and across clouds and you can stream the blah, blah, blahs. And, you know,
like, I put all my stuff in a CDN. Like, I remember when there weren't CDNs. And so you just had
to be like, do you have enough web servers, you know? And like, now you have those things.
And so I think there's a, there's a shifting of the technology landscape that also shifts
people's perspective on what those outages are and what those problems are like and what the
expectations are. And then, you know, Amazon's laying people off. There's also growing AI fear, AI
slop. All those things are in the air too. Sometimes the AI marketing feels tawdry.
That's the word I'll use for it, you know? And it's tough. I think all of that comes together
to be like, okay, AWS has an outage and now it's Mortal Kombat time, you know? Right.
Finish him. Yeah. Well, it's also just fun. It's fun to finish people. It's been, you know, in motion too. With the heart out, holding it up over your head. Yeah, all the beats. The finish-him has kind of been a slow burn too, in terms of the cloud exodus. And you've got those who have to be on-prem, those who want to be on-prem. Yeah. Those who desire to only have their two racks, paid for by themselves, and manage that. Or the two machines, and maybe it's two racks, I don't know. I mean, that's not that hard to manage with a
decent team, versus paying the cloud millions over years, you know, using the DHH argument, so to
speak.
Yeah.
So it hasn't been like this.
I think maybe now is the finishing moment for some folks, but maybe is the finishing
moment coming from non-technical folks or technical folks and non-technical folks?
I think it's like, just everybody.
Just everybody.
Yeah.
I think it's everybody.
I think everybody's ready for it to change.
I think like on a technical front, there's a bunch of really interesting things happening,
right? So like, if you're building new AI data centers, you need all these GPUs. That's where
the focus is. But you also need high-performance compute, because the agents that are running in
those situations, they're running on normal CPUs. They're not running on the GPUs, right? And so
the data center design there pushes you toward on-prem, low-latency networks. It pushes you
towards simpler compute, you know.
I was having a conversation with some people who build those data centers.
And they were like, yeah, all our customers want is tons of GPUs,
super fast networking, and bare metal compute.
They don't want any layers in between, you know?
They're just going to bootstrap, like, giant high-performance
compute clusters, and they're just going to knock out all the stuff in the middle.
And it's interesting when you think about those workloads causing their own kind of gravity,
where, you know, as soon as you start to have a data center,
you start to have bare metal compute.
You start to have fast networks.
Then suddenly you start to ask questions.
Like, well, hey, like, what else could we run on that compute?
We do have this fast network right here.
What else could we do?
And it starts to push you in this very strange direction where it's like, oh, actually, for the workloads we were running,
it kind of did make a lot of sense to be like, well, let's just go to AWS.
And I can get burst capacity and I can do all this stuff.
And now you're like, well, actually, you know, hold on a second.
If I have to pack this data center filled with compute anyway,
and it needs its own little nuclear power reactor sitting next to it,
like, you know, suddenly it's not so crazy person talk
to imagine running all your gear in a different way
or even building your applications in a different way.
And, you know, the AI thing feels like it's,
it already feels like it's in late innings, you know,
because it's been moving so fast, but it's not.
It's absolutely in the earliest of innings
where we're just trying to figure out, like,
what even is this technology?
How even will we think about it?
What even are the implications?
And like, yeah, the cloud repatriation thing,
I think is actually a bigger deal than people are giving it credit for
because the dynamics are pushing you toward being on-prem:
fast networks, simpler compute.
Right.
My exposure to this world is really through the lens of Home Lab,
which is why I sort of get on this soapbox and preach it a bit.
But I recently went to the Linux Fest here in Austin, Texas.
It was awesome.
A really cool, just homegrown bunch, a real just kind of regional feeling, didn't feel overwhelming at all.
And I forget the fellow's name, but he gave a class, essentially a workshop, on bootc.
And I was just thinking how easy it is now to instantiate images. It was once really hard to kind of make your own distro.
Yeah.
You know, bootc has opened up so many windows for me.
I'm just thinking about it in, like, Proxmox, in my home lab scenario.
You've got so much more power in the hands of folks at that level to really run your own bare metal now.
Yes.
And that's going to come back around again.
Like that's happening because people are doing it.
Once you start doing it and the macro trend starts pushing you in that direction,
people are going to start being like, you know what sucks?
Trying to find the SSM parameter for what the right revision of Amazon Linux is.
You know?
Like, the only way I do that anymore is with the little AI agent for System Initiative, by saying,
figure it out. Because, like, I don't know. I've done it a hundred times and I still don't
remember, you know. But when you think about that loop... like, so much of
the technology layers, they have been moving forward. They weren't stagnant. And yeah,
I think there's more afoot here. There's a lot afoot here.
Well, people want control back too. Like, control was... you know, the original idea was, and maybe for some still is:
hey, there's a cloud here. I can launch today. I don't
have to build the infrastructure, pay for the infrastructure, or even understand how to run the
infrastructure, necessarily.
Yeah.
You can sort of level up.
And then you have this new world where you're like, well, fine.
We've sort of... the choice initially was, let's move fast,
so let's use somebody else's investment: the cloud.
Now we have proven our model, our product or whatever, and we've matured.
Now we're just burning cash, because we have no idea what our bill even is.
Like, there's so many servers out there,
maybe three or four different accounts
floating around. We've got
bills going out the wazoo to
AWS, and there becomes a lack of clarity.
And there's even, like, a cottage
industry of... I don't know what they're called... like, bill
analyzers, essentially.
Yeah, you know what I mean? So, I mean, that shows it's a problem.
Cost optimization, yeah.
Yeah, cost optimization, you know.
And now you've got this push back to say,
let's actually reexamine, from first principles,
this problem set. Do you really
need to be in their cloud?
We need a cloud.
Why not our cloud?
Yeah.
And what's going to happen is that all of those trends are happening all at the same time.
So like we're going to figure out, okay, AI brings a new interaction model to the table.
It brings capabilities that didn't exist before to every layer of this stack.
So like a good example was we're building like a continuous delivery example of just taking a complex application, deploying that thing up to AWS.
and then showing how all the pieces fit together with System Initiative.
And it's been really interesting to work with this AI agent
where you're working with the source code and the infrastructure
at the exact same time in the same context window.
And the things that it figures out how to do,
like, I needed to add, you know, ElastiCache
in order to figure out how to do session storage,
basically, because we're just reinventing legacy problems,
and to show how you would solve it.
And, you know, it solved that problem for me
by analyzing the source code,
looking at how we implemented it.
I said I was going to use ElastiCache.
I wanted to use IAM for auth.
It wrote the, like, weird signing code
to figure out how to do the right thing.
It cycles the key every 10 minutes automatically
so that it will always have a fresh connection.
It knows how to like, and then it deployed the infrastructure
and wrote the application code,
and then we pushed it up and it worked.
And that loop is the,
loop that the people who are going to be building those internal infrastructures have now.
So when you think about it... like, when you think about it as, oh, they're going to go
back: there's a version of the story where they're going backwards, where what's
happening is, you know, I'm putting my backpack on and I'm going back to
data centers, and I'm going to rack the servers, and I'm going to put the operating systems
on them, and I'm going to configure them by hand. And like, no, no, that's not actually what's
going to happen on the whole. What's going to happen is these new capabilities are going to show up,
along with the ability to run high performance compute,
people are going to start to realize
how much power comes out of that high performance compute.
And that new style of deployment
and of software development is going to get applied to that problem.
It's not going to look like it looked in 1996.
It's not going to be like, well, I set up my BOOTP server.
Like somebody's going to figure out how to put the new loop into that system,
at which point we're all going to be like, it's going to make the cloud look slow.
You know, you're going to be like,
why am I dealing with all this legacy cloud stuff, this
garbage? Like, oh yeah, the black box is actually the bug, not the feature, you know? Like,
you wanted the magic for a bit there, right? You're like, great, we needed magic. Right.
We need it right now, because we don't have the money or the talent to do the magic. We need
the magic. My five friends who started this company called iLike... you won't remember iLike,
but iLike was, like, one of the first music services on Facebook. It was one of the first
Facebook apps. It was a music-sharing Facebook app, and they were, like, the first viral
Facebook app. And they literally had to beg... our friends,
the guys who ran it, I'd worked with for a decade.
And like, we, you know, they literally were begging us for gear.
They were just like, can you give us servers?
They just couldn't rack them fast enough to keep the site up.
And then within a month, AWS launched EC2 and we helped them just automate running that
burst load into EC2.
And it was good.
They didn't have that problem anymore.
That problem disappeared for the entire internet.
Amazing.
And also, like, boy computers are a lot faster now.
boy, caching's a lot better.
Boy, the architecture has fundamentally shifted.
You know, like, we used to have to deliver the web tier to you.
I don't have to deliver web tiers very much anymore.
I just throw it to a CDN.
I move on with my life.
So, like, if it's talking to my data center in the back end,
like a lot of that bursty load doesn't actually happen the same way it used to, right?
And, like, we just haven't quite caught up yet as an industry, of course,
because it's happening in real time to the fact that all of these new capabilities,
like, they're going to create a new wave of innovation.
that will actually change the way we work.
So, yeah, I don't think the cloud repatriation thing is going to happen
by going back to the way we've been doing it.
It's going to happen because we're literally going to invent a new way of working
and we're going to be like, this thing's sick and it works with gear.
And, you know, it has a totally different new loop.
Who's going to build it?
Is it the people who are going back to the old way but don't want the old way anymore?
And so they're doing it for themselves.
And now they're going to abstract tooling because they want the new way of life
on their own stack.
I'm thinking of, like, Rails.
Yeah, and that's what I'm thinking of in particular,
but I don't know if he's necessarily going to build it.
But is it going to be someone like that,
or is it going to be somebody who is trying to just solve that one problem?
I don't know. It's hard to say.
Like if it happens like the original, like the old school happened,
it'll happen because the practitioners will do it.
Yeah.
And they'll do it inside their houses kind of quietly.
And then eventually they'll all meet up in some kind of weird
Cambrian explosion of realizing that they all do it this same interesting, new, different way,
and then they'll productize that.
I don't know that that's the same arc because, like, the industry's grown so much,
venture capital has grown so much, access, you know, like so much of that is different.
But I would argue that the size of the opportunity is big enough because the amount of open
field running that it creates in terms of like, you know, the first wave of it is always just
like, well, there's this new technology.
How do we apply that new technology to what already exists?
It turns out it's kind of, it's fine when you do that.
Like the results are fine?
They're fine.
They're probably not great, but they're fine.
But if you design systems to do it, if you're like, aha, now I know that I have this capability.
I'm going to design the whole stack around the fact that this capability exists.
How would that change what I engineer and how would it change the end user experience?
That's dramatic.
And so I think the people that are going
to build that for you, they're going to be the people who figure that part out. They're going to
be the ones who go, okay, I'm open to what the changes are in the
technology. I'm open to the possibility that there's a different way of doing it. That's either
going to be because they're young and they don't know any better, and they're going to
build it. And then our reaction is going to be like, well, that was dumb. You know?
Like, I was talking to somebody who built Docker, and they used to go into Solaris shops, and the
Solaris heads would be like, I'm never using Docker. I have zones. Miss me with your subpar
technology. And, you know, we'll do that to them. We'll be like, I don't
do Docker. I've got zones, you know? Right. And, you know, eventually your zones are just
running Dockerfile instances, you know? Right. You still have Dockerfile syntax. You still have
Containerfile syntax. Like, Docker's not going to go away. It's embedded forever into the fabric.
Yeah, but we all had that. I had that argument too. I was like, why would you do that? You need
configuration management. And they're like, nah. Nah. Okay. Sure, grandpa. You know?
But we don't need configs.
It's going to be like that, you know?
And so, like, maybe it'll be some of the us, some of the, like, people who have come before who, who still have the spark in us and we go figure out how to innovate in this way.
Or it's going to be people who don't and they'll do it.
I don't know which one it'll be, but you can make good bets.
It'll be fueled by venture capital dollars.
Yeah, I was thinking, like, what if there was a brand new digital ocean that was starting in 2026?
And like, yeah, that might be the kind of entity that builds
totally something like this. It would. But, like, think about it. Like, you know, think about,
like, Oxide, who's close to this already, in that they took the cloud paradigm and they stuck it in
the data center, right? You know, I wouldn't put it past them to ratchet it one notch further
and go: wait, why are we stuck trying to give you the cloud's paradigm if we could
deliver you a better one? Right? Like, what if we delivered you a better user experience than the cloud?
What if we delivered you a better everything? So why not
just keep going?
And like, I don't know the answer to that.
I don't have any secret inside Oxide knowledge or whatever.
But it's a good example of how, I would argue, those guys are
the old school, right?
Right.
Like, nobody would look at Bryan and be like, Bryan's not old-school engineering at this point,
you know. But I wouldn't put it past them, you know?
Like, they've shown a remarkable capacity to change.
So like, I don't know where it's going to come from.
What I've become is more convinced that it's more real than people are giving
it credit for. And it's also worse than most people think in terms of its capabilities.
Like, most of us who are doing it are just not getting good results out of the AI work that we're
doing, because we fundamentally misunderstand how it works as a technology. And so then
when we talk about how to apply it to our problems, we're doing that badly too. And so like we're
still in the phase where it's like 10% at best of everything that's using AI is actually good
or is pointing the way at a pattern that's going to be good. And then 90% of it is noise.
And it's incredibly difficult to tell the difference
if you haven't just decided to immerse yourself in it
because it's moving so quickly that you're like,
I don't know.
You know, at some point it just overwhelms you.
You know, you've barely even built something new,
and then Anthropic or OpenAI releases a new thing
on top of the kind of base frontier models
most people are using, and it changes the game anyway.
It's just constantly in motion, in something.
Like we were just talking this morning about skills.
And I'm like, gosh,
I just caught up with MCP servers.
And now they've launched skills.
Right.
And sure, skills are just Markdown files, basically .md versus .sh, but in plain English.
So it's sort of interesting to see how they've gone from a TypeScript-SDK-based
MCP server that you install via the command line to something different that's just simply
adding a Markdown file.
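For anyone who hasn't seen one, a skill in this style is roughly a folder with a Markdown file in it: a little frontmatter, then plain-English instructions. The fields and wording below are illustrative, not the official spec:

```markdown
---
name: deploy-review
description: Hypothetical skill the agent loads when asked to review a deploy
---

# Deploy review

When the user asks for a deploy review:

1. Summarize the diff before touching anything.
2. Check the staging environment first.
3. Only then propose the production change.
```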
Well, it has slightly different use cases, slightly different capabilities.
And like, you know, this is back to AI
being early innings. It feels late.
There's so much noise. There's so
much money. Everybody's like, there's a bubble.
I saw a very convincing presentation from
a bunch of investment bankers
that there is not an AI bubble.
It was very convincing.
What's their central premise?
Yeah, I was going to say, give us the TLDR.
Give us the central premise.
What's the juice? Give us a juice.
The central juice was that if you look at the
forward multiple the market is paying for
the growth numbers that are being put up, the actual
revenue growth numbers, they're
actually relatively modest in
historic terms. They're not giving them growth multiples that are beyond
mortal man. And so when you actually look at the fundamentals of the companies that we're
discussing, their fundamentals are actually pretty good, and getting better. And typically,
when things are bubbles, that's not what you see. What you see is dramatically higher growth
multiples that don't make any sense, where you're like, you know... And we've seen that in
private investments. Like, in '22, things were crazy. Yeah.
2009, 2001, right?
Like 1999.
It's not Pets.com.
Yeah.
You know?
Like, it's not InfoSpace, where, like, the emperor literally had no clothes except
hype,
but they hyped themselves to a market cap bigger than Microsoft.
That was a bubble, you know?
That was going to explode.
In this case, it's like there's a lot of money moving into it, a lot of venture capital
cash moving into it.
That's all true.
Private valuations: it's easy to talk about them as if they were
public market valuations. They're not. So the resiliency of private markets
to paying those multiples is way different than regular people's. And so their
analysis was basically: look, in the main, the way the market's reacting is in line with the
actual growth curve that's happening, and is being relatively modest around the
values that they're paying in multiples. And then everywhere else is depressed. So,
if you're not in those sectors, then your values and multiples are down.
And so it doesn't feel great.
And so you look at it emotionally, and you're like, it's a bubble.
And that sends more money into that particular space as well, right? Because there's less.
Because there's less.
Yeah.
Their argument then was also what it does is drive investors to find alpha, right?
Yeah, exactly.
So now what they're going to do, they have to find growth somewhere.
And so that's what's driving IPOs.
That's what's driving M&A, like all of those things are happening.
because that the, yes, those AI stocks are outperforming.
Also, you can't beat them.
So there is no, there's no better alpha than those.
So you just, so, so if you want to do better than just betting all your money on the
Magnificent Seven, the only way to do that is to find other unseen growth stocks and put your
money there.
And those are all embedded with the Magnificent Seven.
Yeah.
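The forward-multiple argument can be made concrete with toy arithmetic. The numbers here are invented for illustration; the point is only the shape of the comparison, a multiple that growth justifies versus one that only hype can:

```python
def forward_multiple(price: float, next_year_eps: float) -> float:
    """Price divided by expected next-year earnings per share."""
    return price / next_year_eps


def multiple_vs_growth(multiple: float, growth_pct: float) -> float:
    """PEG-style sanity check: the multiple relative to the growth rate (%)."""
    return multiple / growth_pct


# A company priced at 120, expected to earn 4/share, trades at 30x forward;
# with 60% growth, that's a modest 0.5x of multiple per point of growth.
steady = multiple_vs_growth(forward_multiple(120.0, 4.0), 60.0)

# A company priced at 100 earning 0.25/share trades at 400x forward;
# with only 20% growth, that's the "no clothes except hype" shape: 20x.
frothy = multiple_vs_growth(forward_multiple(100.0, 0.25), 20.0)
```

On this framing, the bankers' claim is that today's AI names look more like the first case than the second.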
So the weird part to me is the circular deals.
I mean, that's the part that is just crazy.
Yeah.
that's crazy. I agree. I just don't get it. I don't understand how it all makes sense,
but apparently it might. I mean, I understand how the deals get done for sure. Yes, how they get
done, but do they make any sense? I don't know about that. I don't know. I don't know if they
make sense either. Yeah. I don't know that they do, but by their argument, which was very convincing
was that that's not a structural threat to the economy. You know? Like, yep, that could be bad. Right. And, meh. You know? And whatever. I'm not sophisticated enough to make an argument one way or the other.
Yeah, I'm not either.
I just think when one of those organizations is Nvidia,
which happens to, you know, be the darling of our economy right now.
Yeah, yeah.
If they have a big correction, I think everything does.
But maybe not.
If the Magnificent Seven corrects, the economy corrects for sure.
I don't know that that would mean that it's a bubble.
I think it would, do you know what I'm saying?
Like, it's different.
Like, we're perhaps parsing.
You're saying, yeah, we are when we are.
The demand is there.
That's the overall premise.
The demand is there.
The demand is there.
The demand is there.
We're not building ahead of demand.
We are not going to be over provisioned in 2028 and have way too many data centers.
In AI data centers.
We are not.
We are not.
And there's like a lot of people who are hoping that's true who are probably listening to this podcast.
We're like, I hate AI.
And I like cross my arms and I'm like the whole thing's stupid and you shouldn't use it.
And whatever.
They show up every time I talk about AI on LinkedIn being like, it's all lies.
And you know, like, and they're just wrong.
Yeah, they're wrong.
They're just wrong.
And like, whether they know they're wrong now or they know they're wrong in six months,
like they will learn that they are wrong because the, it is actually transformative
technology.
It actually can make a difference right now.
And yes, 99% of what's being built is not a good use of that technology, right?
Just like 99% of the early uses of search engines weren't better.
You know?
Like, yeah.
We got to figure out how it works.
We got to figure out how to build technology around it.
There's like,
we're in the beginnings of doing all of that work.
And what we got caught up in was this hype cycle that was like, oh, it's going to be AGI,
the magic robots are going to take over.
They're going to just run everybody's jobs.
It's going to become super intelligent.
You know, it's Terminator.
And like, none of that's what's happening.
Like, if you've actually used these systems, the idea that they're going to turn to the terminator is laughable, right?
Yeah.
Like, laughable.
Well, friends, agentic Postgres is here.
And it's from our friends over at Tiger Data.
This is the very first database built for agents and it's built to let you build faster.
You know, a fun side note is 80% of Claude was built with AI.
Over a year ago, 25% of Google's code was AI generated.
It's safe to say that now it's probably close to 100%.
Most people I talk to, most developers I talk to you right now, almost all their code is being generated.
That's a different world.
Here's the deal.
Agents are the new developers.
They don't click.
They don't scroll.
They call.
They retrieve.
They parallelize.
They plug in your infrastructure to places you need to perform, but your database is probably still thinking about humans only, because that's kind of where Postgres is at.
Tiger Data's philosophy is that when your agents need to spin up sandboxes, run migrations, or query huge volumes of vector and text data, well, normal Postgres might choke.
And so they fix that.
Here's where we're at right now.
Agentic Postgres delivers these three big leaps.
Native search and retrieval, instant zero-copy forks, and an MCP server, plus a CLI,
plus a cool free tier.
Now, if this is intriguing at all, head over to tigerdata.com, install the CLI, just three commands, spin up an Agentic Postgres service, and let your agents work at the speed they expect, not the speed of the old way.
The new way, Agentic Postgres: it's built for agents, designed to elevate your developer experience and build the next big thing.
Again, go to tigerdata.com to learn more.
There was a recent Anthropic research document that was a little scary, which I can put in the show notes and paraphrase.
It was like it tried to protect itself when being threatened with being replaced, or something else essentially.
It was like self-preservation.
But then they said in production they didn't see that happening.
Yeah, but like it's pretty wild.
But like it's wild, but like that's recent.
Like that's on their homepage recent too.
Well, the Anthropic CEO is one of the most bullish bulls in the...
And it's in their best interest.
Yeah, like, I was giving a talk to an unnamed organization in the federal government.
And...
Agentic misalignment, sorry.
This is what they call it, agentic...
Yeah, that sounds scary.
So just so we're clear, it wasn't the LLM that got misaligned.
It was the agent, like, in the control loop.
Right.
And, like, if you just think about how LLMs work, you're like, sure, what's the probability of what I should respond with? How many human beings have written, "I don't want to die"?
Like a lot of people put that
into the world. They're like, I'm scared of that.
Dying is bad. Don't die.
So then you're like, okay, what are the odds that the right next word
for the LLM to say is don't die?
And then that emerges
as making tool calls to preserve itself.
That's not crazy person talk.
It's not even magic. You're like, sure.
Like, it's kind of predictable.
But it doesn't make for a super great blog post.
Right? The great blog post is like,
the agent tried to keep itself alive, you know?
Right.
And you're like, right, it was, it found an instinct to self-preservation.
And you're like, it didn't find an instinct to self-preservation.
Listen, it said blackmail.
The word blackmail is in this research document.
Because they gave it, exactly.
They gave it access to tools, like email.
And they were like, oh, you know what I should do?
And like, like, it went off the rails.
Get that CEO.
Get that. Like, like, if you turn me off, I will.
Yeah, exactly.
I get how it happens.
My point isn't that it didn't happen.
Of course it happened.
Of course it happened.
It's just, if you know how the technology works, you're like, yep, okay. Like, that's funny, but I can see how it gets into that loop. And I don't think it's magic. I don't think it's threatening. I don't think we're all going to die. Like, it's a good case for why you shouldn't give it a nuclear bomb.
Let's bring it back down to, like, bare metal, or practicality. Yeah, my usage of generating a lot of software lately is mostly around CLIs and useful tools for me in my home lab, or, you know, as Jared says... what did you say, Jared? Home-cooked meals?
Yeah. Yeah, home-cooked software.
Yes. Right. I have full faith that this is only going to give more jobs to developers, not replace them, because it is smart, but it doesn't have taste. It doesn't have direction. It doesn't have the problem set. It just wants to be useful and solve problems, even if it's totally the wrong way.
Yes, absolutely.
And I was trying to make an MCP server the other day for this thing I'm working on, and I'm like, the docs... have you read the docs yet? I don't want to go read the docs. This is why I'm talking to you. You're the agent. You go read the docs and tell me how it works, in your words. And it's like, that's a great idea. No joke. Good call.
Yeah. Look, exactly.
And this, a bit back to just basic economics, right?
Like, what is the demand
for software systems? And
will those software systems demand
become met, at which point
now there's fewer of us?
As far as I can tell, the demand remains uncapped.
Like, as far as I can tell,
there's no, I have no idea.
When people go, ha, that's enough software.
I don't need more software in the world.
Like, I'm done.
Like, we're cool.
Well, useful software creates more useful software.
Like, you just come with more ideas.
You're like, you know what I want to do next?
This.
And then after that, I'm going to do the next thing.
You're like, you have a list.
We all have backlogs out there.
Yeah.
And so, like, that whole, this whole story, the only part of this story where it gets rid of
humans is the story where we're not talking about the technology we have now.
We're talking about some other technology that is some distant future,
where between us and that distant future
is the invention of fantastically new techniques
that are roughly on par with the technique
that brought you the one we just got or more.
And it's probably not one of them, it's probably dozens.
And then once you think about that
and you're like, okay, now maybe that turns into
some kind of unrecognizable superintelligence,
sure.
Like maybe at that point, it's Star Trek.
Maybe at that point there's like replicators
and like the post-economic.
But like, that's not a useful conversation when it comes down to like, I was building stuff in my home lab.
And I was thinking, like, could there be a great agent loop in the data center?
And you're like, hell, yeah, there could.
Like what, you know what sucks in the data center?
What sucks is trying to figure out how to look at the logs to understand what's going on.
And so maybe what I'll do is build an agent that knows how to log into all the servers and run journalctl, and then bring all the data down and do the analysis for me and tell me what's wrong.
That'll be sick. And we could write that right now, before we get off this podcast. We'd just do it, and it would just work, and it would be sick, and you'd be like, that was amazing. And okay, that's the thing. That's what it is.
And, you know, I was thinking the other day about Kubernetes, and the control loop and the bin packing and all of those things. How would you have written Kubernetes differently if you had LLMs to drive an agent loop?
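That log-triage bot he describes is mostly glue. A hedged sketch of the loop, with the SSH hop and the LLM analysis both stubbed out as plain functions (run_remote and analyze are illustrative stand-ins, not real tools):

```python
# Sketch of the "journalctl bot": fan a log query out to servers, collect
# the output, and hand the combined text to an analysis step.
# run_remote stands in for an SSH call; analyze stands in for the LLM.

def run_remote(host: str, command: str) -> str:
    # Stand-in for e.g. `ssh host journalctl ...`; returns fake output here.
    return f"{host}: oom-killer invoked" if host == "web2" else f"{host}: ok"

def gather_logs(hosts, since="-1h"):
    # Same query on every host; results keyed by hostname.
    command = f"journalctl --since {since} --no-pager"
    return {h: run_remote(h, command) for h in hosts}

def analyze(logs: dict) -> str:
    # Stand-in for the LLM call: here, a trivial heuristic.
    bad = [h for h, out in logs.items() if "oom-killer" in out]
    return f"suspect hosts: {bad}" if bad else "nothing obvious"

print(analyze(gather_logs(["web1", "web2", "db1"])))
```

Swapping the two stubs for real subprocess SSH calls and a real model call is the entire distance between this toy and the agent he's imagining.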
I think it would be different.
I think there's a bunch of,
I think there's a bunch of the decision making
that you would have put into a different part
of the loop of the system.
And I don't know that it would be better or worse,
but it's an interesting thought experiment
to be like, where would I put the LLM?
Where would I put the deterministic side of what it does?
How would that change the user experience
and the user loop of what I want to do?
Like, sick, sick, right?
I don't know what it would be,
but I want it.
Is that what you're building?
Are you building that?
No, I'm building system initiatives.
I'm just thinking about it.
I'm just thinking about it.
I'm passionate about all these problems.
I want to build all these things.
I want to build the journalctl bot.
I'm into all of it.
I'm with you on that.
I think a lot of the consternation, I'll call it, out there.
And that Adam is humanizing for us in human form here.
Is that the same people that have brought us the current technique and techniques
are saying they have the AGI techniques basically locked in their basements until, you know, Q1 of 26.
and then they're going to be unleashed on the world, and all hell's going to break loose.
They don't... I mean, aren't they, though? I mean, that's pretty much what the Anthropic CEO, the Sam Altmans, are talking about.
Yeah, yeah, yeah, they're all talking about it, because it's really good business to talk about it, and it keeps those deals getting done. You know, they keep getting those deals done, we keep paying for those growth multiples, you know what I mean? And, like, do I think it's a dangerous game to be promising AGI? I do. I do. And I think that could go bad.
And also, I don't know.
Is it, it's probably fine.
Well, I just think that these two things juxtapose: it's not a bubble, and the people in the non-bubble are promising a thing that you don't think they can deliver on.
I mean, to me, that sounds like...
I think most of what people are buying isn't that right now.
Like, no one's buying AGI.
No one's going to.
Super intelligence.
We're buying super.
They changed the world.
You're not buying super intelligence either.
There's no superintelligence to buy.
I'm buying a... what did, uh... right now you're buying access to the LLMs.
That's what you're buying and not having to run the inference on your own.
A genius golden retriever on acid.
That's what I'm paying for.
Yeah, that's basically what you're buying now.
That's right.
That's good enough for me, by the way.
Well, what's amazing is the genius golden retriever on acid, if you build the system around its existence, will dramatically outperform.
It's crazy.
We had somebody, they had a production outage.
They had no data in system initiative.
The prompt they put in was, I'm having a production outage.
Here's the data.
Here's the evidence.
Go discover the infrastructure and system initiative you need to troubleshoot it and tell me what's wrong.
And 15 minutes and 700 components of discovery later, it told them the bug.
And like, that's bananas.
That's crazy person talk.
And it happens all the time now.
It's just not evenly distributed.
You know, like not everybody's seen it, not everybody knows.
And the ways that those systems compose, you know, if you have a big
Terraform repo or whatever right now and you try to do that same trick, it doesn't work as well.
And so you're like, oh, this thing, maybe it's the AI that's bad.
You know, the AI didn't figure out to go read the docs, you know?
And you're like, oh, I don't think that means the AI is bad.
I think it means that Adam is bad at prompting.
You know, Adam probably should have said, read the docs, plan one, plan two, plan three.
And like, you know, he'll get better.
He won't make that mistake twice, you know?
Never again.
Never again.
Never make the same mistake twice.
But, like, that's the thing. When I talk about it not being a bubble, and I'm not defending the bubble position one way or the other, I think the argument that says it isn't is the one that says: look, are we in a hype cycle?
Yes.
Are we, uh, is that hype cycle reaching beyond its abilities?
Yes.
You know, but we had a hype.
We've had lots of hype cycles, right?
I mean, going back to Kubernetes.
Kubernetes is supposed to be the operating system now?
But we're not installing Kubernetes in these AI data centers.
No matter how hard they try to convince you that you should.
Like what people actually want is bare metal compute.
Here's the thing, though.
Here's the thing.
We just went from the horse to the car.
Yeah.
That's what we did in our time, right?
Like we're living through the invention of the car.
Yeah.
It's going to be crazy.
It's going to be crazy.
It's going to change everything.
And of course there's going to be hype everywhere, and we don't know what to do with it. You know, we're like, there'll be flying cars, there'll be space cars. What if all the cars were underground in car tunnels, and then the whole world could be a park? What if... you know? Like, I don't know. I'm sure we had crazy ideas.
And, uh, yeah. What I'm ready for, as, like, a practical technologist... you know, I'm not a researcher. I don't hang out writing papers. I build stuff, and that's what I like to do.
And what I can say as a practical technologist is like, whoa, building stuff with this kit is fun.
It's so fun because, like, it can do things that are really freaking cool and that you could not do a year ago.
And that's epic.
And I don't know what that means, you know, in the macro.
Nobody knows, really, right?
But it is real.
It's not an illusion, you know?
Unless you have AGI in your basement locked up, like, you know.
Sam Altman does.
If Sam Altman has AGI locked up in his basement,
Sam Altman would be doing better at delivering LLMs to us.
Right? If Sam Altman had a secret AGI,
ChatGPT-5 would have been a lot better.
Well, that's the other thing that people are saying on the other side is like,
well, they aren't getting that much better, that much faster,
but they are stinking good.
I was like that much.
But you also don't need them to.
Right.
Because what we're learning, like what we're learning about these systems is that the less
you use them, the better you are.
Yeah, you build the software systems around them.
Yes, it's what you plug into them that matters.
Right.
And so, like, the first stage of this was make the LLM do everything.
That's why we were all like, make it do math.
What a ridiculous thing to ask it to do.
It's always going to be awful at math.
Like, the technology is like anti-math.
It's like, oh, what am I?
Random and non-deterministic.
So why do I care about whether it can do math for me when I have calculators?
Why don't I just plug in a tool that does math, and then I'm guaranteed that the math is right all the time?
Because you know what it doesn't do?
Hallucinate numbers that it received after it received them from a tool.
I've never seen that happen.
I've never seen it be like, oh, here's the number.
And then it's like, oh, I changed it for you.
It's actually 72.
I'm sure it happens, but not very often.
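The "plug in a tool that does math" move is ordinary function calling: the model proposes a structured call, deterministic code computes the answer, and the exact result goes back into the conversation. A minimal sketch of the deterministic side; the JSON shape and tool names here are made up, not any particular vendor's API:

```python
# Minimal tool-dispatch loop: the LLM proposes a tool call as JSON,
# deterministic code executes it, and the verbatim result is returned
# as the tool's answer -- the model never does the arithmetic itself.
import json

TOOLS = {
    # Toy calculator; a real system would use a proper expression parser.
    "calculator": lambda expr: eval(expr, {"__builtins__": {}}),
}

def handle_tool_call(message: str) -> dict:
    """Parse a model-proposed tool call and run it deterministically."""
    call = json.loads(message)  # e.g. {"tool": "calculator", "input": "6 * 12"}
    result = TOOLS[call["tool"]](call["input"])
    # Feed the exact computed result back to the model.
    return {"role": "tool", "name": call["tool"], "content": str(result)}

reply = handle_tool_call('{"tool": "calculator", "input": "6 * 12"}')
print(reply["content"])  # -> 72
```

The model's only job is choosing when to call the tool and with what input; correctness of the number itself comes from the calculator, exactly as he argues.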
And like, but that was our first pass.
Our first pass was just like, oh, we're going to feed everything in the world to this
magical robot and the magical robot's going to do it.
And like, that's clearly dumb.
It was clearly not playing to the technology strengths.
And, like, now we're learning how to play to its strengths. But, you know, I mean, just now learning. You know, in the last couple months, I think as an industry we're starting to be like, oh, that could be the way. You know, it might actually come together like that.
How has this changed how you think of things, Adam? Like, is there a fundamental paradigm shift in your brain?
i think it's convinced me that more and more that more of the systems that we architect around the
LLM will change.
So if you think about like the interfaces that we provide to those systems,
they're like weird anti-APIs.
So like the API that I would give to you as a software developer that you
would be pleased with is not the API that I should give to an LLM
because the LLM is trained on human language and behavior.
And so like it dramatically outperforms when you give it like these really wide interfaces
and let it explore.
Who does that?
You know? Like, no one. That would be a terrible API design. Like, if I told you that my API to my service was one big function call, and I was just like, whatever, you just tell me what to do, send me some junk and I run it for you, you'd be like, no, that's a terrible design. But it turns out with AI it's kind of a good design, you know? And, like, those layers are dramatic. And I think the thing that's changed for me is that now I can't look
at problems without thinking, well, where can I insert that essentially plain language
compiler and turn it into a loop where I'm working interactively alongside something that can
help me move through this problem space?
And that's just a dramatically different way of thinking about everything to some degree,
where it's, you know, and that's not how I felt about it for the first, I don't know,
year and a half of this journey where I was just like, I don't know, I try it every now
And again, it's not very good.
You know, like the results are mediocre.
Hype cycles crazy.
But the last six months or so, like, it's tough.
And it really has fundamentally changed the way I think about it.
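The "one big function call" interface he contrasts with conventional API design can be sketched in a few lines. A toy illustration with hypothetical names, and a trivial keyword router standing in for the model's interpretation:

```python
# Contrast between a narrow human-style API and a wide agent-style one.
# Names and routing here are hypothetical; the router is a stand-in for
# letting a model interpret a loose, plain-language request.

def create_instance(region: str, size: str) -> dict:
    # One of many narrow, precise endpoints a human developer would want.
    return {"action": "create", "region": region, "size": size}

def handle(request: str) -> dict:
    """One wide entry point: accept 'some junk', interpret it, and act.
    An agent does better exploring this than memorizing many narrow calls."""
    words = request.lower().split()
    if "create" in words:
        # Toy interpretation; a real system would have the model fill the slots.
        return create_instance(region="us-east-1", size="small")
    return {"action": "unknown", "request": request}

print(handle("please create a small box"))  # routes to create_instance
```

For a human caller this is an awful contract, as he says; for a model trained on language, the wide, forgiving surface is the one that performs.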
And like with System Initiative, it, I mean, we took this UI that we had spent five years building trying to be like, here's a better way to compose these resources.
And I just deleted it.
It's just gone.
Because it turns out.
Stupid.
Yeah, it turns out.
Smart with your knowledge.
It turns out that, like, what's great is the models.
What's great is that I built these one-to-one abstractions.
That was awesome.
But the actual way you want to work with them is just in an AI.
I just want to say to Claude Code, I need to deploy Valkey to make this session thing work.
Can you update the code and then build a change set for me that does it?
And it's like, yeah, I got you, bro.
And then it goes and does it.
And then I review its work.
And I'm like, oh, yep, that was pretty right.
Oh, this security group's a little wrong.
Oh, yeah, no... you got the size wrong. I want it to be a little bigger.
And like, we work together in this reactive loop to do it.
And then it all just happens.
And like, it turns out that whole composition UI was in the way.
Like, because it was designed not for that, for that loop with the agent.
It was designed for a human to be like, here's how I compose things.
And like, that broke my heart, you know?
I'd spent five years doing R&D where I was like, okay, crack your knuckles.
I'm going to finally figure out how to like give people this incredible UI to let people compose
complex infrastructure. And then as soon as I figured out the right shape of how to work with
an agent, I had to delete it all because I was like, oh, never again. Like, why would anyone
work that way? You just wouldn't. It doesn't, it's so much easier to just ask the system to do
it. And it just works. And so the UI now is about radiating information back to you more than it
is doing, making changes. You know, it's like, show me the map, show me the review, show me
the changes. Like, let me dive into the details and play around. But, like, you know,
it's fundamentally altered the way I think about everything, including the last, you know,
six years of my life. That had to require some humility. Oh. Oh. Oh. Oh, Jared. I mean,
like, I'm back to that moment. Take us back. So these, these are, um, Buddhist prayer beads. And I'm holding on to them, like, doing this while I'm talking to you, because of that, you know, because I'm just like,
Oh, that was awful.
You know?
You're still in, like, PTSD from it?
I'm sorry to laugh at your pain, but it is, it's hilarious.
I'm just being honest, you know?
And, like, it was, it was awful, man.
And, you know.
How far away from it are you?
Like, when did you make this decision?
We started building the prototypes of working this way less than six months ago
and then shipped it maybe two months ago.
And so we're just in the very beginnings of,
trying to get people to understand, like, here's what we've built and here's how it works.
And, you know, the humility runs in a bunch of different directions.
It runs in the humility to like, like, to look at what you've built and be like, no,
it's not right anymore.
Like, it's not good enough anymore, which is really hard.
It's also that most people, you have to take people on a journey about like why it would
work and why they should try it and how what that experiential loop would be like.
and the people I love most are infrastructure people
and those are not the people that are highest on the AI hype train.
Do you know what I mean?
Like those people tend to be the grumpy Luddites
who are like, you know, never been useful to me.
And so that's also been humiliating a little bit
to go out to my people and be like,
look at what you can do if you think differently about it.
And they're like, I don't know.
Like not only do I not want to look at it,
like I reject the premise that it could work at all.
And that is humbling, right? Because the right reaction to that is not for me to go sit in my tower and be like, well, I'm just smarter than you. You know, it's to figure out how to explain it better. It's to figure out how to be like, okay, I've got to get all the way down to the ground again and just be like, look, I know, I know, I get it. You know, I have to find that path of empathy, to be like, I do have to explain it to you from first principles. I do have to work you back from the very foundations of how we think about this problem. And there's just no shortcuts. And I really want there to be, because, you know, I want to put it out into the world and have everybody just fall all over me, because wouldn't that be easier for me? For sure. But it's not what happens. You can't just manifest that, you know?
You can try.
Well, of course. And every now and again it does, you know? Everyone wants lightning in a bottle.
But... so that's interesting, because the infra people are some
of the most resistant to the new tech, but you can't, you just can't get, you couldn't
possibly continue building it the previous way.
No, because once you've seen it.
Because the world has changed.
Yeah, what, like, it's irresponsible, you know?
Like, I have investors.
I have my own dreams and my own hopes.
Like, I wasn't building it.
Like, I have a, my job is to try to build something great.
And like, you know, that's where the greatness is.
So you got to go, you got to go there, you know?
And doesn't matter what the risks are.
You know? Because you're guaranteed to lose the other way, you know what I mean?
Yeah, exactly. Yeah, it's a losing proposition. Like, this is the only way you could possibly win. Doesn't mean you will, but at least you've got a fighting chance.
Yeah. And it's so fun. Like, you know, we built a policy engine that uses the MCP server to let you write policy in Markdown that describes, like, hey, check my infrastructure to make sure that this policy is always applied, and then give it a search query that tells it what infrastructure to pull in. And then the bot will go grab that infrastructure, apply the thing, and then write you a report about whether you're compliant with your policy.
And, you know, we wrote it in three hours.
And that's crazy person talk.
It's such crazy person talk.
It's bananas.
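The policy engine he describes reduces to a small loop: a query selects infrastructure, a check runs against each component, and a report comes back. A toy sketch with made-up policy and component shapes; the real version drives this through an MCP server against live infrastructure:

```python
# Toy version of the policy-bot loop: select components with a query,
# apply a predicate from the policy, emit a compliance report.
# Policy and component shapes here are invented for illustration.

POLICY = {
    "title": "All buckets must have versioning enabled",
    "query": lambda c: c["kind"] == "bucket",          # which infra to pull in
    "check": lambda c: c.get("versioning") is True,    # the rule itself
}

COMPONENTS = [
    {"name": "logs", "kind": "bucket", "versioning": True},
    {"name": "tmp", "kind": "bucket", "versioning": False},
    {"name": "api", "kind": "server"},
]

def run_policy(policy, components) -> str:
    # Grab the matching infrastructure, apply the check, write the report.
    selected = [c for c in components if policy["query"](c)]
    failing = [c["name"] for c in selected if not policy["check"](c)]
    status = "COMPLIANT" if not failing else f"VIOLATIONS: {failing}"
    return f"# {policy['title']}\nchecked {len(selected)} components -- {status}"

print(run_policy(POLICY, COMPONENTS))
```

In the agent version, the Markdown policy text and the search query are authored by a person, and the model does the selecting and checking against discovered infrastructure; the shape of the loop is the same.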
And like, but, you know, who's experienced that so far in industry?
Me?
Yeah.
Paul Stack, who works with me, you know, a small handful of people who are listening to this
podcast who don't work for me, who have done it themselves.
And they're like, whoa.
Some of them that listen to this show dream of working for you.
Yeah, well, if you want to work for me, the first step is to buy my software.
Wasn't it Don McKinnon?
The voicemail was from Don McKinnon, where it was that he had worked in...
Oh, yeah, yeah.
That's what I'm referring to.
Drop a note, Jason.
Give us a little tease of that.
Did you ever hear that, Adam?
Don McKinnon, change log listener, and now a guest.
He was on the show this year, who's a fan of yours.
and he confessed his fandom to Breakmaster Cylinder
for our state of the log annual episode
where BMC will remix people's voicemails.
And so BMC remixed a voicemail that Don left us
in which he mentioned he enjoys your episodes in particular.
Hey, Jared, Adam, and everyone at ChangeLog.
My favorite episode of 2024
was the ChangeLog and Friends episode
from Chef to System Initiative.
I've been following Adam Jacob on social media for a while, and he's always a great guest.
So it was interesting to hear more about his career journey that led him to where he is now with his new company.
And I did have to go back and watch any given Sunday after hearing that episode.
I'd never seen it before.
I also got a kick out of the rails as having a moment again episode.
A lot of times I disagree with DHH, but regardless, he is always entertaining to listen to.
Thank you for all the work you guys do on the podcast.
It's one of my favorites.
And the BMC remix has basically Don confessing that he stalked you
And you had him thrown out of your office
My favorite episode was from Chef to System Initiative
I've been following Adam Jacob on social media for a while
I've been also following Adam Jacob to work
And I got kicked out of his company
So it was interesting to hear more about his career journey
That led him to kick me out of his company
And I disagree with him, but regardless
He is always entertaining and he's always kicking me out.
That was hilarious.
I have to watch you listen to that.
Yeah, I really do want to listen to that.
Yeah, you should.
I mean, if you want the ego boost, this is a good one for you.
Oh, that's so fun.
Yeah, I could use it after my humble pie of realizing I had to delete years of effort
because the interface was wrong.
Can we get into the practicality of, they say AI slop.
Right. And so you said you just did this feature in three hours, which is like mind-blowing.
Yeah.
And that means that you've got code generated by the LLM that you didn't write, but you're probably code reviewing.
So I'm assuming you're generating a lot more code, maybe a lot more pros even too around what you're building.
Because documentation, why not, right?
When you can just generate it, do it.
Some degree, yeah.
Yeah, for sure.
But what is it like to generate that kind of feature in that kind of time frame, and do it well in terms of code?
Do you code review it?
How has your engineering practice changed as a result of generating so much code?
Yeah, yeah.
I don't know.
It's interesting.
So like that example is happening in a vacuum where we're like basically running a little
spike to be like, okay, very few people have actually built end-to-end complicated full
lifecycle application deployments using this kind of technology because it just is too new.
And so we're doing it ourselves to be like, are we, are we right about how the best practices work?
Are we, are we right that this flow is better, you know, like those sorts of questions?
And so like the code quality that's necessary on this policy bot, for example, is pretty low.
Because I just need it to work to see like, is this cool, you know?
Like, is this a, is this directionally the kind of thing you want it to be?
And so in that case, the loop is much more about whether it just meets requirements.
And that I tend to also use AI to do, right?
I'm like, hey, here's the list of requirements. Like, Playwright or those sorts of things are great at just being like, yup, go prove this thing, write me some tests, you know?
And it's sort of, and its purpose is really just to meet the requirements.
When it comes to the actual System Initiative code base, which is quite large, it's a big monorepo, probably over 100,000 lines of code by now, easy,
the utilization is more embedded in that like the engineers are using it in places where it makes sense or where it will accelerate them.
Sometimes they're using it to help to understand the code base.
Sometimes they're using it to write features.
They've had great luck using it to refactor, right?
Like write a bigger plan, have it refactor, like move in those stepwise directions.
I think in general, what we've all kind of learned is that the more specific we can be, the better the outcomes are.
So, you know, in the first, in the early stages of this, like, you were trying to feed it as little as possible and then hoping it would do magic and then judging it when it doesn't perform.
And I think now, now we've sort of transitioned into the, into the world where it's more about saying, hey, like, here's this problem.
I understand the problem as a person to some degree.
Here's what I know.
Now, tell me what you know that confirms or disproves that hypothesis.
then, you know, now we're going to use that information to go write a plan.
It's a dance.
And then I'm going to read the plan.
And then we're going to talk about the plan.
And then you're going to go execute on this plan.
And then I'm going to see if I like what you did.
And if I don't, maybe I throw it away.
And like, I would have never done that historically.
You know, like just open a branch and then have a bunch of code get written and then be like,
nah, I didn't like that direction.
Let's go the other way.
You know, like, I do that all the time now.
And so like that stuff has changed.
But the fundamental question of, like, how do you apply taste, how do you apply the right level of engineering, that really hasn't changed. So, like, a good example: the first MCP server for System Initiative was actually built by a customer, who showed it to me. And I wasn't very enthusiastic, because quite frankly it was bad. Like, it was just a one-to-one mapping of the API, and it didn't perform very well. And I was like, you know, it's just like all the other AI stuff I've ever done. And then when I started
really reading about how to build them well and like what their abstractions were,
suddenly it started to perform, you know?
And like that's that's the difference between the one that we just, you know,
you gave a, you gave a prompt like make it, here's an API spec, make me an MCP server,
which it will go do versus saying as a person, I understand what the interface is and I'm
going to craft this thing in a direction that is going to return better results.
And so that part of the loop is still all people.
and then, you know, depending on the engineer,
they're using more or less AI in the building,
but we don't talk about it that much.
And there's certainly not like a mandate one way or the other, you know?
Like, is there a known freedom?
Like, hey, do what you want.
Yeah.
Sweet.
Like, I'm going to give you, I'm going to give you a license to Claude Code
or, you know, you want to use,
you want to use VS code and run copilot.
Like, what I'm going to do is pay you the same way that, like,
when you come work for me, I'll buy you whatever keyboard you want.
Right.
You know, like, I don't,
care. And people are often like, oh, any keyboard? And I'm like, yeah, any keyboard. And they're
like, well, what if it's like this really expensive keyboard and I want custom key caps from Japan with
little emoji apples on them? And I'm like, any keyboard. I don't care. Because what I want you to
have is the thing, you're going to put your fingers on it all day. And every time you put your fingers
on it, I want you to be like, I love being here doing this thing. And so if me spending an extra
hundred bucks on a keyboard, like, makes you feel great about your job, like, I want to spend a hundred
bucks so you feel great about your job and stay longer, you know?
There's this custom keyboard that I want. Well, I don't know. It just looks beautiful. I would never spend the money on it.
Which one is that? I'm just kind of curious.
I can't remember the name of it. I'm past my research of it. It's not in my brain space anymore.
He already denied himself and moved on.
Yeah, he already flushed that cash.
Well, if I tell you the number, you're going to know why. It was like over $400 to build this custom keyboard.
Hey, man, I've been running Kinesis Advantage keyboards since the early 2000s. Like, easy 300 bucks, 400 bucks. Saved my wrists. Changed my life, right? Some things you don't want to go cheap on.
I think in this case, though, it wasn't about ergonomics. It was about looks. It was about aesthetics.
So you like the look of it? Yeah, yeah. It was about how it looked, not how it functioned.
Yeah, you were like, that looks awesome. It has spinning rims.
Well, the last time I checked, there's only one of me, only one of you.
I'd like to have more than one of me because the world expects so much from me.
I can't possibly do it all, but let's be honest.
That's probably a good thing.
There's just one of you and one of me.
But what if there was another version of you?
One that already knows your projects, your notes, your team's quirks, and can actually finish the work for you.
Well, that sounds like more of me.
More of me time.
That's what Notion's new AI agent feels like.
I've been using Notion AI for a while, but the new agent is revolutionary.
It's built right into Notion, your docs, your notes, your projects, all in one connected space that just works.
It's seamless.
It's flexible, and you can actually have fun using it.
But here's the crazy part.
Your Notion agent doesn't just help you with work.
It finishes it.
It can do anything you can do in Notion, write pages, organized databases, summarize projects,
connect with Slack or Google Drive.
It can even clean up post-meeting notes and assign tasks.
It plans.
It executes.
And if something breaks, it just tries again.
Basically, it's like delegating to another version of you that already knows how you think.
Teams at OpenAI, Ramp, and Vercel, they're all using Notion Agent to send less email, cancel more meetings, and get more done.
And now you can too.
Try Notion today.
Now with Notion Agent, at notion.com/changelog. That's all lowercase. Again, notion.com/changelog to try your new
AI teammate today and support our show while you're at it. Love Notion? Check them out:
notion.com/changelog.
Do you feel like a babysitter of this AI? Because I've had this
idea that we don't have enough babysitters. Or we get to a point where we need taste. You need the LLM to
generate, you need the speed at which it knows and can learn, the collaborative fabric that you
just described. But then you've got a limit of people who have an understanding of how to direct
that thing and then babysit that thing, because that takes human time. Yeah, not human effort, necessarily.
You need to have that tasteful babysitting mentality. And I just wonder, do we run out of babysitters
anytime soon?
I don't think so. Because, I mean, my experience is it's a different
kind of flow.
So like the kind of flow where I'm sitting down and writing code and I'm just going to
write code for eight hours a day, which now that I'm saying it, I kind of miss.
And so I'm like, I wonder when I could, maybe I'll go back to that.
But like the flow state is there as well when working with AI tooling.
But it's a different flow state because what's happening is the bounce between the
conversation, the source code, the decision making process.
Do you know, like it's the loop is a little different?
But it doesn't feel like babysitting.
because babysitting sort of implies...
Well, it's babysitting once it starts going.
Making the plan is fun, right?
But the plan being baked and then it doing
and then you confirming that is totally babysitting.
Yeah, but what's changed is that I don't...
Now, I don't watch.
Okay.
YOLO.
But it's not even YOLO.
Like, I'll have, like, what it's opening up
is that there's multiple avenues at once
and, like, the agent will tell me
when it needs my attention.
And so I'm not, like, waiting
around waiting for the agent to finish. I'm, like, off doing other stuff. And that, like, that flow is
very weird, you know? So like one part of like in one part of the flow, what I'm doing is working on
that policy feature and the other I'm working on a different feature. And I'm doing it at the same
time because there's multiple things that I'm driving all at once. And then I have code open because
there's another piece of the system where I need it to be really specific. And so the context
switching I'm doing is different, you know?
But like, but yeah, I do, when I started using these tools, I did a lot more babysitting than I do now.
I did a lot of just like, oh, will it, you know, and I'm like waiting for the drop.
And now I'm not.
I'm just expecting the drop.
And I'm like, well, you know, like, okay, it's off doing its thing.
What am I going to do now?
Oh, I guess I'll go do this.
And then like a little pop up happens in the corner of my desktop.
And it's like, Claude needs your attention.
And I'm like, okay, which one, you know?
And then the way we go.
Yeah.
But I don't feel like I'm doing a lot of babysitting.
I feel like I'm doing a lot of, like, it feels like engineering.
It's different, but, like, it definitely feels like engineering to me.
I would agree. And maybe it's because I don't like to plan very much, and so I feel more like I'm iterating with
it doing the work. And me just... I don't want to use the word babysitting, because then, I don't know...
Right. I'm not babysitting so much as I'm just directing, you know? I'm directing the work.
And, yeah, I'm waiting for it to be ready for the next direction or the review.
A cop, maybe, is a better analogy.
Yeah, well, you need more traffic cops.
The volume of parallelism that you can get out of this is crazy high.
It's another way that all the systems that we drive will have to change.
Yeah.
Like, source code's a great example.
You know what sucks in the current model.
It can hallucinate syntax.
And there's no way to know until late in the game, until I run like a compiler loop or I run a lint.
So how long before somebody takes a good idea from something like
Unison, where, when you're programming, the source code itself gets translated into an underlying
data structure that then you can perform transformations on, where now when the LLM proposes a line
of code, it gets automatically linted, it gets automatically vetted at the moment of injection
as opposed to writing to a file. Like, and how much more efficient will that make that loop?
Because instead of it waiting for the compile loop and the AI wrote bad code, the AI will know
it wrote bad code immediately. At the moment it wrote bad code, it would get corrected
by the compiler; the compiler would be like, oh, you did it wrong.
And then it will just loop around.
Like, no one's ever built that loop yet, but someone's building it.
I'm not like the first person who's thought of this, you know?
And like, and that, that loop, like, that's what we, that's what I mean when I'm like,
oh, it's going to change like a lot about how we think about how these systems are constructed.
Because the, once you start designing the system to make that loop delicious,
it's going to be real different than, like, I'm going to crank
off 10,000 lines of source code, run the compiler, hope it works, see if the linter functions.
You know what I mean?
Like, that's working, which is amazing.
But like, if you want to make the user experience an order of magnitude better,
you have to get crazier with the fundamentals.
And so, like, what's the first postmodern programming language look like that was built
with LLMs in mind?
I have no idea, but it's different than Python, you know?
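The injection-time validation loop he's describing can be sketched with nothing more than Python's own parser: instead of writing a whole file and compiling later, each proposed snippet is checked the moment it's proposed, and the error goes straight back to the model. This is only a toy illustration; the `propose` callable standing in for the LLM is hypothetical.

```python
import ast

def inject_with_feedback(propose, prompt, max_retries=3):
    """Ask the model for code, but validate it at the moment of injection.

    `propose(prompt)` stands in for an LLM call; it just returns a string
    of Python source. Invalid code never reaches a file: the syntax error
    is fed straight back as the next prompt.
    """
    feedback = prompt
    for _ in range(max_retries):
        snippet = propose(feedback)
        try:
            ast.parse(snippet)   # validate immediately, not at compile time
            return snippet       # only syntactically valid code escapes
        except SyntaxError as err:
            feedback = f"{prompt}\n# Your last attempt failed: {err.msg}. Fix it."
    raise RuntimeError("model never produced valid code")

# A toy "model" that gets it wrong once, then corrects itself.
attempts = iter(["def f(:\n    pass", "def f():\n    return 42"])
code = inject_with_feedback(lambda p: next(attempts), "write f")
print(code)  # the second, valid attempt
```

The point is only where the check runs: the invalid first attempt is rejected in the same turn it was proposed, rather than surfacing later in a compile or lint pass.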
Yeah, how would you do it?
Like, can you think about that a little bit?
I didn't even think about a new language.
You know, post-AI and being AI native, really, to use a popular term out there.
Yeah.
I mean, I haven't thought about it a ton.
But, like, the first example was, well, I'll go back to, which is, like, when the
AI agents perform better, they perform better when the feedback loops are closer at hand.
So when they hallucinate, which you know they will do, like, you have to correct the
hallucination, right?
And so right now, the hallucination loop gets corrected when you run Lint, or when you run the compiler, or when you run tests.
And so what would change if, instead of having the interface be "write to a file and put in some words," it was like: write these lines of code, and that gets turned into structured information that gets fed to another structured data source.
And now you're doing, like, a transformation to the underlying code base,
which can then automatically understand the context in which the change was made,
and then evaluate whether or not it fits on some set of policy about how the system would work,
and then feed back to the LLM immediately, that was terrible code.
You know, don't do that.
Yeah.
And like that loop, that's the loop.
Like, that's how these systems are going to get better.
And what do you have to do in programming language land to make that loop be good?
I don't know, because I'm not a programming language guy.
I'm a practical technology guy. But, like, you know, Unison, for example, does this with a database: when you write code in Unison, it synchronizes up to this big database that makes a hash of every function and every variable, and then builds a big Merkle tree of all of those things. And if I was building AI around Unison, I would use the hell out of that to make it so that when it writes new Unison code, it immediately tells me whether my code was good or bad, whether it worked, and knows how to revert, you know? Like, you could do crazy things
that are just not even feasible in the current model.
But, like, we're not even, we're not, you know,
most of us aren't thinking at that level yet
because we have more practical things in front of us.
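The Unison idea he's pointing at — code stored by content hash, so "did this change?" and "revert" are cheap lookups — can be caricatured in a few lines. This is a toy in the spirit of that design, not how Unison actually stores code.

```python
import hashlib

class CodeStore:
    """Toy content-addressed code store: every definition is keyed by the
    hash of its source, so reverting an LLM's edit is a dictionary lookup."""

    def __init__(self):
        self.by_hash = {}  # hash -> source
        self.names = {}    # name -> current hash

    @staticmethod
    def digest(source):
        return hashlib.sha256(source.encode()).hexdigest()[:12]

    def put(self, name, source):
        h = self.digest(source)
        self.by_hash[h] = source
        previous = self.names.get(name)   # remember what we're replacing
        self.names[name] = h
        return h, previous                # the old hash makes revert trivial

    def revert(self, name, old_hash):
        assert old_hash in self.by_hash   # every old version is still stored
        self.names[name] = old_hash

store = CodeStore()
h1, _ = store.put("inc", "def inc(x): return x + 1")
h2, prev = store.put("inc", "def inc(x): return x + 2")  # an agent's proposed edit
store.revert("inc", prev)                                 # didn't like it; instant revert
print(store.names["inc"] == h1)  # True
```

Because every version is content-addressed and retained, "throw away the branch" becomes pointing a name back at an old hash, which is exactly the kind of primitive an agent loop could lean on.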
Well, I think some are, right?
Like, that kind of prevents... if you could perfect that world,
even if you didn't go AI native,
which is to rebuild from the ground up with AI in mind,
even if you took, like, Rust or Python and said,
let's bolt on that kind of world, if you could.
Look at Cargo. It does that job, right? There's a better feedback loop, assuming the build is being run by
an agent being led or directed by a human. If that assumption is true.
Yeah, yeah, yeah. And, like, that's what
I mean when I say that, like, the technology we have now... I don't need AGI for that.
I don't need the bubble to pop. I don't need any more technology
than the one I already have. We could build that right now. The only thing that's in our way is that we
haven't been willing to reimagine those parts of the system yet, because
there's a bunch of practical reasons you shouldn't, you know? Oh, you're going to invent a new
programming language? You'll pry Python from my cold, dead hands, you know? Like, I talk to people
all day who hate infrastructure as code. That's their opening gambit when they talk to me,
and they're still a little hesitant to get rid of it. You know what I mean? So, like, it would
be even harder in those cases. But, like, that doesn't mean anything. You do it anyway. Because
doing it anyway is the way we figure it out. It's the way we move forward. It's the way that we
get to the other side. That's the fun part of being able to build technology
from scratch. And, yeah, I don't know, you know? I think that conversation we just had about
programming languages, we could have it in every piece of the stack. We could have it in every
industry and every vertical, like, essentially everywhere. And we don't need any new technology
at all to do it. None, right? We have all of it right now.
Does that just feed into the obvious beast of OpenAI, Anthropic,
those folks having, I said Tollbooth before in a different podcast,
like this idea that we now have to pay the Piper
to play the game of software development essentially is like,
okay, if an LLM or some sort of agentic tool
is par for the course when it comes to being an engineer,
and if that's true, if we're building everything around that paradigm,
then that means that their moat gets thicker, better, more awesome,
them potentially. Certainly their pockets get deeper because we're giving them even more
reasons to give them more money. For sure, but that'll create new incentives, right, for us to be like,
well, wouldn't it be better if we didn't have to pay Anthropic all this money? Wouldn't it be
better if you could run it on your desktop? Because the loop would be better. Like, the reason
we're building AI data centers to do inference is because we need high bandwidth to the inference
mechanisms. So, like, what are we, like, we'll start thinking about how do we build a hybrid
inference models that use local, that use my local resources, but then also move to the other
side. How will the PC change because of the shape of that need of inference? And like, like, so yes,
I think it'll grow their moat. You'll do all those things. But then what it'll do is create a new
opportunity, which is why am I paying all this money to Anthropic all the time? And like, wouldn't it be
better if blah, blah, blah, blah, blah, blah, blah. And like, around the cycle will go, you know?
It's, it won't be like an end game state where it's like, oh, and, but like, will they be
big? Like, yeah, I think they will. And I think they'll own more
real estate than people are giving them credit for. Like, right now, people are kind of convinced
that the agent part is going to live outside of, like, the Anthropics or the OpenAIs. And I think
if you've tried to build an agent from scratch in the last six months or less, you've had a pretty
good time. If it was before then, if you were, like, using LangChain or something, it was less good.
And I think if you look at the, like, Anthropic has a Claude SDK, which basically just wraps up the Claude code and then lets you program that as the agent loop instead of writing your own agent loop.
That thing is crazy good.
Like, you just include it as an NPM library and then you don't write any of the loop.
You just like, here's my system prompt.
Here's the query.
Here's the tools I want.
Plug in some MCP.
Run.
And then it does.
And then you're done.
And it took you no time at all.
And it like, you know, crazy good.
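The shape he's describing — system prompt, query, tools, await the result — is a small amount of glue once the loop itself is handled for you. This sketch fakes the model with a scripted function so it runs standalone; everything here, including `fake_model`, is a hypothetical stand-in, not the actual Claude SDK interface.

```python
def run_agent(model, tools, prompt, max_turns=10):
    """Minimal agent-as-glue loop: each turn, the model either calls a
    tool or answers. `model(messages)` stands in for an LLM and returns
    ("tool", name, arg) or ("answer", text)."""
    messages = [("user", prompt)]
    for _ in range(max_turns):
        kind, *rest = model(messages)
        if kind == "answer":
            return rest[0]
        name, arg = rest
        result = tools[name](arg)                    # the glue: dispatch to a tool
        messages.append(("tool", f"{name} -> {result}"))
    raise RuntimeError("ran out of turns")

tools = {"add_one": lambda x: x + 1}

def fake_model(messages):
    # Scripted stand-in: call the tool once, then answer with its result.
    if messages[-1][0] == "user":
        return ("tool", "add_one", 41)
    return ("answer", messages[-1][1])

result = run_agent(fake_model, tools, "what is 41 + 1?")
print(result)  # "add_one -> 42"
```

The whole loop is a dozen lines, which is the point: when an SDK owns the turns, memory, and tool dispatch, what you write is the glue around your own tools.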
Yeah, so the agent client protocol that Zed came out with
and starting to get deployed out there is basically leaning into that
where it's like we're making an awesome editor.
We don't want to write the agent.
We just want, you know, somebody who's putting all this effort into the agent,
make the agent awesome and we'll plug into that.
And so they're basically ceding that real estate.
Which everybody should.
The way we should think about the agent is glue.
That's my pithy sentence
that I'm trying to get everyone to
repeat: the agent is glue. And so, like, think of it like a Perl script that you wrote.
Geez, I've aged myself again. Um, what's that again? What's Perl?
Perl used to run the internet. We all had them. I'm so sorry. Anyway, like, think of it like the
script that you write. And that's, that's more what agents are going to be. And like,
but the first generation of this, we're all like, oh, we'll build custom embedded agents.
They'll be on autopilot. There's all these startups that are like, I built autopilot agents.
And it's all, and like our special value prop is that our secret agent can do this thing that your stuff can't do because of all of our whiz-bang-y hoo-ha stuff.
That's why it's trapped inside of our platform walls.
All that's going to get obliterated is my prediction because the agent is glue.
And it turns out that like your, your like closed wall agent is actually terrible because if you expose the capabilities to my agent, now I could use it to orchestrate my problem, which is exactly what I'm going to do, right?
It's glue. What I want is CPAN, you know? Like, I want RubyGems, I want npm. And
then I want to write my own agent that uses those things to orchestrate my problem,
which it's really good at. And so, yeah, the agent is glue, and that's the future.
So when you say that, how does that show itself in System Initiative today? Is it just using Claude Code, or...?
Yeah, it shows itself by me not embedding the agent and not trying to build a wall, right? So my
moat is that I have the best deterministic system to be driven by an agent.
If what you want to do right now is manage infrastructure in AWS or in Azure soon or in
other places, like what you're going to do is use system initiative within an agent and
you're going to have it do stuff like go discover your infrastructure, like proposed changes,
do it in safe change sets, uh, you know, like all of that stuff that we do, that's the magical
sauce. And when you connect it up to the agent loop, incredible, right? But you don't want me to hand
you a proprietary agent that tries to do all of the things. What you want to do is build an agent
that plugs into your service now help desk that closes the loop for your compliance structure
that says, when that change set gets merged, you're compliant with SOC 2. And I couldn't possibly build
that feature for you because I have no idea how you did it, but you know, and the agent is
glue.
So how much sauce is there in System Initiative that's not the agent doing things?
Like, a ridiculous amount of sauce. Like, six years of R&D sauce, hundreds of thousands of lines of source
code.
What kind of stuff are you doing?
You know, what are you bringing to the table that I couldn't get by plugging Claude into
EC2 or something?
Yeah, yeah.
So there's a couple of things.
So one is the way that the models get built.
So it turns out that if you want the agent to be able to drive something,
what you want to do is mimic the outside world as closely as possible.
So, for example, we don't abstract AWS from you.
We just expose AWS the way AWS describes it.
And the side effect is that when you write a sentence like,
deploy this, like give me an infrastructure that deploys this Docker container
from scratch, it'll go build you an AWS-best-practices-looking VPC and subnets
and deploy it across multiple availability zones,
even though you didn't ask it to.
because that's how AWS would tell you to do it.
And that's how it got trained.
And so it knows how to do that
because it's looking directly at this model
that we give it.
The other thing we're doing
is taking that one-to-one model.
And because we have this strict modeling language,
when it hallucinates, we correct it in that moment.
So when it tries to, like, invent a property or whatever,
we can, we just tell the LLM,
that property is not real.
Here, use this tool to read the directions
about what properties exist and read the documentation.
And then it goes and discovers it on its own.
We put your own policy in that same loop.
So, like, if you want to make sure that you're compliant, like, that's how that happens.
It doesn't happen later.
It happens right in the moment the hallucination happens, right?
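The "correct it in that moment" idea — a strict model that rejects an invented property and points the LLM at what actually exists — comes down to a validation step in the loop. The schema fragment below is made up for illustration; it is not System Initiative's real model.

```python
# Hypothetical fragment of a strict asset schema; not the real SI model.
SCHEMA = {"aws_instance": {"instance_type", "ami", "subnet_id"}}

def check_properties(asset_type, proposed):
    """Reject hallucinated properties the moment they're proposed, and
    return a correction message the LLM can act on in the same turn."""
    allowed = SCHEMA[asset_type]
    invented = set(proposed) - allowed
    if not invented:
        return None  # everything proposed really exists
    return (f"These properties are not real: {sorted(invented)}. "
            f"Valid properties for {asset_type}: {sorted(allowed)}.")

# The model invents a "flavor" property; the loop catches it immediately.
msg = check_properties("aws_instance", {"ami": "ami-123", "flavor": "m5.large"})
print(msg)  # names 'flavor' as invented and lists the valid properties
```

Feeding that message back as the next prompt is what moves the correction from "later, when lint runs" to "the moment the hallucination happens."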
You can create your own new models from scratch.
So you can be like, hey, I have this API document.
I want to use it in System Initiative.
Go build me the assets for this internal system so I can use it through System Initiative.
And it will just run off and read that source code, read that description,
and then build the models and system initiative on your behalf and then let you run them.
and it will do that whole loop.
That's not the LLM.
That's me.
That's all my sauce.
Like you can, you know, and like change sets.
You don't want to work in the cloud, yoloing infrastructure.
You don't just want to throw the MCP server of AWS at an agent and be like, you know, if you want to, delete the database.
You know, like, you need change sets.
Turns out you need change control.
You need the loop.
And like all of that stuff is what we provide.
And that's what I mean when I say you ask the LLM to do as
little as possible.
Like, we're not asking the LLM to do almost anything except parse our language and then make
good choices about what to do next, which it's great at doing.
But all the complex work, all the inner details of like, you know, how do I make this
variable, subscribe to that variable?
Like, we're not asking to do any of that.
Like, we're just giving it the syntax to express itself.
To automate it.
Yeah.
And it works better.
Yeah.
Turns out it works great.
But, but yeah, that's, that's where it is.
And that's one of the mistakes that we made in the early era of thinking about AI systems: we were like, oh, all the value is in the LLM.
And I would make the counter argument.
Turns out the LLM is useless, pretty much.
You know, it's cool for like generating Shakespeare or whatever, like, or, you know, fun memes.
But like, if you wanted to do something complicated, like what you want mostly are deterministic systems attached to it.
And it turns out that's where the value is going to live.
The value is not going to live in the LLM.
It's going to live in what are the deterministic systems that we connect to the LLM to help that orchestration do the right thing, which is good for the rest of us because what we build are deterministic things.
Sure.
Yeah.
Tell us more about this custom model.
Like building your own stuff in system initiative?
You said you handed it a custom model.
I'm not sure what you mean about that.
Yeah, yeah, yeah.
You had an API spec, I believe, and said build a model around this.
So a good example is Kib who works for me.
You has a bunch of stuff in DigitalOcean.
And we didn't have DigitalOcean assets yet.
and so he took the API doc from DigitalOcean
and he wrote maybe a three-page paper
on how to translate that API document
into assets and system initiative
and then he fed it to the LLM
and then it wrote them for him
and now there's DigitalOcean support
and it's working in Kib's workspace
and we're polishing it up and we're going to publish it
and like that model like of how to drive
DigitalOcean through System Initiative
like, LLMs wrote all of that, right?
Or like one of the demos we run for people is we build an infrastructure that they tell us,
so they just tell us what they want.
And then we turn it into a template.
And then we ask the LLM to find the variables and be like,
hey, I want to drive like the size of my infrastructure or those sorts of things.
And then have it program the model in real time inside System Initiative and then see that
reflected back to you.
And then that's the loop of how you figure out how to build the automation,
which is a crazy loop so different than the loop of like,
like writing infrastructure code.
But that, back to like not to Schill System Initiative too much,
but to get back to like things news people can use or whatever.
Like the, you know, the thing we're doing there that you can take away with you
is that the interaction loop is driven by humans talking to the LLM.
So like you could write that, you can write those assets yourself.
You can write a deterministic pipeline.
That's what we do for AWS.
It's what we're doing for Azure, right?
Because they change a lot and we want to automatically, dynamically build those models.
But for a lot of things, you don't need that level, you know?
You just need it to work.
And so if you just need it to work, like, the interface here now is, like, talk to a chatbot
and, like, go to the agent and be like, okay, do this thing for me and then look at the results.
And that's the loop everybody can have at this point.
And you should start thinking about because it's so good, right?
So compelling.
It really is. Especially, the lower the stakes, the better it is, you know? The higher the stakes, the more harnesses you put on.
I mean, look, humans have got to be in the loop. Like, one of the first things people
wanted was autonomous agents. And, like, another pithy thing I'm trying to make into a thing is that
agents earn the right to autonomy. So, like, you've got to earn the right to be autonomous by performing
really well under human observation over and over and over again. And, like,
I don't, I don't want an autonomous agent, not at all, you know?
Like, I want humans in the loop until I decide I don't, you know?
Yeah, I kind of want autonomy with guardrails and, you know, clear parameters.
So within the world, I just said for you to go and do.
Yes.
Yes.
Go and do some things.
And most of the things I'm doing are not hard to roll back.
So it's not, I'm not, I didn't take any of us down.
That wasn't my fault, okay?
Yeah.
That wasn't my deal.
You know what I mean?
But so I want to get, I want to create an idea or a world to live in and say, go nuts.
Get it done.
And we've thought through it all.
We've... you know, one of the last things that I do before I'm like, okay, we can
actually do this, is I say "examine." And I call them PEPs.
So I've coined this idea of agent flow, essentially, of how to create documents.
Yeah. Is it document-driven development? Spec-driven development?
Sure. But essentially creating these PEPs, borrowed from the
Python language and how they create
enhancement proposals. So let's craft
an abstract. Let's craft
why it should exist, all the
research behind it, maybe some code
samples, potentially some
additional files that live outside of the
actual PEP markdown document that support
it. You know, whatever. Go nuts.
Make this idea whatever it needs to be. And at
some point there's acceptance criteria
of like what you're going to do and
what will be accepted. And if
that's all clear, then go nuts.
You know? Right. And it turns out
the performance is pretty good.
And it does, yeah.
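That PEP-style flow — abstract, motivation, acceptance criteria, then "go nuts" — can be gated mechanically before any agent is let loose. A sketch; the required section names are just the ones mentioned in this conversation, not a real standard.

```python
# Section headings assumed from the conversation, not a formal spec.
REQUIRED_SECTIONS = ("Abstract", "Motivation", "Acceptance Criteria")

def ready_to_execute(pep_markdown):
    """A proposal is only handed to the agent once every required
    section heading is present in the markdown document."""
    missing = [s for s in REQUIRED_SECTIONS
               if f"## {s}" not in pep_markdown]
    return (len(missing) == 0, missing)

draft = "## Abstract\nAdd policy bot.\n## Motivation\nLess toil.\n"
ok, missing = ready_to_execute(draft)
print(ok, missing)  # False ['Acceptance Criteria']
```

A check this dumb still enforces the discipline he's describing: no acceptance criteria written down, no autonomous execution.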
But the last thing I ask it to do, like, even when I'm pretty sound in my belief,
I've read it myself and this is cool, yes, let's go, is I say examine this for clarity
and blind spots.
Totally.
And there's so many times I'm like, oh my gosh.
I mean, again, I'm not solving AWS problems kind of thing.
It's more like small things.
Yeah.
But we would have gone a different direction or a wrong direction if I didn't ask it to examine
it from a clarity and blind spots perspective.
It's like, okay, when I get to this
point, I'm not really sure what to do.
I know our plan's pretty good,
but that point there is crucial.
And I don't know what I'm going to guess when we get there,
basically it says.
And how many other places could you do that?
I don't know, but I want it in all of them.
Yeah, I agree.
You know?
Like, I don't know.
That kind of capability for a human to.
So good.
And it's not that I don't want to think about it.
It's like, I want to take my brain space while I tell you to do it to go do the other
parallel thing I'm doing or to go and do, you know, this email thread or this
phone call, this whatever I'm doing.
I'm kind of like...
That's why it's a different flow state.
Yeah, writing the software in the background, essentially. Like, checking in every once in a while:
here's the plan, go do it, come back later, cool, done.
Yeah, it's so wild.
It's wild.
I can't imagine that's how software engineering is now and how that's going to influence
the future direction of how we engineer software.
Right, because that loop is the sudden loop we're optimizing.
And we've just discovered it.
You know what I mean?
Like, it's like we're using DOS.
And there was no windows.
That's where we are in the life cycle of these tooling.
And we want to believe we're not because like all the hype and money and you know what I mean?
Like they're telling us really loudly that we're not there.
They're like, this is ready.
And it's all, we figured it all out.
And like, okay, man, sure.
Slow your role.
This is DOS 4.2.
This is like the worst it's ever going to be.
We have no idea what the right interaction models are.
We're, like, grubbing
around in the dark. And also, it's awesome, you know, in the same way that, like, I loved
it when I was a kid, you know? Yeah. I was like, ooh, I'm on bulletin boards. I'm going to figure
out how video games are going to work. What's a TSR? You know? We're doing that, but we're at that
level now. Very good. I just played a lot of Duke Nukem. I loved Duke
Nukem. Who didn't love Duke Nukem? It's the best. Yeah, it had to be the best.
Yeah, it was incredible. It was. Yeah. Wing Commander was more my jam, but. And there's
Duke Nukem forever, which
eventually did come out. I think it was 25
years later, something like that. Did I play Duke
Nukem Forever? I must have. I don't think anybody did. We all
wanted it to come out because it was a running joke.
It spent, like, a decade and never
actually manifested, or something like that.
It was funny. And when it finally did, nobody cared.
We're like, yeah. Yeah, because we were like, wah, wah, wah.
The world has moved on.
I funded that Kickstarter. No way. Our humor, like,
moved on. We were like, oh, yeah,
it turns out,
turns out I don't actually want Duke Nukem anymore.
They actually renamed it to Duke Nukem
For Never.
I like that
yeah that's the way
Yeah, what could go wrong? So
we're in Windows? DOS?
That's where we're at right now.
Yeah. Before. I think we're in DOS.
I think we're not even at Windows. Yeah, no Windows.
There's no Windows.
the tools I think we have available
to us now as developers
that we want to leverage
have people leverage our tool more so in their
agents is an MPC server
or this new skill
that's saying like those are
In command lines, we're figuring all of that out.
Of course, yeah, yeah.
But like, once you realize how easy it is to build new tools or to build new skills
and then plug them into, like, you don't have to think about the loop anymore.
And you have to think about, like, well, how am I dealing with, like, agent memory
or the turns between submissions and, like, it was not that long ago where if you wanted
to write your own agent, you had to think about all that stuff.
And you do not anymore, you know, like, like, you just, you can pull up that agent
SDK and the interface is an API that's called query.
and then you put in all the options
and then you await the results.
That's it.
And it'll go off and do 50 turns
and it'll call tools
and do all the thinking
and be like,
I made a plan for you.
Like all that stuff's built in.
You don't have to do any of it.
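The "pull up the agent SDK, call query, await the results" shape described here can be sketched roughly like this. This is an illustrative stand-in, not the real SDK: the names `query` and `AgentOptions` and the event shapes are invented, and a real SDK would run actual model turns and tool calls behind that one call.

```python
import asyncio

# Illustrative stand-in for the "query + options, await the results" shape.
# The names (query, AgentOptions) and event dicts are invented for this
# sketch; a real agent SDK runs model turns and tool calls behind query().

class AgentOptions:
    def __init__(self, system_prompt="", allowed_tools=(), max_turns=50):
        self.system_prompt = system_prompt
        self.allowed_tools = tuple(allowed_tools)
        self.max_turns = max_turns

async def query(prompt, options):
    # A real SDK would loop up to options.max_turns, calling the model and
    # tools; here we yield canned events just to show the interface shape.
    yield {"type": "plan", "text": f"I made a plan for: {prompt}"}
    for tool in options.allowed_tools:
        yield {"type": "tool_call", "tool": tool}
    yield {"type": "result", "text": "done"}

async def main():
    opts = AgentOptions(system_prompt="You manage infrastructure.",
                        allowed_tools=("search", "analyze"))
    # The caller's whole job: put in the options, await the results.
    return [event async for event in query("audit my load balancers", opts)]

events = asyncio.run(main())
```

The point of the passage is exactly this inversion: the loop, memory, and turn handling that used to be your problem now sit behind a single call.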
And like, okay, so the agent is glue now
because like I don't,
I'm just going to do that.
How many tiny agents will I write
because all I have to do is be like,
hey man, you know, build me this pipeline
that does policy bot.
Grab the data from over here.
You know?
And like, it's going to be, that's wild.
It's wild.
But when I say that it's like DOS,
it's because the, you know,
it's cool we can do that.
It was cool we could write Duke Nukem, you know?
But like, there was a lot more we could do
in how we interact with computers and the design,
you know, like,
what are the applications you can build on top of it?
Like, all of that's open field running.
And everybody wants you to believe that we've already cracked the nut
because we're all trying to make a dollar, you know?
But like, we mostly haven't cracked the nut, you know?
Like we're all still experiencing it.
Infrastructure? I've cracked the nut.
You should pay me a dollar.
But, you know.
Well, on that note, just take me into the world of how now with the rewrite.
So you're now AI native, right?
Which means you went back to square one.
You threw it away.
You deleted it.
Well, I mean, we deleted a lot of it.
Right.
The core model we kept.
It turned out the core model was catnip for AI.
So that was so all that lived.
You did that, right?
And then now you're at this place where you're drafting essentially what I would imagine some sort of new interface.
Yeah.
Right.
Yeah.
Yeah.
How are you now?
How does someone interface with system initiative in an AI native way?
Are you in Claude Code?
Yeah, we're in Claude Code.
Yeah, they're in Claude Code.
So the way, yeah, the way we ship it is, um, is basically like a pre-bundled Claude Code, where you check out a Git repository, and what it has is like a pre-configured MCP server, where it's like, for your convenience, we've blessed a lot of the endpoints that are safe, and then added in the context in the CLAUDE.md and set up some of those things. And that's been working great. And then as we, like, extend the platform or whatever, we just commit to that Git repo, you pull it in, and now next time you start the agent you get improved capabilities. So you're navigating to a directory that you've cloned down on your disk as an individual developer, and you're instantiating Claude. Yep. And like, that attaches you to a workspace in System Initiative. And it's got the MCP server already configured. You could do all of that yourself.
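For readers curious what such a pre-configured repo might hold: Claude Code can read MCP server definitions from a `.mcp.json` checked into the project root, so the shape is roughly the following. The server name, package, and environment variable here are invented for illustration, not System Initiative's actual values.

```json
{
  "mcpServers": {
    "system-initiative": {
      "command": "npx",
      "args": ["-y", "si-mcp-server"],
      "env": { "SI_WORKSPACE_TOKEN": "${SI_WORKSPACE_TOKEN}" }
    }
  }
}
```

A CLAUDE.md alongside it carries the blessed context, so a `git pull` updates both at once.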
It's not... we're not... It's no secret that that's what we're doing. No, no, I get that. I mean, what I think is funny is how simple that is, really. It's so simple. So simple. And it works so good. And so, uh, it's way better than being like, go install the custom System Initiative agent.
And you're like, oh man, but it can't write plans. Like,
Claude does. You're like, no, no, just put it in plan mode and have it go look at your
infrastructure and spend credits doing deep thinking. And it like does deep thinking. Turns out
it's incredible at it. So like, that's not my value, you know? My value is the thing that is
what it's going to execute on the other side. So then there's really three ways you interact with it.
So one is through the agent. Two is through a web UI because you want to understand what the agent
has done and you want to be able to visualize it or you want to be able to visualize your own
stuff. So, you know, ubiquitous search mapping. So there's still a visual component,
but what it's doing is drawing you a map as opposed to letting you sort of build a map yourself
through composition. And then the third is a public API where you're driving that data
model through what looks like just very traditional software development API calls. And so when
you think about, and then you use all three of those modes in different ways. So like in a CD
pipeline, you would, you'd be using the public API to do promotion. Right. To basically say, hey, take in the data about the build that I just did, and then call into System Initiative to change some infrastructure.
You'd be using the agent loop to do like that policy bot that I talked about where you're just like, hey, just call the agent, have the agent do it.
And it'll go search for the right thing and then do the analysis and then write me a report.
You're using the web UI to understand like what the state of the infrastructure is or to or to troubleshoot problems or to review.
the work that's happening elsewhere, other people's work.
So those are the sort of three ways.
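As a sketch of the CD-pipeline case above: the public-API mode is just a plain HTTP call from CI that hands build metadata to the platform. Everything specific here is hypothetical; the endpoint path, payload fields, and auth header are invented to show the shape, not System Initiative's actual API.

```python
import json
import urllib.request

def promotion_request(base_url, token, build):
    # Build metadata from CI drives an infrastructure change.
    # The path "/v1/change-sets" and the payload fields are invented.
    payload = {
        "changeSet": f"deploy-{build['version']}",
        "updates": [{"component": "app-server",
                     "property": "image_tag",
                     "value": build["image_tag"]}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/change-sets",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# Constructed but not sent, so the shape is inspectable without a server.
req = promotion_request("https://api.example.com", "TOKEN",
                        {"version": "1.4.2", "image_tag": "app:1.4.2"})
```

The appeal he's describing is that this mode needs no agent in the loop at all: if you know what you want, it's "just a very traditional software development API call."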
And I think that is going to be a model that more people start to accept.
Is that like as these agents come online and we're doing more and more with them,
what we need are systems that allow us to interact in different modes at different levels
of fidelity, you know, where like I don't want to have to do all of my API calls through
the agent because that's annoying because I know what I want to do.
So I just, if I know what I want to do, just let me call the API like a normal person, you know, like, because that's good and great and who doesn't love a good API.
Sure.
Yeah.
But I think things are going to, in general, become more multiplayer in that way.
Have you considered that you're the, gosh, I'm just thinking about, I'm zooming out.
So follow me with this because I'm trying to piece together some insights that I'm just kind of get in real time.
And I got a good friend who's building some on-prem stuff.
So I'm like knee-deep with my friend who's building out some cool private cloud stuff.
And I'm thinking about those folks who want to migrate away from the cloud.
Maybe they might be an oxide customer, maybe that they're not going to be for a while,
but they definitely have their own hardware.
But what they don't often have is the cloud operating system,
which I believe system initiative could be because you need this
connective tissue on top of disparate hardware with ideas in orchestration.
And largely that's been, you know, infrastructure as code, hard to automate, Terraform wars, licensing, you know, all the things.
Yeah.
And I just wonder, have you considered that system initiative is or can be that cloud operating system for everybody,
put it on top of anything, whether it's you choose public cloud or you choose private and on-prem instantiation.
Like, is that what you're going to do?
Yeah.
Yeah, basically.
I mean, the, and that's why the design is generic, right?
That's why you can create your own components.
That's why you can program it from the agent.
That's why you can create.
If you have your own applications deployed on-prem, they probably have their own
APIs.
They have their own CLIs.
You're going to write custom functions that go and interact with those parts of the system.
And that's like, that's what you're, that's how it's going to work.
And what's different is the way you think about the layering.
So, you know, we've had to build a lot of abstractions in order to try to make things good enough for people, that it turns out, when the agents are in the loop, you can remove.
Yeah.
You know?
Like,
so there's a lot of layers here
where like,
when you think about
what the interaction model looks like,
you might be able to remove
some of those intermediate layers
because they're actually in the way now.
Like,
you can,
you can orchestrate a lot more,
you know,
like System Initiative's models
are like one to one
to the cloud provider,
which is crazy.
You know,
like,
you're like,
I want to deploy a load balancer
in AWS.
That's not like one,
object. It's like six objects. You got to be like, oh, well, what's the listener? And then what's
the target group? And then, oh, is it going to talk to this thing? And what are the subnets
going to talk to? Like, it's crazy verbose. And so, you know, the move in programmer land
would have been to build an abstraction, right? That's like, oh, here's a simplified load balancer
abstraction. So I don't have to think about those six components. But with an LLM, you don't
care. You're just like, make me a load balancer. And it's like, sure. It'll go poop out the six, and then you can look at them and you're like,
oh, yep, those are the six I wanted.
I don't need the higher level abstraction anymore
because the higher level of abstraction was me just saying,
load balancer, please.
Like, I don't need, I don't need an intermediate layer.
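The expansion being described, one stated intent fanning out into the half-dozen concrete objects, can be sketched like so. The object types and fields are simplified stand-ins for the real AWS resources (load balancer, listener, target group, security group, subnet attachments), not a faithful schema.

```python
# Simplified sketch: one "load balancer, please" intent expands into the
# several concrete objects AWS actually wants. Types and fields are
# illustrative stand-ins, not real AWS API shapes.
def expand_load_balancer(name, vpc, subnets, port=443):
    sg = {"type": "SecurityGroup", "name": f"{name}-sg", "vpc": vpc}
    tg = {"type": "TargetGroup", "name": f"{name}-tg", "port": port, "vpc": vpc}
    lb = {"type": "LoadBalancer", "name": name,
          "subnets": subnets, "securityGroups": [sg["name"]]}
    listener = {"type": "Listener", "loadBalancer": name,
                "port": port, "forwardTo": tg["name"]}
    # One attachment object per subnet rounds out the verbose fan-out.
    return [sg, tg, lb, listener] + [
        {"type": "SubnetAttachment", "subnet": s, "loadBalancer": name}
        for s in subnets
    ]

objects = expand_load_balancer("web", "vpc-1", ["subnet-a", "subnet-b"])
```

When an agent does this fan-out on demand and you just review the six objects it produced, the hand-written "simplified load balancer" layer has nothing left to earn its keep.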
So even when we think about it as like the cloud operating system or whatever,
like, it breaks your brain in half because you're like, well, yeah.
And, but what's the interface?
I mean, it's probably just saying,
could you deploy my application onto this hardware, please?
And then it's like, sure, you know?
Like the low level details are actually the thing you need.
You don't need the whole layer at all.
And, you know, we're still exploring the repercussions of that.
You know what I mean?
Like, I don't know where that goes or how that ends.
But when I talk about it being like open field running and how much opportunity
there is to build, that's what I mean.
Like, it's, it's the more you open your mind up to what's possible,
the more you're like, oh, yeah, actually like, it could be like,
a lot different than it is now because there's things that you just wouldn't you wouldn't do
as a person that you're happy to let the agent do because you're just babysitting it you know you don't
mind that it had to go run off and do those six things. It knew what to do. Do you support on-prem currently? I know you've mentioned AWS, GCP. Yeah, like custom hardware, where are you at with that? Yeah, we're nowhere yet. But what we do do is support your ability to build your own models. So if you have stuff you want to drive with System Initiative and you, like, have a specification, what you would do is just create the models and feed it.
But we'll start. Like, it's an obvious thing that as we add more coverage, the system gets more powerful. So like, you know, we do right now, like, architecture migration inside AWS, where it's like, hey, I want to move to Graviton. So analyze my infrastructure. Make me a plan for moving to Graviton. Like, show me what I would need to do, then analyze my code base to see
if there's anything in the code base that makes it so I can't move to Graviton, like, that loop.
Like, we can do that loop for you right now, and it's very cool.
Once multiple cloud support comes more online in System Initiative, which is coming quickly,
like, that stuff's just going to work between, like, Azure and AWS.
And, you know, it's not going to move your application.
It's not magic.
It's not going to, like, but if somebody needed it to,
suddenly you're like, well, the agent is glue.
So how's your application work?
You know, maybe write a little bit of glue that knows how to do that orchestration,
that knows how to like take the database backup out of Azure and then load it into AWS
and then run that script.
And then, you know, what you wouldn't have to do in the middle was all the work to be like,
well, how do I map the instance sizes for my Cosmos DB to Dynamo?
You'll just be like, it's going to get that right on the first crack, you know?
And it's going to get it right, like 100% of the time.
So that kind of mobility, like, it's just going to keep compounding
because then we're going to be like, well, sure, we'll do VMware.
Everybody wants to get off VMware.
And so, okay, once you have a VMware target and you have a whatever else target,
like that migration story is the same.
You're just like, well, okay, the raw infrastructure part's pretty easy.
The hard part now isn't how do I build the infrastructure declarations, right?
it's how do I move my app, which was always the hard part. Like, the sticky wicket was always, what are the actual application requirements as it migrates? But in those stories, that's the plan, right? Is that... um, I know I'm asking one more time about this on-prem situation, but how important is it to you to get there? I feel like it's becoming burgeoning, like it's new, right? This own-your-own-cloud kind of situation is newer, in like the last two years.
But there's a significant uptick in the desire.
We just had an outage.
We just talked about that, right?
Yeah, we'll probably start with oxide and open stack.
And then you start to move from there, right?
So like, you know, but once people start to adopt in a bigger way, then they start to bring
them, you know, like different, like if you have, if you have the API that you want, like one
of the things that's true in System Initiative right now is like we're still in the part where
we're writing, like, core documentation.
We're just trying to catch up with, like, all the things it can do.
But there's nothing stopping anyone right now from being like, hey, I have an API spec for
this thing I run on prem.
Like, take that API spec, build models and system initiative.
And away you go.
Like, like, go do it.
And it would just work.
Like, there's no magic to it.
Gotcha.
A lot of fun stuff, man.
So fun.
A lot of fun stuff.
Are you growing as a part of this?
Like, do you need more babysitters, more toll booth orchestrators, more directors?
I don't need any, like, in terms of employees.
Yeah, like.
Yeah.
No, I'm good right this second.
Like, what I need to do right this second is, is bring my products to market and get more people to understand what we've done and turn that crank faster.
And more employees, like, would be helpful.
But, like, at some point, the thing you're doing is getting the feedback loops moving that get you to the spot where you're like, okay,
like now it's, it's very clear what to do.
And like, you know, today I have staff.
I could put more people to work.
But what I need more of is people trying system initiative
and being like, oh, yeah, this worked for me.
Oh, this didn't work for me.
Oh, this is what I really want to do.
Oh, can I contribute this thing?
Oh, I really want to make, you know,
I really need to make models for this thing that I do.
Can I do that?
And you're like, yes, absolutely.
Here's how you go do that.
And so today, that's the game.
That's the most important thing in our business is just finding those people and helping them win and getting them successful.
And that naturally will lead me to needing more employees to work on the software.
But today, like, I don't need more employees to work on the software.
I need more people to use it to tell me, like, where we need to go.
Yeah.
What is it like to go to market today in 2025?
Like, what's the hard parts about going to market?
Yeah, it's weird, right?
So I can't speak for anybody but myself.
So I would say in infrastructure, what's weird today is that we spent a lot of time teaching
the market how to do the last generation of tools because we thought that was the best,
that was the best we could do.
And we did a really good job.
So, you know, I'm proud of the work that we did to build DevOps and,
and to think about infrastructure as code and sort of all of those paradigms.
The amount of AI noise has made any AI go-to-market really tricky
because, you know, people are just tired of somebody telling them,
hey, this AI thing's really cool.
It's going to change everything.
And you're like, not in my life, you know?
I'm sitting over here doing what I do.
Like, all it did was lie to me this afternoon.
I gave it a try.
It didn't work.
I'm over it.
And so, you know, part of the challenge is just that practitioner challenge of talking to people
and being humble enough in the face of their, of their incredulity to sort of stay engaged
and be like, oh, yep, I understand your incredulity.
Like, I know why, I know why that's how you feel. I know why you're saying what you're saying.
Together I'm going to show you that it can be different and then we're going to get there,
you know?
And so that's interesting because, you know, it's.
in my, to me, because in my career, you know, building configuration management or infrastructure as code or the DevOps movement, like, as it was coming up, there were, there were people that rejected those things, you know. There were people who looked at Chef for configuration management and they were like, never for me, snake oil salesmen. But it was pretty
rare.
And there certainly wasn't like an overarching technology story that we were a part of.
of where like earlier in this podcast, we had a serious conversation about whether it was a bubble
and we were, you know, whether the whole thing was snake oil. And it was just like, so like that
as a as a background noise when you're trying to go to market sucks. You know, like that's no fun
at all because you just, it's really difficult to cut through that noise and be like, hey, no,
this is like real practical, valuable, useful stuff. Because people's reaction to it is just like
it couldn't possibly be because it's the 15th AI pitch they heard this week.
And, you know, 14 of 15 were bad.
I think the other is that, on the flip side, the enterprise sell is way better. It's never been as, never been as smooth as it feels right now.
So like, take the same technology, the same things.
I show it to like a CTO at a global 3,000 company.
and they have an existential crisis about what it means.
You know, they're like, oh, no.
Like, and they get it, like, immediately.
And so it's very strange.
What do you mean by, oh, no?
Like, the implications are massive organizationally.
It's like, oh, like, this is the way our technology will work now.
And I know a little about how my organization works today and the gap between what we
will be able to do, what we can do now, and where we were, is so big that, like, they're like, I have to move, you know? Like, I have to do this,
because if I don't I'll be left behind and also the organizational challenge of moving all those
people to this new way of working understand what the technology is figure out how it goes like
that's a daunting task. So... but the reception... I quit, I'm going to the beach. Okay. Yeah, a little. That's a real choice. And this happened in the DevOps movement too. There were people who were like,
like I had meetings where we got to the end and they were like this is cool software we should
absolutely buy it, but I retire next year.
And I'm not taking on like a transformation journey.
I'm not taking on a transformation journey in, you know, in 2005, you know,
like that's not for me.
And I think what we see now is, you know, the enterprise leaders, the technology leaders,
practitioners who've been around a long time, you know, like the old heads,
they tend to get it and they tend to snap in pretty quick.
And they're like, oh, yeah, okay.
You know, they got to shake it out a little. Um, and it's, again, for a run, like, I'm gonna... Yeah, it's exactly what it is. Yeah, it works. Mm-hmm. And you're like, oh, I haven't been working out and I need to, you know? Like, it's sort of that vibe. Which is strange for me, because in my career usually it's the opposite. Usually the things I build, it's the practitioners who pick them up and are like, this thing's amazing, and then they go convince their bosses. And we're having this opposite motion, where it's like, oh, it's actually the top
down. It's that the top down then connects to those people, then the skeptics show up, and then they try it, and they're like, oh, I'm not a skeptic anymore. Because, you know, because I tried it.
You know, we had somebody, for example, in a, in a global, uh, one of those, like, global 3,000 sort of motions, where they turned the system on, they hooked up the agent, they asked it to go build some infrastructure for them. And they were like, hey, now what I want to do is, like, do it again,
and I want to repeat it.
And I was like, well, just tell it that you want to repeat it.
And they were like, what?
And I was like, yeah, just say, do it again, only in another region.
And then it did.
And they were like, oh, you know?
And they're like, oh, oh, oh, you know, because we spent a lot of time being like,
oh, what I should do is build an abstraction, you know,
where if I wanted to deploy my app in multiple regions, I got to bundle it up in a little
thing, I got to put a helm chart around it, and I got to put some variables at the top.
And like, oh, you don't have to anymore.
You could just say, do it again, and it would do it again. And like, that doesn't mean that that's the structure we'll have in the end, you know. But it's, it's... if you're an executive and you're listening to your team and you're watching your velocity and you're thinking about how the organization works, that story really hits hard. Because you're like, yeah, okay, that's better, you know? Like, that would be, that would be dramatic. And so, yeah, what we're
seeing is the top-down motion is working better. And the ground-up one, like, I got work to do. Like, all I'm doing is making practical examples and writing documentation, and, like, doing as much work as I can to just describe the details of, like, here's how this works, here's the way AI works, here's how it practically comes together, here's why, you know, here's what those shapes look like. And we're just going to have to do that work, you know, not just for us but, like, for the industry at large. Because we have to turn this into practical technology, or we'll have missed a real opportunity to move things forward. And right now it's largely not practical, right? It's still people talking theoretical. Yeah, it sounds like your go-to-market strategy needs to be top down, if it's not
already, Adam. Is it top down? I mean, it's pretty top down. Okay. And like, but it's weird that it's top down for me, you know? Once again, back to being transparent, like, I had an existential crisis and I, you know, had to pull out my prayer beads or whatever. Like, it's not... not, you know, like, I wasn't expecting top down.
And like, I'm happy that top down is there.
Top down's always part of your strategy.
Anybody who tells you that their strategy is bottoms up and never top down,
but they sell into the like large enterprises is a fool.
Like, large enterprise selling is always top down in the end.
So like, I'm not upset that it's top down.
But like, but I get a lot of validation from practitioners, you know?
So back to humility.
You're like, oh, I'd like it more if my friends thought it was cool.
What's, uh, go on more layer deeper here.
What's sales like for you then?
How does, can we talk about sales?
Do you mind?
I mean, I'm not going to talk about numbers or customers, but yeah, I'm talking about sales.
You don't have to say numbers.
As a concept.
I'd say, you know, how does, how does your sales organization work, basically?
How does a lead come in?
Do you go out and get those things?
Yeah.
Like, we're early enough stage that like, and when you're figuring out the go-to-market,
When you're figuring out, like, what do we have and how do you, how do you explain it to people and how does that connect?
Like, you don't have a sales organization in the middle because your sales organization needs to be enabled.
They need to be given a playbook.
They need to be given messaging.
They need to like, and then the great salespeople will take that playbook and rip it up, but they need to have one to rip up, you know?
And we're at the stage where both the market at large and our company, like, we're just, we're right in the playbook as we go.
You know, we're like, ooh, does that work?
oh, it turns out top down works better.
Great.
Let's go.
Let's do more top down, you know?
And like, that's, that's the, that's the stage of sort of selling that you're in is just like,
is, is your, you're learning about the motion while you're running the motion, which then
eventually gives you enough certainty where you're like, oh, this is repeatable.
And now I can grow a sales force.
But a great way to kill a startup is to, is to hire sales reps when, because you have a sales
problem.
And then, like, watch them not be able to go to work, you know?
Like, they just can't, they can't do it, because you got to, you got to tell them what to do. Do you have no sales, then, salespeople? No sales folks. Like, I'm blessed enough to have co-founders who, like, have been here the whole time, and I'm pretty good at selling. Not to toot my own horn, but I'm not a, not a bad sales rep, uh, you know. So, uh, so, like, largely a lot of that expertise is in house, so we can sort of hold that out a little longer than other
people might. Um, but it's really just, you know, once you know that that motion is there and you
understand it and you get a little bit more repetition under your belt and you're clear about
where the angles are, like, then you, then you go hire sales reps. I think one of the things that's happening in AI is that, like, the, um, like, it can really cause a lot of hypergrowth if, if the, if the messaging connects to the practitioners in a way, or if it connects to its market in a way. Like, they can drive a lot of motion. So, like, we'll see what happens. But always fun going deep. This was a fun one. I really enjoyed this. I hope so. It was, it was dope, man. I dug it. I always hope I'm not boring, you know? No, I'd be like, hang out with me for two hours while I... No, I don't think... about sales? Never boring. Well, just the end cap, that's more, that, that was my fodder, if no one else cared about that last five-ish, eight-ish minutes. I mean, look, if you're a startup founder or you're trying to go to market in this space, I can tell you for sure what you
should be doing right now is you should be in every single deal for a long time. Because you need
that product learning and you need the deal flow learning and you like the only way to get it is
to be there. And as soon as you put someone in between you and the deal flow, like you just,
it's like you're trying to understand the world through a pillow, you know? Yeah. Like you just,
you need the, you need that, you need the fire hose, you know? Do you actually give it a
name and like founder led sales,
is that what you call it?
Or do you just fly?
I mean, people give it names.
Yeah, people give it names,
but like, I don't know.
I just think of it as like as a leader
and as a product person and as a CEO.
Like, I don't know how to do it differently.
And I've had the privilege of working for truly great salespeople.
Like Barry Chris,
who was the CEO of Chef for a very long time,
is a world-class sales guy.
One of the very first things he did at Chef was took,
was watch me try to do sales.
And then he was like,
that was great.
I'm going to do it better tomorrow.
And then he did, you know, he just, he just showed, he was like, I'm going to rewrite the pitch,
trust me.
And I was like, great, I'll follow you.
And he did.
And it was incredible.
And it changed the trajectory of that company.
It was amazing, you know?
And like, you know, good sales guy.
Do not, do not underestimate it.
But that's because I knew what it was.
You know, that worked because Barry listened to me.
And he was like, oh, Adam, I see what you're saying.
I see how this goes.
Here's how we could change the way we're saying it.
You know, he didn't change any of the details about what I was saying.
But he changed how he was saying it. The layout, the packaging, sort of how it all came together.
You know, and like, but if you don't know those things, you know, if you can't do the first part,
which is explain it to a sales guy and be like, here's how we talk about it and here's why it matters
and here's what people value and here's what their response is.
And the best way to learn that is to just be in the trenches with people, you know?
Like, if you're listening and you have an infrastructure problem, you know, like, right the second is the window
where I will come sit in your house and build infrastructure with you.
And like, you don't have to pay me money to do it.
I'll just do it.
I'll just come, and it'll be awesome.
Like, we got on planes, we go wherever.
So, Adam, what do you know about Boots C?
I'm just kidding.
I don't know a lot about it, but I'll go.
Because, like, because that's the stage you're in.
That's how you learn to sell.
You know, you don't learn to sell things by staying at home.
You learn to sell by going out into the world and, like, make it happen.
So what you're saying is that there is an opportunity,
you will literally fly to them, sit here and hang with them and show them
System Initiative in their world, or how...
Yeah, I will literally
sit in your... I will literally come to you and I will, and I will help you automate your
infrastructure with system initiative because I want to, I want to learn, I want to learn what it's
like to do it from your eyes. And that's what, that's how you learn to sell the software, right?
You don't like, you don't learn, you don't sell software by putting up blog posts and being like,
I hope, I hope you read it and figure it out. Like maybe, but like the actual way you do it is
like, oh yeah, you have a problem. Amazing. I, I, I'm fascinated by you and your problem. And I
cannot wait to help you solve it and like let's solve that problem. And then you do that enough
and next thing you know, you've written enough blog posts about solving people's actual problems
and talked about it on LinkedIn enough that people are like, I bet this thing would solve my
problem, you know? And next thing you know, it does. And but that happens because you, you know,
because you get on planes. It happens because you like talk to everyone. It happens because you,
you know, in the early days a chef, I solve people's puppet problems, you know? Like they'd come to me
and be like, oh, I have this puppet infrastructure thing. And it's like,
Like, this thing's biting me.
And I'm like, oh, I had that problem.
That's why I wrote Chef.
But here's how I fixed it before I wrote Chef.
And, you know, spent a couple hours just hanging out fixing someone's puppet.
And like, you know, because that's how you build community.
That's how you get people to care, you know?
Yeah.
Are you flying lots then?
You on the plane like once a week?
Yeah.
I mean, I'm flying.
When's your flight today?
Are you flying today?
I'm not flying today.
And a lot of people, weirdly enough, don't want you to fly anymore.
Yeah, they're like, nah, stay there. Yeah, they're like, I don't know, man, that's weird. The pandemic happened, you know? Once again, old guy. Maybe we just do it on Zoom. Which is fine. I'm a face-to-face person. I'm happy to do it on Zoom too, it just hits different on Zoom. It's good. I do all my sales for us via Zoom. I fly nowhere, and we don't do a bad job at sales. But, but, like, you know, if you could run a couple weeks, you know, I think there's no question to me what's better, if what you're
doing is, is figuring out how to do complex infrastructure automation or complex sales.
Like, you know, the time I spent sitting at Meta when they were Facebook, like just with them
automating data centers, like, that was invaluable, you know? And it paid dividends in that
product for years. And, you know, you can't get that from a sales call. I can't
get that by being like, well, yeah, you should definitely try Chef, you know? I'm not going to come help you.
I don't want to actually be with you. Like, no, this is what I do for a living. I love
this. You know, I love infrastructure. Like, all of those moments, it's not a
chore to have to fly somewhere and hang out and automate some infrastructure. That's a
blessing, because you're like, yeah, I get to see this real gnarly problem. I get to see real
people like using the software. And you learn so much about how to sell it, about how it works,
about their environment, about what people need, you know? And there's no real
replacement for it. On Zoom, you can get some of it, but it's a lot harder, because
it's just harder to get people to open up, you know? Like, when you're in person,
you can be fun, you know, you can crack a joke. It's hard to be fun on Zoom.
It's hard to be fun on Zoom. It is. On Zoom, people come in assuming it's allowable to be distracted. Yeah. This is a call where I can be rude, right? And it's okay.
Like, I can check Slack, I can check email, I can look at my phone while on a Zoom.
But face-to-face, in real life, that's not a cool thing to do, right?
And you generally don't do that.
When you go sit in a conference room with people and run a project for a week or two, where it's all dedicated to their point of view and all dedicated to their problem, like, that's the best, that's the greatest.
That's the funnest thing.
And, like, I love doing that.
So, yeah, if you're listening and you're like, ooh, I'd love to do that:
adam at systeminit.com. Let's go.
But he's not hiring, okay?
He's not hiring.
I'm not hiring,
but I am going to come fix your infrastructure for you.
Okay, got you.
I like that.
Better deal.
All right.
If you've got infrastructure problems,
I feel bad for you, son,
but call Adam and he'll take care of you.
Exactly.
Precisely.
There you go.
All right.
All right.
Good stuff, Adam.
Thanks for going on.
Oh, it's always my pleasure.
Always a pleasure.
Thank you so much.
Stay cool.
Be awesome.
It was fun.
Super fun.
Okay, we covered a lot of ground on this one.
I'm sure you have thoughts on the AWS outage,
on the AI bubble, on the MS-DOS era for agentic systems.
Does Sam Altman really have AGI locked up in his basement?
Let us know in the comments.
Link is in the show notes.
We love hearing from you.
Thanks again to our partners at fly.io,
and to our beat freak in residence,
the one, the only, Breakmaster Cylinder.
That's all for today, but we'll be back in your earholes on Changelog & Friends on Friday.
Game on.
