The Changelog: Software Development, Open Source - Setting Docker Hardened Images free (Interview)
Episode Date: February 4, 2026. In May of 2025, Docker launched Hardened Images, a secure, minimal, production-ready set of images. In December, they made DHI freely available and open source to everyone who builds software. On this episode, we're joined by Tushar Jain, EVP of Engineering at Docker, to learn all about it.
Transcript
Welcome, everyone. I'm Jared and you are listening to The ChangeLog, where each week we interview the hackers, the leaders, and the innovators of the software world.
In May of 2025, Docker launched Hardened Images, a secure, minimal, production-ready set of images, and in December, they made DHI freely available and open source to everyone who builds software.
On this episode, we're joined by Tushar Jain,
EVP of engineering at Docker, to learn all about it.
But first, a big thank you to our partners at Fly.io.
The platform for devs who just want to ship.
Build fast, run any code fearlessly at fly.io.
Okay, Docker hardened images for all.
On the change log, let's do it.
This is the year we almost break the database.
Let me explain.
Where do agents actually store their data?
They've got vectors, relational data, conversational history, embeddings, and they're hammering
the database at speeds that humans just never have done before.
And most teams are duct-taping together a Postgres instance, a vector database, maybe Elasticsearch
for search. It's a mess.
Our friends at Tiger Data looked at this and said, what if the database just understood
agents?
That's agentic Postgres.
It's Postgres built specifically for AI agents, and it combines three things that usually require three separate systems.
Native Model Context Protocol servers, MCP, hybrid search, and zero copy forks.
The MCP integration is the clever bit your agents can actually talk directly to the database.
They can query data, introspect schemas, execute SQL, without you writing fragile glue code.
The database essentially becomes a tool your agent can wield safely.
Then there's hybrid search.
Tiger Data merges vector similarity search
with good old keyword search into a single SQL query.
No separate vector database, no Elasticsearch cluster,
semantic and keyword search in one transaction.
One engine.
Okay, my favorite feature, the forks.
Agents can spawn sub-second zero-copy database clones
for isolated testing.
This is not a database they can destroy.
It's a fork.
It's a copy of your main
production database, if you so choose.
We're talking a one-terabyte database, forked, in under one second.
Your agent can run destructive experiments in a sandbox without touching production,
and you only pay for the data that actually changes.
That's how copy-on-write works.
All your agent data, vectors, relational tables, time series metrics, conversational history,
lives in one queryable engine.
It's the elegant simplification that makes you wonder why we've been
doing it the hard way for so long.
So if you're building with AI agents and you're tired of managing a zoo of data systems,
check out our friends at Tiger Data at tigerdata.com.
They've got a free trial and a CLI with an MCP server
you can download to start experimenting right now.
Again, tigerdata.com.
So, supply chain attacks caused $60 billion in damages in 2025,
triple what they caused in 2021.
Every language, every ecosystem, every build system, they're a target, because who doesn't use Docker?
And Docker's response was to make hardened container images free for everyone. We have head of engineering, Tushar, here today to dive into that and all the things that come from it.
So welcome to the show, Tushar. Thank you. Excited to be here. And excited to talk about all of it.
Where do we begin in such a deep topic? I mean, you got vendors out there that had products around this.
you got the desire to secure the supply chain.
You have a brand to protect.
You got development to protect.
You got builds to protect.
You got a lot of responsibility.
I mean, it's a big job you have.
But where do we begin to unpack the reasoning and decision making behind this choice?
Yeah.
Maybe I can, let me talk about how we think about supply chain
security and our current role in it.
And then how that evolved to Docker Hardened Images.
And then eventually why we made it free and how we see that.
What exactly is Docker Hardened Images? We can just explain that too. Perfect. So let's start at the beginning, before
even that. I'm going to assume everyone knows Docker and Docker Hub. Everyone builds containers,
you use images. Docker Hub is effectively upstream for open source container images.
We get billions and billions of pulls per month. Everyone pulls from us. And these are the
repositories or images of like, you know, usable open source software and not just like upstream
base images, but like, you know, I want MySQL on Debian. You get that from us. That works
great. We've been doing this for a decade.
But we basically keep up with upstream.
As a result, images do have CVEs.
They have lots of CVEs, caused by multiple reasons.
One, bloated stuff, bloated packages that are built for usability first, as a result,
have many packages in them, or just not patching fast enough.
Harden images, and this is a concept that has started in the industry even before we launch
a product, which is, let's first minimize this problem.
What people were doing was: we'd leave scanners in production.
They'd see when there's a CVE.
They'd go alert some teams.
Those teams would have to go update and patch.
And this is the world we lived in.
Instead, why do we need all these images?
Can we first minimize them?
Let's get minimal packages that are only what we need.
Second, can we have someone patch these faster?
And then we drive that.
So we relieve the burden on engineering teams.
This is the movement that started.
It's very natural for Docker to do this.
And so we launched Docker Hardened Images as a paid product early last year.
These are hardened images, base images, app images, that are minimal, low-to-no CVEs, backed by an SLA by us.
When we launched, we had a limited initial catalog, and we've been aggressively growing that.
Our vision was always, like Docker is like broad adoption, get tooling and content out to everyone.
So vision was always, we need to make this accessible to everyone.
And then for enterprises, we provide things enterprises care about, compliance, and we can cover what is an enterprise package.
but for everyone out there,
they should be able to get a great starting point
and a secure starting point.
So that was a vision always,
we had to build up to that.
So that's what we got to last year
and launched that out there.
So this is a paid product for a bit there.
And this is a big deal because you're letting revenue go
by making this choice.
Yes, I know.
So it was a paid product.
What we did is basically launch a large catalog.
We've made our entire content,
our entire catalog, available for free,
nearly all of it.
What's paid is stuff that enterprises
would still care about.
So what's free is for, like, any developer,
any project, and open source projects
have been adopting it at scale.
In fact, like, N-A-N is probably
the largest open source project
that's moved to this, right?
Which is what we want.
We want everyone to have a secure starting point.
But now, if you want an SLA,
like any place where there's a C-SEL,
they want things,
I want an SLA commitment behind the patching.
I want FIPPS images and Stakes's images.
I want support and patching
on images.
that are old, like outside LTS images.
That kind of stuff is in a paid product.
I want deeper, more scalable customizations.
Those are in a paid product.
So we still have a paid tier.
And that's basically the add-on to it.
There's a CISO who cares about a bunch of stuff.
That stuff is in the paid tier.
Free is for every developer, every company out there.
Gotcha.
So table stakes, it seems, is SBOMs, SLSA.
I didn't know there was an SLSA out there,
but there is.
SLSA: SLSA build-level
provenance and cryptographic signing.
You're making those three things.
Those things are table stakes.
Everyone gets a nice SBOM.
Our build pipeline, SLSA is salsa.
That's how you say it, salsa.
Salsa.
That's how you say it, salsa.
It's fun.
There you go.
I'm not going to trip over my acronym
ability then and just do salsa.
And so that is how we build these.
So we have a SLSA Level 3 build pipeline.
We actually open sourced our builder for this.
What's been interesting is to do this,
we have to change how we build images.
is still using like Docker Bill can
underneath the covers,
but we moved away from Docker files
to our own semantic layer
to own new which build these well.
I can explain that.
And then our build system.
It's been interesting as we've done this.
Supply chain security is a broad topic
and secure content is one part of that.
Securing your build system is another part of that.
That's key.
So it's been interesting.
We've done this.
Lots of companies are interested in our build pipeline.
So that's the next thing we're looking at:
exposing that as technology to everyone,
so everyone can have
secure build pipelines.
And we'll keep going down this road, because basically securing the supply chain is just critical.
Not just for traditional container stuff, but, you know, we'll also see this with AI when we talk
about that.
You said you moved away from Dockerfiles.
Is that right?
Concretely, I mean, it's still using the same technology under the covers, but we have,
we built our own, we open sourced this too.
We built our own YAML syntax here just to make builds more repeatable.
Like, in Dockerfiles, you can shell out.
You can do stuff.
So we removed that.
So this syntax is very repeatable,
it's reproducible.
So you can actually do these in a way
where you meet SLSA requirements.
It's still BuildKit underneath the covers.
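To make the contrast concrete, here is a purely hypothetical sketch of what a declarative, shell-free image definition could look like. The field names are invented for illustration and are not Docker's actual open-sourced syntax:

```yaml
# Hypothetical, illustrative only -- not Docker's real schema.
# The point: no arbitrary shell steps, so the build is declarative,
# repeatable, and auditable end to end.
image: example/hardened-postgres
base: example/hardened-debian:12
packages:          # resolved from a pinned, signed package index
  - postgresql-16
  - ca-certificates
user: postgres     # run as non-root by default
entrypoint: ["postgres"]
```

Because there is no place to run arbitrary shell commands, every input to the build is declared up front, which is what makes the result reproducible and attestable.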
Can you unpack briefly SBOM,
what that means, why it's important?
And SLSA, which is obviously how you say it.
Adam, I mean, come on.
Yes.
I knew that the whole time.
Yeah.
Did you know the whole time, Jared?
No, I did not.
I have no idea.
I also don't know the first time.
It was good to learn.
SLSA is this?
This is like a different version of an SLA?
At some point, all
of us learn on the air here, which we've been doing for years. So we're not easily embarrassed.
Go ahead. Let us know what it means now. So SBOM is simply software bill of materials. All it means
is, with a container package, it's built in many layers. How do you know what's in it?
And then second, you want to know not just what's in it, but how is it built? Everything
about how everything was built and signed. If you look at SBOM packages, it can say, like,
okay, here's not just the top-layer image that's there and all the packages. Here's everything else
that's in it. And we also
sign it, we can capture aspects
of the build environment, where it was built,
various aspects of the node, etc.
So there's a bunch of detail that's there.
That's important for, one,
provenance, you know, where the thing came from.
Two, later, if there's a compromise,
you can trace what's all impacted and you can manage
that. So SBOMs are critical, and then
with containers, they're complicated just because of all the
layering that can happen. And so we manage
all that transitive dependency into, like,
a really full SBOM. And then many tools,
all the scanners, can pull from that
and understand what's happening there.
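As a rough illustration of what scanners do with this, here is a minimal sketch in Python that reads a CycloneDX-style SBOM and lists its components. The tiny inline SBOM is made up for the example and is far simpler than a real image SBOM:

```python
import json

# A minimal CycloneDX-style SBOM fragment (illustrative, not a real DHI SBOM).
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "openssl", "version": "3.0.13",
     "purl": "pkg:deb/debian/openssl@3.0.13"},
    {"type": "library", "name": "zlib", "version": "1.2.13",
     "purl": "pkg:deb/debian/zlib@1.2.13"}
  ]
}
"""

def list_packages(sbom_text: str) -> list[str]:
    """Return 'name version' strings for every component in the SBOM."""
    sbom = json.loads(sbom_text)
    return [f"{c['name']} {c['version']}" for c in sbom.get("components", [])]

print(list_packages(sbom_json))  # every package the image contains
```

A real SBOM for a container would cover every layer's transitive dependencies, which is exactly the cross-referencing against CVE feeds that scanners automate.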
There are three things to explain here.
There's SBOMs, there's SLSA, and there's VEX.
I'll cover VEX second.
VEX, then, is, I'm going to get what it stands for wrong.
I'm going to make it up and assume it's correct.
Someone can correct me in the comments if I'm wrong.
But it's like vulnerability exceptions, I think,
which are a lot of times you have CVEs that are reported.
But if you look through it,
the maintainer knows,
these don't actually apply.
And so you can produce VEX statements
and that is also, you know,
we stand behind that, or the upstream maintainer
stands behind it.
And then scanners can understand that
and know like, okay, these things don't matter.
So it reduces scanner noise.
And what's good about doing it this way:
a lot of times, what other people do is, like,
in the SBOM, they might obfuscate,
or they have a CVE feed that says,
here are the CVEs.
Go pull it from anywhere else.
then we'll tell you which things we think don't matter
and we're standing behind that.
It's just much more open and transparent.
And SLSA,
it's an open standard,
it stands for how you build a thing
and your build environment.
And there are various levels,
and SLSA Level 3 just means it's reproducible,
it's hermetic,
it is not being tampered with, the build environment itself
doesn't have access.
It's a way to secure your build pipeline
that we're standing behind.
So you know, like, okay, nothing's being tampered with in the build pipeline itself.
Gotcha.
And so that's, you know, to give an example,
all the machinery that goes into doing this stuff.
So the stuff we're putting in the free tier, you know, yes, we're giving up revenue.
But look, there's both a business model and an ethos here.
The ethos is very much: get this out to the community, drive proper standards.
And the business model is, sure, that's top of funnel to a paid product.
But as well, we do put a lot of effort and value
into what we give out for free, right?
Docker Hub is free, and we're giving the free tier for Docker Hardened Images.
There's a lot that just goes into that.
Let's just say I'm a working developer with a couple of servers out there in the wild
and they're all Dockerized.
Maybe I got a Postgres server.
And my base image is like Debian or Alpine or something basic.
And then I apt-get install Postgres and, you know, my Dockerfile does all the things or whatever.
What do I gain by switching to a Docker Hardened Image, and what do I potentially
lose, or what might I hit up against when I try to do that?
Yeah.
So in terms of what you gain, so like two ways, we generally have two flavors of
hard images, like a development image and a production image.
In production, in development, you want stuff.
Like you need a package manager.
You need shell.
You need debug.
You know, all these things, right?
You need the Visi debug.
Cool.
We'll give that to you, still minimal.
And even then, we recommend a multi-stage build.
So for your production images, you don't need that stuff necessarily,
so minimize that.
There are trade-offs here,
primarily to do with usability
and how you manage that.
First, the images you get from us,
maybe you need a few more packages.
We've customized built for that.
You run that through a build pipeline.
We'll add those in.
Those are still hardened packages we're putting in,
and you still get all of S-Bomb,
Salta, all that carries forward.
But, you know, if you've just built a project
and are doing it, a lot of times,
for a lot of people, migration is really easy.
Sometimes you've done stuff
where like, okay, I have to figure out what's my trade-off between usability and security here
and what am I managing there.
And if I've built my system in a way where I can't split this up well or I really depend
on like, you know, shell access in production, then those are tradeoffs I'm making.
And so those are typically the challenges that a number of projects can run into.
But honestly, for the most part from a lot of our customers, we hear like the vast majority
of their projects are able to migrate easily to this.
We are also looking at building like an agent here to help do this.
We've got initial versions of running internally.
We use it internally.
And then we'll start building that out, like, you know,
how much can we help people with complex migrations here.
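The dev-image-for-building, minimal-image-for-production pattern described here is the classic multi-stage build. A sketch, with made-up image names standing in for the "dev" and "runtime" flavors (these are illustrative placeholders, not real DHI tags):

```dockerfile
# Build stage: a "dev" flavor with shell, package manager, and build tools.
# Image names below are illustrative placeholders, not real repositories.
FROM example/node-dev:22 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# Runtime stage: a minimal "production" flavor with no shell or package
# manager; only the app and its runtime dependencies are copied in.
FROM example/node:22 AS runtime
WORKDIR /app
COPY --from=build /app /app
CMD ["server.js"]
```

The trade-off discussed in the conversation shows up here: if your operational habits depend on shelling into the production container, the minimal runtime stage removes that, which is exactly the usability-versus-security decision a team has to make.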
Yeah, that'd be super useful.
So what did adoption look like back in May and then what's it look like since then?
Is this something that everyone's just like it's a no-brainer?
Obviously, you might have some headaches, but they're worth it or people more tentative.
What's been the reception?
Yeah.
Maybe just as I did it first.
We did a webinar, I want to say a couple of weeks ago, something like that,
I forget when.
It was pretty broadly attended.
I think my favorite question from that was,
so is there any reason I shouldn't use Docker Hardened Images?
Right.
It really is like, no, you should.
There's really no reason why you wouldn't want to just have a hardened image, you know?
There's really no reason, right?
This is part of the reason for opening this up.
Like, you should go do this.
So early on, we had lots of good traction with the customers and working on enterprise deals.
Since we've open sourced
this. Also, to be fair, we launched this right before the break, like, I think December 16th or something,
not open source, but the free tier, that's when our launch was. But even then,
we saw immediate interest and pickup. And so we're tracking open source packages adopting this
dramatically. Like I mentioned, it's moved to it. And then with customers, it's resonated. So, like,
CISOs like it, heads of platform like it, in part because, you know, we're seeing the play we'd
hoped for, which is, okay, now
someone on the team, typically
someone has a mandate of, like, oh, I should go,
and people care about this problem. The
barrier for them to go adopt and try it out
and see the benefit is low.
It's basically zero. They can just do it.
And then they're like, okay, now I want
all these, you know, additional
security guarantees, that is really great. Now we can
have a conversation about the paid
tier. So
we've seen an uptake for sure,
and this motion playing out where,
like,
open source adoption,
top of funnel with companies,
and then we start working
through this.
You know,
this is still one of those things
I think people have to take time to work through,
because, like,
when people adopt it,
they have to get security on board,
they drive it,
but we're starting to see,
starting to see this grow quite a bit.
You mentioned releasing this announcement
right before the break.
Adam, didn't you have some feedback
on that timing of this announcement?
I think it was like,
I think anybody would have feedback on that timing.
I mean, come on now.
Yes, the worst time ever.
So let me counter that.
Is it the worst time ever,
just because it's before the break?
I don't know.
I feel like holidays are actually now, like, when all AI products get released.
It's like the moment you release all the things anyway.
Right.
Everyone's at home tinkering.
They're like, we're going to get our product out there for people to tinker with it.
Yeah, I didn't know it was December.
That's for sure.
Right.
Yeah, I was like, I don't care what month it is.
I'm here and I'm innovating.
You better follow me or be left behind.
I suppose there's no really bad time.
It's just that whenever you want to get good fanfare,
now you're playing a month-long launch plan versus a single day with a great precipice
and a lot of traction.
and a lot of attraction.
Now it's just,
I think you just made it hard on yourself,
basically.
Yeah,
it's fair enough.
We'll take that.
Honestly, part of it was just like,
let's just do it and get it out versus coming back in January.
Let's just go.
Right.
That being said,
though,
I mean,
there's no good time like secure today.
You know, like, with security, I would rather be secure today than tomorrow.
Yeah.
In every case, because don't delay security.
I think, you know, just timing is not the best because we couldn't do the show in December.
We were away.
We were taking our breaks.
We're talking about it now.
But here we are, January 28th, talking about it.
Super important, though.
I mean, I think, you know, the one thing that I'm reading here is obviously that, you know, when you make a change, when Docker makes a change, when you change the
default, it's a ripple effect across the industry.
Yes. And I think about, one, the effect of that ripple and then two, creating that ripple.
My gosh, behind the scenes, what kind of thinking, what kind of specifications, what kind of
planning? How do you architect this new vision, this new build pipeline, from BuildKit to all
the free artifacts that are given away, and then re-changing how you productize
it to create revenue as a company? I mean, it must have been a heck of an
engineering undertaking.
You know what I mean?
It's a whole company effort.
Honestly,
my job is the easiest
in all of this.
I mostly say,
hey,
we should do this.
We should do this.
Let me know when it's finished, guys.
And then,
you know,
look,
Docker's got great talent,
and so people rally
and do stuff.
In this case,
like this is always part of the thing
we wanted to go do
and drive it.
But then, yes,
there's lots of stuff
to go figure out.
Starting first and foremost
with
don't put stuff out there that isn't ready.
It's true for everything,
but if you're going to make a big,
broad community announcement like this,
the quality,
the underpinnings of the technical
security, have to be really stellar.
We simply cannot do it if you don't do that.
In part because,
you know, not just from brand damage,
someone will call us out, that too,
but first,
we are the source of the supply chain.
People will take what we put out there.
And yes, we'll get,
you know, if we do something bad, we'll find out, we'll figure it out.
But really it's like a responsibility of like whatever we're doing is going out there.
So we have to deeply, deeply care about that.
And so that comes from like just the team that's on this and the experts we have on this
and like going deep here.
Right.
We've got like decades in these areas.
And they've got a bunch of strong people here working on it.
And then there is our product and strategy and all that we work through, like, okay, how to actually get this out.
Manage community, manage customers through this, and work through all of that.
So yeah, this is definitely a whole company effort for us to go take on.
And it's the start of what we're doing.
This is just a start, right?
Like the vision is secure your entire supply chain.
For the Java folks, it's like, you know, from void main on down.
Like, we want to address everything.
We can get to packages, get your build pipeline, secure your entire supply chain,
and we can get policies out, because we sit everywhere in your SDLC,
from laptop to CI to production to registries, content at rest.
Let's try to get to the point where we can secure
all of it.
Well, friends, I don't know about you, but something bothers me about GitHub Actions.
I love the fact that it's there.
I love the fact that it's so ubiquitous.
I love the fact that agents that do my coding for me believe that my CI CD workflow begins
with drafting Toml Files for GitHub Actions.
That's great.
It's all great.
Until, yes, until your builds start moving like molasses.
GitHub Actions is slow.
It's just the way it is.
That's how it works.
I'm sorry, but I'm not sorry because our friends at Namespace, they fix that.
Yes, we use namespace.so
to do all of our builds so much faster.
Namespace is like GitHub Actions, but faster.
Like, way faster.
It caches everything smartly.
It caches your dependencies, your Docker layers, your build artifacts, so your CI can run super fast.
You get shorter feedback loops, happy developers because we love our time, and you get fewer
"I'll be back after this coffee when my build finishes" moments.
Because that's not cool.
The best part is it's drop in.
It works right alongside your existing GitHub actions with almost zero config.
It's a one line change.
So you can speed up your builds, you can delight your team,
and you can finally stop pretending that build time is focus time.
It's not.
Learn more.
Go to namespace.so.
That's namespace
dot S-O, just like it sounds.
Like I said, go there, check them out.
We use them, we love them, and you should too.
Namespace.so.
Can you estimate the time to shipping, from the point where the phrase
Docker Hardened Images was, like, written on a whiteboard somewhere, or in a product
roadmap, like we're going to do this someday to deciding we're going to do it now.
And then from that point till either December 16th or May, when you actually shipped the original.
Let's see.
I'll try to jog my memory.
So I think what I'll say is,
so when Don joined,
Don and I would say, I think,
Feb of last year.
That's when this idea was rolling around,
and the first team was like, yep,
we're going to do this, we're going to launch it.
So I think from that point,
we found people with the skill sets
and, like, formed a team at that point.
So, I don't know, early February,
mid-Feb, something like that.
And we got it out into,
like, a limited release,
like an early limited release in, I want to say, three months.
With customers, GA, you know, within the next three months.
So that got us to, like, summer-ish.
Then we grew, kept growing, going, growing.
And then, I think, we knew we wanted to make it free.
We weren't sure when.
But I think the real thing of, like, oh, we should work towards free in
December, I want to say it was, like, honestly, maybe early November.
Mid-November was like, okay, we're doing this.
Or like right around, yeah,
close to Thanksgiving, whatever we call it, somewhere in there.
And so from then until launch was probably like a four-week sprint.
That's all pretty good.
That's all pretty impressive.
You said you have a good team there.
I mean, that I was expecting longer.
So I guess, you know, congrats to you and the team for really a pretty quick turnaround.
Yeah.
Within this day and age, you don't have time.
Everything.
You don't have time.
Yeah.
If you're not getting it done yesterday...
You better get everything done yesterday.
Because, sure, the subject matter is containers, but, like, you know, this mode of working is critical for us for everything we're doing.
And, you know, when we cover AI, we'll talk about that, like, it's in that space in particular,
the timeline I just said, in the AI world, has to shrink 10x.
Yeah.
So this muscle in general, as an engineering organization, is critical for us.
This part of the conversation talks about the timing.
Eight months is what I roughly captured there, to go from
Docker Hardened Images to GA, to let's make it free, let's release it, and it's released.
But the tension behind it has to go back beyond that.
Because one thing that was mentioned in the announcement post was,
I'm going to quote this.
It says, and while some vendors suppress CVEs in their feed to maintain a green scanner,
Docker is always transparent.
So there's this, it seems like if I'm reading this correctly,
you got Docker, which is, you know, the supply chain essentially.
of images, Docker Hub and the trust factor.
And you've got vendors out there who have been doing versions of this seemingly not being fully transparent, making their builds green when they're actually not green.
Can you speak to not just the cycle to get here, but the tension that rose to say, we've got to take this on, we've got to make this the default standard that you've made it?
Can you speak to the tension, and what it took
to sort of own the responsibility?
Yeah, absolutely.
So, you know, these ideas go way back, right?
You can go back to when Google Distroless started, right?
And so, like, none of these ideas are new.
And so the discussion was being had, like, what should we do here, what to build,
or how to manage this.
I think there's been lots of, like, past discussion of how should we do this.
And then it's like, is there a big enough business here?
Should we go after this?
How do we think about this versus
what else we're doing
across the company, etc.?
So this discussion has definitely been there
for some time.
And like what's the best way to do it?
I think a few things came together for us
in February.
One, business clarity
of, like, yeah, we're doing this stuff,
we're going to do it.
Second, I'd say, on the technical side,
like, clear clarity
on how we should do this.
Like, no, the fundamentals were like,
we're going to do this differently
and here's how.
Even concretely, like,
VEX statements are a thing
the industry is adopting now
and we're helping drive that.
We've spoken to all the scanners,
like, this is why you should adopt it.
It's a standard, but there's not universal adoption yet.
And we're like driving that forward and making that happen.
So as I said, the tension has definitely been there.
I've been here a year and a half,
and I'd say even before my time, it's been there.
It's one of those topics that's, you know, been in industry for a while.
And then the real thing was like, nope, we should do this.
And that was both business clarity and second, I'd say, technical clarity on how to do this.
And then on the speed, we built a bunch of stuff, but we get to leverage a
lot of Docker underpinnings, right?
We've got BuildKit here, we've got Docker Engine, we've got Hub here, like, we get to
leverage all of that for how we get to go drive this and make it happen.
Yeah, I think the core part is just realizing, like,
we are a core part of the supply chain, and so we have to take on not just the kind of stuff
we do, but the broad responsibility of how to secure the supply chain.
There's both a business opportunity, but it's also almost like a responsibility, right,
given our position where we are.
Can you go deeper into this VEX you've said a couple of times, Vulnerability Exploitability eXchange?
It seems like, and I'm not steeped deep in this.
I'm learning.
You know, that's, that's, I'm pulling back the Google results on this stuff.
Yes, I still Google here and there because it's just easier sometimes.
It seems like this is a way for software suppliers to be transparent
about particular areas where you're still vulnerable, but you're able to do so,
it seems, in a community mindset where, hey, we've got this thing, we're delivering it.
It's not fully green.
And these are the areas where it's not green.
Can you speak to the behind the scenes and what that exchange actually is?
Yeah.
So the way it's actually used is, you can have packages.
So typically, if you're a distro, you have packages, and then you have your own CVE feed.
Okay, we will tell you what are the CVEs here.
And that's one way to control it.
You have this root issue often of, like,
well, there are CVEs, like, you know, the CVEs are in the national database, but they're not actually exploitable CVEs.
And, you know, we don't think it's actually exploitable in our code base or the way this works.
So if you publish your own CVE feed, you can just not publish it, and that's what some do.
We take a different approach, where we publish a fully transparent SBOM.
Scanners can take that, and they pull the central CVE feed, and they see the CVEs.
Then we publish the VEX feed that says, okay, here are the ones that we don't think matter.
In the other approach, you're missing
that transparency and that logic of, like,
oh, here's everything, here's what we think doesn't matter,
and here's why.
Which is a better approach,
because then we can talk about it, right?
And we can see whether you agree or don't agree with us.
And you can figure that out.
Also, like, for CISOs or anyone else,
it's very clear what's happening.
So that's the sort of thing we're doing,
the approach we're taking here.
Now, this has been a standard for some time,
it was just, like, never, as far as I can tell,
broadly adopted yet,
because, like, with the scanners,
we're working through it.
Some had it, some didn't, and now we're working with all of them,
and they're all getting it in there.
and they're all getting it in there.
And what it seems like is it's a focus on what is exploitable versus the things
that are not.
So you still have, let's just say security concerns,
but these are the ones that we should pay attention to.
These are the ones that are actually worth paying attention to and actually cause real
harm or damage.
It's both.
We put everything in there.
It's a way to annotate stuff.
So we put everything in there, like, here's everything that's coming with it,
but then also explicitly which ones
are not exploitable, we put that in there too.
So we cover all of that in there.
What about this tension?
Can you go, can you go maybe one layer deeper in terms of who has been the supplier?
So you got Docker, then you got third parties, not so much by name necessarily, but like,
what are their roles in the supply chain?
And why has this move to a free tier with these kind of table stakes requirements been a great move,
compared to the prior, you know, the prior way.
Sure.
So maybe the way I'll talk about this is,
yes, I think, like, you know, Docker Hub has been,
I'd say, easily the biggest registry for open source container images.
There have been other companies that have come up that are selling hardened
container images, right?
And so that's been a model, there are other business models that companies have come up and started doing.
So then the question for us was, like, well, one, it's very natural for us to do that.
So we should look at doing that.
And it's a thing we'd discussed and not yet done explicitly.
So it's a very natural thing for us to go do.
I think the tension talk is more just, like, Docker Hub could have remained just the open source, usability-first place,
or really it's like, no, like,
Docker should take on supply chain security all up.
And I think that was the sort of change in like
our product and business thinking.
It's like, if you look at our sort of product strategy pillars,
supply chain security should be a core part of it,
because we are a core part of the supply chain,
not just for images, but also wherever our Docker Engine is, right?
It runs everywhere.
And so we should take those two things and drive supply chain security everywhere.
And so that was, I think, the sort of mental frame change that was needed here for us to go drive this and go do this.
And now the other thing for us, the thinking in making this free, has two parts.
One, it's a general approach of broad adoption, and then using that as the funnel.
But maybe second is like, you know, we have a holistic platform and supply chain security
and secure content is one part of that.
So that's why for us, maybe there's some amount of, you know, revenue impacted, but I don't
actually think so.
because for anyone who needs compliance guarantees,
there's a paid tier.
This is a broad adoption.
But this is a,
this is one pillar of our business,
not the entire business, right?
So that lets us go do things where we get broad adoption for the community.
It seems like very much a long-term play.
Like, this is not a short-game play.
This is a long-game play.
And, you know,
Jared,
we just,
we're about to release this episode.
I think it might be out.
I don't know if it's out or not.
About securing npm.
This reminds me a lot of that.
I'm wondering,
to start, if, while you were in this tension period with the ecosystem and realized the responsibility,
and then made this announcement back in February internally, hey, let's do Docker Hardened Images,
let's actually put the effort here,
let's do all the research,
let's figure out what we have to tie together,
and let's make a concerted plan to execute,
How did you look at the rest of the world in developer land to say, where are the supply chain attacks happening?
And what are their issues?
Because there seems to be a responsibility you've taken on, and to put it bluntly,
GitHub has not with npm, at least based on our current examination of the situation.
You've taken the responsibility and made a concerted effort and launched it in eight months.
And you've done it regardless of maybe hearing this conversation, regardless of potential
revenue loss, I think it's a long-term play.
And you're adding trust to the layer and security to the layer, which is good for your brand long-term
and good for Docker and me.
Like, I got a home lab.
I'm launching Docker.
I'm on it daily.
You know, I want that to be trusted
and secured.
How did you look at the rest of the world when it comes to supply chain attacks or supply chain security?
Was NPM one of the examination targets for you?
Yeah.
So, a number of things to unpack there.
Absolutely. One thing before we do that, just the revenue topic first.
I actually think of this as a revenue accelerant for us, to be really clear, right?
Like, I actually think this is revenue.
Like, we're having this conversation because we launched Docker Hardened Images for free, and listeners will listen to it.
Hopefully, many people will go use it.
And within companies, they'll want the stuff the CISOs want, and that should really lead to them calling us.
So, like, the reach basically should expand here, right?
So I view this as a revenue accelerant for us, and we're starting to see that play out.
Just in the very early days.
On the other part, you're absolutely right.
Look, I can't tell if supply chain attacks have actually gone up or we just talk about them more.
But there is definitely a marked increase here, right?
The npm stuff, but also, like, the Shai-Hulud attack that just happened.
Side note, I love that name.
I just watched the show.
And then I was like, ah, now I know where it's from.
So we absolutely saw that and see this happening broadly.
And when we look at that, this is what I was saying.
This is the start.
Right now, with Docker Hardened Images, we've started securing a critical part of the supply chain.
There's a lot more to do.
There's a lot more that's in your supply chain.
There's packages, there's runtime.
And so our ambition is to get through all of it and start looking at it all,
mostly because it's just the attacks are increasing.
And supply chain attacks are the ones that have massive impact, right?
They just ripple out.
And so what you see is a critical
business need, and a need for, like, software across the world.
And then it's also just a critical foundation that's needed,
I think, if you're going to live in a world where AI agents are writing more software.
Like, if you don't have secure foundations,
that life is just going to get way, way worse.
And so as we look at our AI play, too,
we think secure content
and supply chain security are
a critical pillar for that, too.
So that was absolutely clear,
all of these things.
And to be clear,
we've not addressed all of them,
but this is why this is,
this is not a one-and-done.
We've got Docker Hardened Images.
That's good.
No, this is a pillar.
It's a pillar.
Now we're going to work on the pillar.
So one thing we didn't cover was the breadth of the announcement of what was happening here.
So if I,
if I understand quickly,
and correct me if I'm wrong, it's over 1,000 hardened images, and Helm charts are now
available.
That's a lot.
You're building on Alpine
and Debian. These are familiar. These are trusted foundations people are building on. And it's obviously
being announced as open source under the Apache 2 license. So DHI is now free under Apache 2.
That's the current state of affairs. Where do we go from here? Like, what is in those 1,000 hardened
images and those Helm charts? What is not there currently? What needs to be there? What is the,
if now is the flag-planting moment, you know, where else are you going to go from here, milestone-wise?
Yeah, so a number of things.
One, we're going to do a lot of hardened system packages.
Today, a lot of system packages that you want come from upstream repos.
We're going to start offering our own hardened system packages, built from source.
We'll patch ahead where and when needed.
So we've started doing that, and that'll come out.
We're also going to look at language packages.
We'll attack that language by language,
go into that and get those out.
On the enterprise side, we'll look at
long-term support. Typically, packages have, like, you know, an LTS where after, like, two years
or three years, you stop getting patches from upstream. You can buy long-term support
from us so we can continue patching. And typically enterprises, you know, for various reasons,
move slower. And so that's important there. So the way to think about this
is, like, expand the breadth and coverage of all the things, of all the content you will care
about. Let's get that out. The next thing after that for us, I think, is
the secure build pipeline.
This is now another thing we're trying to look into,
seeing all the interest here.
And so we have to figure out how exactly we'll do that,
but we want to get this out so anyone who's building software
can run on us and get the benefits of a SLSA Level 3 build pipeline,
and work on getting that out from there.
And then I'd say the last thing,
and this is, like, a part we've barely started, is,
well, I really want to, like, you know,
get some agents out here that help you with either migration,
or help you with, like, understanding your state of affairs,
and getting you to, like, how to get things secure.
Like, basically everything we can do to have the foundation to make it secure
and then help you move towards that and manage that.
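In practice, adopting a hardened image is mostly a base-image swap. Here's a sketch of what that might look like in a Dockerfile; the repository paths, tags, and file names are placeholders invented for illustration — check the DHI catalog for the actual image names and available variants:

```dockerfile
# Hypothetical example: build with a -dev variant that still includes a
# toolchain, then run on a minimal hardened runtime image.
# Image names and tags are illustrative, not real DHI repository paths.
FROM example.org/hardened/python:3-dev AS build
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --target=/app/deps -r requirements.txt

FROM example.org/hardened/python:3
WORKDIR /app
COPY --from=build /app/deps /app/deps
COPY main.py .
ENV PYTHONPATH=/app/deps
# Hardened runtime images typically run as non-root and strip shells and
# package managers, so interactive debugging happens in the -dev variant.
ENTRYPOINT ["python", "main.py"]
```

The point is the small delta: application Dockerfiles mostly keep their shape, and the improved security posture comes from what the base image no longer contains.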
So I've been cruising your hardened images directory or catalog as you do.
And I've been looking at a few of these and it's very cool.
I have some questions around like the security summary.
So I'm looking at the PHP image based on Debian 13: 92 packages.
So that's pretty slim.
Seven tools included if you're on the PHP image.
And it has one medium severity vulnerability,
10 low severity vulnerabilities,
six unspecified severity vulnerabilities.
I assume those are upstream vulnerabilities that you know about
because you're not doing hard in packages.
Like those things are just like you're patched up as far as you can go,
but there's just known vulnerabilities.
Is that what those mean?
Good question.
So the lows and unknowns
make sense, but the medium one, I'm going to go look at afterwards.
Typically, those, if there's any high, we'd push upstream or we'd go ahead and do it.
Medium should fall in that category, too.
I think for us it should be something we go after soon.
So I'll look at that one afterwards, but generally highs and criticals, of course,
and even mediums, we try to get ahead of and drive quickly.
So when you have, like, say there's this medium here and we don't know what it is,
I can't seem to find if it lists what that is somewhere.
I think that would be a pretty cool addition.
It should be.
If it isn't, yeah, it would be a good addition.
Yeah, that would be a sweet addition.
I do see full security details, and it still shows the vulnerabilities list,
but I can't seem to find it at the moment.
Anyways, is that then, is that a known CVE against a package, against one of these 92 things that have been installed?
Yes.
But that doesn't necessarily mean that there is a patch, or is there a patch that just hasn't been applied?
If there was a patch, we apply it really fast,
like within hours. Likely,
there's not a patch, you know, or in the unlikely case,
yeah, there's not a patch.
But even then, we typically
try to go work at it and
get a patch in place.
That's what we do for highs. So for mediums, it depends
a lot where we are on that. Yeah, I mean,
you have lots of packages, you've got lots of images.
It's probably a
never-ending task, just to continuously be,
Yeah, so behind this, there's a machine
running, right?
Yeah, totally.
Software and people.
So then the other question I have is about the Scout Health Score, which maybe is new to me.
Is that new in general or just new to me?
Scout is something we've had for some time.
Scout is our own scanner that we've had.
And it's our own scanner that scans everything.
And now we've just put it in here,
so you can see the health score that's there.
We've had it, you know, in Hub, where package owners can see the health score for their packages at publishing.
And now here we've done it
so anyone can see the health score of the packages
we're publishing here.
Yeah.
With DHI.
Yeah, that's super cool.
So this one has an A score and it has all the reasons.
Like no high,
no high profile vulnerabilities,
no fixable, critical or high vulnerability,
signed supply chain attestations,
no embedded secrets,
no embedded malware,
like on and on and on.
And I assume there's a score for every image you all have on here.
There should be a score for every image,
and there should not be any score
lower than an A.
And if there is,
I'll follow up on that.
Well,
if we have a,
let's see,
filter by Scout score and just say, show me the ones that are Bs or lower, and then, you know, get to work.
Actually, you mentioned one thing there, which was, like, no embedded secrets, et cetera.
So that's another thing, where it's not just about reducing the packages or patching CVEs.
We actually go through, like, you know, a list of stuff, of what makes the thing secure, and ensure that's there.
Like, there's no credentials in there, none of this stuff.
And keep in mind, we're getting lots of patches from upstream all the time.
And we scan every single one of them.
So there's a mixture of AI running here and people to make sure what's happening is
secure. It's cool stuff. It seems like a good step forward for everybody. Honestly,
for me, it's been, I'm inside the house, so it's, you know, biased to say, but, like,
seeing the team here just run at this and define this and have very strong opinions on how
to approach this and do it, it's been, it's been really fascinating, a privilege to do that,
right? As I've come in here and seen everyone who works in this space and does it. Because there's a lot
of depth in here as we've done this. So I'm excited now, and impatient, to, like, do all the other
stuff that's part of our vision in the space and build this out.
And I'm hoping, for everyone listening, you should go try DHI.
There's no reason not to.
It's too easy not to or too easy to, I guess.
Too easy to and no reason not to.
Yeah, too easy to and no reason not to.
I like this is a better way to say it.
There you go.
So this makes me feel like containers are the way even more so now.
They've already been the way for so long.
And this has been the Docker story arc since, you know,
Solomon to now, essentially, is that it took the world by storm.
We now have the containerized way to do things; deploying applications has become easier than ever.
And, you know, if there was any scrutiny on how that plays out, well, now that you've made this security mindset, a first-class citizen in the way you deliver, which seems like the obvious way to do things.
Like, to not do it this way, it seems like that's just not right.
Yep.
Containers are the way.
Would you agree with that?
I think containers are the way.
I mean, like in general,
I don't think containers are going anywhere,
even as application paradigms are changing.
At the end of the day, containers are a great way
to bundle up software, package it well,
understand it, and deploy it across systems.
Mm-hmm.
1,000%.
Can you speak to the ecosystem and the partners?
So external, Google, Mongo, the CNCF,
Snyk, JFrog,
a lot of the players in this space, CircleCI, Socket, even.
We have friends at Socket.
Can you speak to partner level involvement in orchestrating all the things, I guess?
Yeah.
There's a ton of partner involvement, right?
And like a various kind.
So there's scanners.
So, like, Wiz, for example, we work with them to integrate stuff.
There are, like, all the various scanners.
So there's a bunch of scanners that have to integrate with us.
So we drive that.
There are CSPs that pull images from us and
understand that. So working with them. There's also interesting things we can do with them over time and figure out various, you know, they have trust centers, so have them integrate with us too, right? So all of their, effectively their own scanners and their own registry caches, have them integrate with this. We'll do that. Then there are other players in the sort of, I would say, supply chain or security space, right? So Snyk or, sorry, Socket is interesting. We actually have a partnership with Socket that I think we announced, where you can get
images from us and we'll integrate Socket.
And so you can get the Socket firewall
and get their benefit over, I believe, PyPI or npm,
I forget which one they're on now, I think npm.
And so, the number of, like, the ecosystem, like, Docker
is in general there, you know, the DevTools
space has lots of players in it.
And Docker is just, like, such a core part of the nexus.
So we have lots of players; we integrate with those to do this.
So when we do this, we have a whole arm that goes on and drives
various partnerships.
We have strong relations with many people here, like Microsoft.
Actually, at MS Build, when we did the LA, the limited availability release,
I was at MS Build last year when you were there, too.
That's where we announced it.
And we had an early integration with Microsoft for this,
where they would take Docker Hardened Images,
let you deploy them and keep up with updates,
and get those deployed through their pipeline
and into their scanners.
And so there are a number of these kinds of integrations
we're doing everywhere.
And the way maybe to think about this is, if you want to go drive broad change and impact,
us launching it is critical, but we have to go do it through all the various,
the key sort of, you know, systems and players in the space.
Like you can't do broad impact without working with partners.
What does that look like? Do you have to, if I'm one of these partners,
do I get early access to documentation?
Do I get early access to maybe an embedded engineer that's, you know, works for Docker,
but actually works for me because they're inside my organization,
helping me better understand and organize the way we work around securing Docker
or working in orchestration as a partner.
How does that play out when it's actually boots on the ground,
people getting commingled, how does that work?
It varies based on the state of the partner and the state of where we are.
So, for example, we started early with Microsoft, on that one we just discussed.
In that case, we had PMs and engineers connected,
generally everyone on a joint Slack channel or something.
And then we're deeply connected.
And in that case, we're doing some co-build.
And it's very early.
So they're getting early access from us and we're working together.
When you're at a later stage, then you just have a connection with partnerships and product people, generally.
And then you drive that forward.
So depending on where we are, we do this.
But maybe the approach and the philosophy here, very much, is: succeeding in partnerships is not the job of, like, a partnerships
department; it's the job of, like, our company, with everyone, right?
So we figure out what's needed and drive that.
That's the general approach for everything, right?
There are, yes, departments and folks and stuff, but, like, we have to operate
as, like, one.
And so depending on what's needed, we'll have engineers plugged in, we'll have SAs
plugged in, whatever's needed to, like, manage this and do this as we work through it
with everyone around it.
And so it varies a lot, where we are and what's needed, as you work through it.
But, like, it absolutely ends up becoming a cross-functional team effort
by default.
Is there a framework or a specification or a substrate that can be borrowed or extracted from all the
work you've done, your team has done for the last eight or nine months accomplished in this
mission?
I'm just thinking, like, if we want npm or any other registry out there to have similar characteristics
or similar concerns around security, is there a substrate here that can be extracted?
That says, this is the way we secure registries across the board. Because if we look down the line, you've got, you've got the idea that you've mentioned, hardened MCP servers, for example.
That's a version of this thing around AI.
You know, AI:
how can we secure AI?
Then you've got things like maybe hardened libraries or system packages.
I'm thinking, like, apt, or any time you install anything. Like, is there an extractable thing here from this effort that you can lead or provide a spec for?
That's a good question.
So, again, the first thought that comes to my mind is, actually,
I think the first thing is, like, at least
extracting, like, the principles
and the
goals, right? Being very clear about that.
Like, we have some core principles that we've applied.
And I say that because, like,
the hows might differ
depending on the domain, on what's needed,
right? For sure.
And so then, second, like,
what can you extract, like, at a technical level?
To that, I am not
sure. I'm sure there's stuff here,
but especially if I think about stuff
outside container-images land,
then it's interesting and, like, a little different.
But the principles definitely do transfer, and the approach does,
in terms of common things
that you can pull out there.
I think there are things here, for example,
like the way we're building.
Yes, we made it for images and containers,
but I suspect that, like,
if you sit down and look at it,
core parts of that stuff you can pull out and make work for, like,
non-container stuff too, maybe, right?
And I'm winging it a bit here when I say that,
but there's, like, core parts of, like, how do you build,
how do you build a pipeline,
that should apply, I think, a little more generally, as an example.
There are parts, like maybe some of the AI agents we're running, that can apply more broadly,
because they run at the code level to verify the security of all the thousands of upstream PRs,
patches we're getting, right?
So there might be stuff like that.
But if I step back and think, like, okay, how do you secure non-container registries, et cetera?
The first of the commons very much is, like, let's extract the core principles,
and then we can see what components carry over.
Yeah.
Do you have that written down, like a manifesto?
And if not, can you, can you give it to me?
Yeah.
As I was saying this, I'm like, I think I teed up the next question.
Yeah, I really want that.
I mean, I really do.
I think.
Yeah.
Because I even think about like, I want, I want your what and your why.
And I kind of want a little bit of your how, but not all of your how.
Because my how is going to be a little bit different based on my context, right?
I want to know your what and your why and how you think about the problem because I want,
that's the, that's the intellect.
That's the intelligence.
My how is going to be different if I run, you know, a different kind of registry that is not at all images or container images or around the things you care about.
It's going to be a way different thing.
So don't tell me the how.
Give me the what and why.
Yeah, yeah, yeah, yeah.
Yeah, absolutely.
That's actually great.
And then also, you know, riffing a bit here.
Even on the hows, you can imagine, if you do that, you still produce maybe, like, this other, maybe another
opportunity.
You're giving me some ideas here.
Maybe the other opportunity is to work with the CNCF or someone on producing a spec of, like,
once you've done it, like, you know, what's an SBOM?
It's a signed artifact saying, hey, here's what I've done.
Here's what's there.
That someone can take and understand, and then be like, okay, cool, this passes the bar.
So, you know, if you can agree on the whats and hows, then, cool, someone else can do that
and produce a result in an artifact that captures all of that.
And then, just depending on what you're doing, you can still, like, have a
central, a central, like, you know,
reviewer or grader or something across stuff.
So it's not limited to just this data, but, like, expands more broadly.
There's something interesting here.
You could also do this for, like, runtime security, for example, I think.
Cool.
All right.
Adam, I think you gave me, you give me an action item here.
All right.
Go right down.
I'm trying to do that.
What's a podcast?
I mean, I want it, seriously.
So the moment you release it, email me personally.
Email me personally, if you don't mind, because I'm going to read it right away.
Done.
So here's the thing about network security for enterprise.
It's usually a six-month project involving hardware, consultants,
and at least one person whose entire job is managing the VPN.
NordLayer looked at that situation and said,
what if we could do that in 10 minutes?
What is NordLayer?
It's a toggle-ready network security platform built for businesses: VPN,
access control, threat protection, all of this stuff, all in one place.
No hardware requirements.
It's built on zero trust principles, which means only the right people access the right resources, verified every time.
And it's powered by NordLynx, their VPN protocol that's built on WireGuard.
So it's actually fast.
For IT admins, this is the good stuff.
Granular control over who accesses what, from where, on which device, built-in threat detection,
SCIM provisioning for automated onboarding and offboarding, deploy in minutes, and scale in clicks.
They've also partnered with CrowdStrike to bring Falcon endpoint protection to small
and mid-sized businesses.
So you get enterprise-grade,
multi-layered security
without needing an enterprise-sized IT team to run it.
Here's an exclusive offer for you, friends:
up to 22% off NordLayer yearly plans,
plus 10% on top with the coupon code
changelog-10-nordlayer.
Try it risk-free with a 14-day money-back guarantee
at nordlayer.com slash the changelog.
Once again, nordlayer.com slash the changelog,
and use
the coupon code changelog-10-nordlayer for the 22% off NordLayer yearly plans, plus 10%
on top if you use that code. Enjoy.
Let's talk about forward-looking things. You know, we're here in
January, just at the tail end of January going into February. You've done all this work, it's released, it's
out there. We've got table-stakes hardened security out there for Docker images. What is next? You've
got great partners in place. You talked about how you integrate with them, how you work with them,
What is, in your mind, as head of engineering, both leading your team, but also just altruistic
thinking about Docker and its trust level, what do you want to come from all this work?
There's a lot here.
So, in terms of what's next, and, like, you know, the impact that would be
great to get here.
So first on the current stuff, like I said, we should do a lot more, right?
We've got to keep adding packages, expand the ecosystem of stuff we cover a lot more.
Again, system packages, getting into language packages, building stuff at the enterprise layer.
We need to get out our secure policies, like the ability to define policy and enforce it across your entire toolchain.
So we're working on that.
And get secure builds out to everyone; there's a deeper roadmap here for us to go work on and drive.
But maybe to your point, like, what's the sort of maybe the very like impact we want here?
One, I'd love to see, like, I'd love to see this be the default starting point, right?
Like, what is needed to get into place?
When I'm building something new, why not start with Docker hard images?
And what all is needed to achieve that?
And I expect it's a mixture of technical and non-technical things that are needed there.
Like, one is, you know, for someone that's starting, like, where do they learn how to start?
How do we make this the default easy path?
For a lot of people, it's like, I'll just copy what someone else did,
or I'll just do whatever ChatGPT
tells me to do. So, like, how do we go
influence all these places and be the
starting point for everyone? Because that's,
that should be a key thing. If I jump forward a few years in the future, great:
the next popular open source package starts with just
DHI, because, like, why wouldn't it? That's, like, the thing you do.
And so we'd go achieve that. And the reason that matters
to us, apart from the, you know, the altruistic goal,
or just the real goal of, like, making software secure,
like, on the business side,
is that it then very clearly leads to, for enterprises,
they get to buy enterprise-level security from us.
Yeah, just worry a little less.
You know, one less worry for a CISO
or one less worry for a head of engineering to think,
gosh, you know, our supply chain needs to be secured.
Somebody should do something about that.
Let's just trash the engineers, right?
Let's shift left more, okay?
Put more on the developers.
Right.
even more.
That's one way you could go.
Yeah.
Yeah.
Well, this is very much, you know, just start, start green.
Start green, stay green.
Oh, I like that.
Start green, stay green.
Yeah.
There you go.
You should tagline that.
You should put down the website or something.
If you haven't done it yet, T-shirt that, okay?
There you go.
That's how you create defaults, right?
You create a movement.
Do you see that kind of a tangent?
I don't know.
I'm going to riff a little bit.
Do you see that?
I think it was the show billions or something like that on NBC.
I don't even know.
I didn't watch the show,
but I saw the clip where he was talking about lemons.
Do you see this clip ever?
I don't know.
I've seen the show.
I don't recall this clip.
Well, there's a known term out there.
You know, when life gives you lemons, you make lemonade.
He's like, no, no, no.
That's not what you do.
And I'm going to just paraphrase it because I forget.
But he went into this massive, just like deep dive.
Now, you don't make lemonade.
You make lemons scarce.
And he went through this whole story arc of how you make lemons the default,
and you make it a tagline: that's not cool,
that's lemons.
You know, you kind of give it this cachet of sorts.
Yeah.
You know, I think if you do something like that, you create a movement, you create a change.
Yeah.
That's how you create, you start green, you stay green.
And you make that the maneuver.
And it's essentially, you make it not cool.
Yeah.
It's the ultimate snipe.
It's the inevitable.
Like, this is the way.
And the longer you take to get there, the further behind you are.
Yeah, 1,000% agree.
Yeah.
And so if you go do that, and then also, I don't know, we need to go figure out
how to make this be the thing that, like, is the default thing that every agent recommends
and starts with, because that's just how we're writing code now anyway.
Yeah, what agents recommend is still a black box, I guess, in a way, right?
Yes.
And you can RAG it, but that's just from the side.
It's not from the bottom up.
Figuring that out.
What's left?
I know we covered a lot.
I know you have a big role there.
I know that there's a lot happening around
Docker in general. I mean, this is a big
announcement. There's AI things happening.
What is your stance on things in that
area around the Docker world?
That's, you know, so
we've done all this, but like the AI stuff
is clearly a big focus for us.
And like, so it comes together in my mind.
Maybe I'll talk about like how we think about it.
It's like, you know, how I think about this
a bit. So look, Docker,
you know, everyone uses it.
It's a core part of the SDLC.
We help code go from laptop to production.
And we kind of solved for, like, you know, the big growth of apps that happened over the last decade, right?
Cloud native apps.
We moved to the cloud.
Everyone built services.
Containers were a way to do that and drive that.
We solved all that with our packaging, with the Hub for content distribution, with the Engine for building and running.
Well, there's two big shifts happening now, right?
The entire SDLC is changing. You're developing with coding agents, but literally the entire SDLC, how you build, test, publish,
and run code, is going to change and become AI-first.
And second, the kinds of application you're writing are going to be agents now.
The next class of apps are agents.
So I kind of see a very natural thing for us. I think of Docker as: our
job is to help engineers and engineering teams securely build and deliver software.
And so we've done that with the last decade for the way things have worked and now we're
adapting that for AI.
A core thing for us, that we've had with Docker and Docker images and even the security and trust we talked about,
but even more important with AI and agents, I think, is trust.
The thing is, if you're going to trust across this layer here for agents,
that'll help people trust in agents more.
But see, today everyone's writing 10x the code, but no one's shipping 10x the code.
In large part, that's because there's no core trust yet.
Even when I use an agent: how do I let it run unfettered, do I trust all this output, how do I test all of this, how do I get this out,
how do I know what it's building with?
So that's sort of a framework we have.
And we're starting with the runtime environment.
So we've got Docker, Docker Engine.
This is how you build and package
and run isolated environments with containers.
Well, now we've got a slightly different thing with agents.
So with coding agents, we think,
I frankly think it's crazy.
Everyone runs coding agents flat on their machines,
and then just npx installs MCP servers
and runs them on the machine.
Like, talk about supply chain risk.
And so great.
But to be clear, the productivity benefits here are crazy, right?
What I want to do is I want to use an agent and let it run.
I want to run it in YOLO mode by default:
go do everything.
But just give me some security.
So we're going to adapt our engine and build a new engine.
The way I think of it is we're building a new runtime engine for untrusted workloads.
It should be a place where you can go put in like a coding agent.
It needs a computer.
It should be able to go run and do everything it can do, with security guardrails.
So by default, we'll have micro VMs here.
So if you go look, our initial start of this
should have just come out.
If you look at Docker Sandboxes,
you can run Docker Sandbox with Claude, and it spins up Claude in a micro VM.
Around that, we have network proxies that are outside the VM.
So when it tries to go outside of the network,
we run through proxy layer.
We have a credential layer here where you don't have to give it your credentials.
If it just talks to GitHub, we inject the credentials outside,
so the agent doesn't ever need to know your secrets.
You control the files it has access to, control where it goes, but let it run.
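That credential-injection idea can be sketched roughly like this. This is a minimal, hypothetical illustration in Python, not Docker's actual API: the function name, the allow list, and the secret store are all made up for the example. The point is just that the secret lives outside the sandbox and gets attached at the proxy, only for approved hosts.

```python
# Hypothetical sketch: credential injection at a sandbox's egress proxy.
# The agent inside the micro VM never sees the real token; the proxy,
# running outside the VM, attaches it only for approved destinations.

ALLOWED_HOSTS = {"github.com": "GITHUB_TOKEN"}  # host -> secret name

# The secret store lives outside the sandbox, invisible to the agent.
SECRET_STORE = {"GITHUB_TOKEN": "ghp_example_not_real"}

def inject_credentials(request: dict) -> dict:
    """Return a copy of an outbound request with credentials added,
    but only if the destination host is on the allow list."""
    host = request.get("host", "")
    secret_name = ALLOWED_HOSTS.get(host)
    if secret_name is None:
        # Unknown destination: pass it through untouched (or block it).
        return dict(request)
    patched = dict(request)
    headers = dict(patched.get("headers", {}))
    headers["Authorization"] = f"Bearer {SECRET_STORE[secret_name]}"
    patched["headers"] = headers
    return patched
```

The original request is never mutated, so even a compromised agent that inspects its own outbound payload before the proxy sees no token.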
Then for MCP servers, we're adding our MCP security.
We have trusted content, a trusted registry of MCP servers.
These are MCP servers that we vetted.
We run through security hardening here.
We're also building DHI versions of these and a gateway that's plugged in,
and the gateway is where we can start injecting a lot more security rules too.
And so the vision is to build towards a secure runtime for untrusted workloads,
you know, for coding agents.
And we'll make this work both locally and remotely and give you the same thing.
So you can be working locally, but we'll have a cloud.
That'll be coming out soon.
And we actually have some early partners who are working on this already.
But as a developer, I should just start in our Docker Sandbox.
I get all the productivity benefits.
I run my agent unfettered.
I get all the security benefits.
I'll get cloud.
And for enterprise, they can manage governance around all of this stuff.
And then we tie supply chain security in here.
As we're doing this, create by default, build on the secure content we've talked about.
Get that in here, et cetera, et cetera.
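To make "build on the secure content by default" concrete, a build might start from a hardened, minimal base image. This is a sketch only: `docker/hardened-python` is a placeholder name for illustration, not DHI's actual repository path, and the dev/runtime variant split is an assumption about how such images are typically organized.

```dockerfile
# Hypothetical multi-stage build on a hardened, minimal base image.
# Build stage: a "dev" variant that still has pip and build tools.
FROM docker/hardened-python:3.12-dev AS build
WORKDIR /app
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

# Runtime stage: a minimal variant with no shell or package manager,
# running as a non-root user to shrink the attack surface.
FROM docker/hardened-python:3.12
COPY --from=build /install /usr/local
COPY app.py /app/app.py
USER nonroot
ENTRYPOINT ["python", "/app/app.py"]
```

The two-stage split is the key move: build tooling never ships in the image you actually run.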
Let me pause.
This is like a big, big push for us.
And I think, sort of, you know, what should the next...
well, I want to say what the next five years of Docker look like, but we're talking AI.
So let's just say the next year for a minute.
Yeah.
We'll see what happens after that.
Right.
Well, it does make sense, because I'm kind of a --dangerously-skip-permissions kind of guy, you know?
So when I run Claude, that's where I go.
I'm just tired of like saying yes.
You know what I'm saying?
So I just, in a way, YOLO.
I'm not YOLOing on production stuff, but on little tinker things.
But that's dangerous.
Obviously, it's got the word in the flag for a reason.
And so what you're saying is in this future world where we may be going to in this next year around AI, I can do that in a way that is less dangerous because it's containerized.
It's compartmentalized.
It's in its own, you know, micro VM or a sandbox.
And so the danger is really just micro-VM danger, not Adam's MacBook Pro danger. I've seen this.
Well, there's a Hacker News article out on this.
I believe the agent deleted their entire machine.
You know, that's a possibility when you live
dash-dash-dangerously.
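For what it's worth, even without Docker's new sandbox tooling, you can approximate that containment today with standard `docker run` flags. This is a rough sketch, not the Sandboxes product: the image name and agent CLI are hypothetical, and the flags shown are ordinary, long-standing Docker options.

```shell
# Throwaway container: no network, read-only root filesystem,
# no Linux capabilities, and only the current project writable.
docker run --rm -it \
  --network none \
  --read-only \
  --cap-drop ALL \
  --pids-limit 256 \
  --tmpfs /tmp \
  -v "$PWD":/work -w /work \
  my-agent-image my-agent-cli
```

A misbehaving "YOLO mode" session can then only touch the one mounted directory, which is a much smaller blast radius than a laptop home directory, though still weaker isolation than a micro VM.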
Yeah, there's a ton here.
So just to, you know, paint the picture a bit.
So there's the VM isolation,
and we can do that, and that protects your home directory,
your directories, a bit.
But it's more than that, right?
So we can talk about just files for a minute.
Well, you want to protect stuff.
But now, what if you do actually need it to read some folder?
How do you manage that?
So this is different from
containers, because containers you think of
as packaged and static,
and then you scale them out. Here I'm talking about a runtime
that actually needs to be dynamic. That's the nature
of the thing. But I still want security.
I want security statements to be true about it.
So by default,
limited access. When it wants to go out,
we can decide what we approve, what we don't,
and let it run. And when it does,
we control writes to sensitive areas.
When it wants to write out to the network,
we put a gate there; anything it
wants to do outside this box,
we put a gate on. Great, we'll have a network
proxy. That can control not just where it goes, but over time we'll add in deeper rules here
to check it isn't doing something dangerous. And we can work at various levels of the stack as we do that.
Credentials: don't give this thing credentials. You want to pull from GitHub? Don't put that in
the agent. Put that outside, and the agent can just try to talk to GitHub and we'll inject the credentials
as it does. There's a ton of this stuff you have to do. And I don't know if you followed the
Clawdbot stuff. I feel like it started on Saturday or something, and then it's
pretty much...
Yeah, so I've been using it.
And then it's just like insane to follow the speed.
And then I think yesterday there was, I forget the name of the person,
someone wrote a deep security article on how he used the skill hub to get it to install
a skill and exfiltrate data; like a white hat showing what he could do.
It just goes to show: what's happening here, the potential is tremendous.
We all want to use it.
But the security concerns are very, very real.
This is sort of the core of what I think
our job is: enable devs to be deeply productive and add security.
And this is now a space where, like, this has always been true,
but it's even more true.
It's more clear now because the productivity benefits and security threats are just,
you know, 100x what they used to be.
Right.
Everything's faster, more pervasive.
Yes.
We're just flying by the seat of our pants.
And the stakes are high.
And I think it's a perfect moment for Docker, because when you think about
isolated, secure workloads,
I feel like, you know, if not Docker, who else?
Start green, stay green.
Just trying to say it, you know,
a couple more times here.
Yeah.
And so that's what we'll do, local and cloud also.
And I think cloud's critical for us, because, as you said, doing this stuff... I don't know,
I was thinking about this recently.
I've got to the point now where, like, I don't know,
Claude can run for a couple hours for me and produce good-quality code with
tests and everything.
I want that to run off somewhere else,
so I can close my laptop and go do something else,
and I want 60 of these at a time.
But I also want security as it goes to the cloud,
and I want easy DX between local and cloud,
because I want to dig into it.
And so that's the other part of the engine
we're building now,
where the engine will run locally and remotely,
and we can give you security and a seamless
local-remote experience.
I hate to have to bring the IDE back into play,
but I'm not talking about the actual IDE,
I'm talking about the AIDE.
AI development environment.
That's what we need.
Yes.
I mean, you had that for the cloud development environment, CDE, right?
Yes.
Let me take your machine away from you.
Let me put in the cloud.
Let me attach your IDE and or VIM or whatever you want to rock because it's just how it works.
Yep.
You know, we need an AIDE out there where you can just develop in the AI and close your laptop and move along and let it just keep churning.
That's your journey, right?
And then be able to pull it back when you need it, you know, have it here when you need it.
And so, yeah, this is like,
it's like adapting Docker to be the,
to be the engine across the entire SDLC for AI-first software.
And so that's on the development side.
And then we're going to start doing stuff across the rest of the SDLC too.
We've got security agents we're building as we go, and Docker Build, CI,
you'll see us working all these problems here.
The goal is really to help developers securely use agents,
and then help them realize that and actually get their work shipped out
and know that it's safe and secure as they do that.
I definitely think this maneuver you've made with hardened images,
I wouldn't say it cements it necessarily.
It certainly puts the cement down and it's hardening
to just go back to the play on words here.
Because, I mean, I can't imagine deploying software to production
in a cloud that isn't in a container.
I can't imagine, it just doesn't compute for me that way.
Now, I am a systemd kind of guy, so I do YOLO here in my home lab.
So I don't really do a lot of Docker stuff in my home lab.
Sometimes, especially if I'm bringing somebody else's application in, because they've already done the work.
They've containerized it for me.
If I'm doing my own thing here, I'm usually systemd-ing it, and I'm usually just running it bare metal, because it's a VM.
It's a Proxmox box.
It's my home lab.
It's not high stakes.
And it just gives me one less layer between me and the actual machine itself.
I've already got hypervisor, virtual
machine, then Docker, then...
come on, I don't need all those layers.
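And to be fair, systemd alone can get you a surprising amount of isolation without the container layer. Here's a sketch of a hardened home-lab unit; the service name and paths are made up for illustration, but the sandboxing directives themselves are real systemd options.

```ini
# /etc/systemd/system/myapp.service (hypothetical)
[Unit]
Description=Home-lab app hardened with systemd sandboxing directives

[Service]
ExecStart=/usr/local/bin/myapp
# Run as an ephemeral, unprivileged user
DynamicUser=yes
# Mount /usr, /boot, /etc read-only for this service
ProtectSystem=strict
# Hide user home directories entirely
ProtectHome=yes
# Give the service its own private /tmp
PrivateTmp=yes
# Block privilege escalation via setuid binaries
NoNewPrivileges=yes
# The one place it may write persistent state
StateDirectory=myapp

[Install]
WantedBy=multi-user.target
```

`systemd-analyze security myapp.service` will score how locked-down a unit like this actually is.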
That being said, I think that
your intuition is right. I think what you're doing in this
maneuver has
strengthened your position, has strengthened your
trust for me. And I think it just
solidifies the fact that Docker is here
to stay and containers
are the way. Yeah, maybe,
like, you know, another tagline
maybe. It was "build once,
runs anywhere," sort of like Docker, and now it's "build once,
runs anywhere, trusted always" or something.
Like we add trust across
the board. And that's true for AI. And then you're right, like,
the DHI is like the same ethos, right, coming through: that trust and
security are critical, just critical, across your entire software life cycle.
Well, Tushar, you are the head of engineering over there. Anything left to say about what
you do, what you're doing, what you're going to do before we close things out?
No, maybe I'll just, you know, wrap up with, um, the thing about, you know,
sort of where we're headed, what we're doing.
If we're successful, and I have high confidence
we will be, then, one,
I think the entire engineering world,
every engineer, is going to move to using coding agents.
It's just what's going to happen.
I think all that should run in a new secure runtime
for agents using the secure supply chain base that we have.
So you use the runtime, basically adapting our engine
and our content for the AI world.
And I think that is the thing everyone needs.
And I think we're in a position to do that,
and we're all in there.
So I think that's the core focus for us:
solve for the world of people using coding agents.
But as we do that, I think we're also building
the platform that people need for running any agent.
So that'll be the next phase once we get through this:
any agent running, of any kind,
should run on this layer here.
That's what we're focused on doing.
Honestly, it's, you know,
a really fun gig. I feel lucky to have it.
It's fun to be in the middle of all the change
in the software world and to be at a place like Docker that's doing this, right?
Yeah.
It's so critical and so core.
It's a really fun time to be a developer because there's so much change.
It's also quite scary this change.
You know, there's a lot of, there is some uncertainty.
While there's so much potential, there's also so much uncertainty and so much pause,
but also so much not pause.
Yeah.
I mean, it's such a, such a conundrum, really, to think about the state of things.
But like everything is literally changing.
We didn't expect this to be where it's at a year ago.
Like a year ago, the conversation was this direction, but not where it's at currently.
And it's clear that everything from the bottom up is changing about how we develop software,
how we deploy software, how we have to secure it, et cetera.
It's a fun time because we get to rebuild it all.
So if you're an old hat, you're like, okay, that sucks.
I don't want to learn new stuff.
But if you're a new hat, you're like, sweet, let's build some cool stuff.
But it is a wild ride we're on right now.
I mean, everything, everything is changing.
It is a wild ride.
Also, at least for me, like, you know, look, I struggle to answer:
hey, where will engineering be, where will software development be in two years?
I don't know.
I can estimate it.
I can guess a bit.
But right now, what's really fun is like, fundamentally, I think people become engineers to, like, build stuff, like, solve problems and innovate.
And like coding has been a key way to do that.
We just get to do that at, like, 10x, 100x the throughput now.
It's just fun.
I can like have an idea and go.
And that's just that's just fun.
Couldn't have said it better myself.
Tushar, thank you so much for sharing time with us, doing this hard work,
leading the charge and being cool.
Thank you.
Thank you all.
It's a lot of fun.
Good talk with you.
All right.
That is your change log interview for this week.
We hope you enjoyed it.
And we have a members-only bonus segment for our ChangeLog Plus Plus People.
You get an extra 10 minutes of us talking Ralph, talking OpenClaw, and talking role changes.
It's sort of the skills of, like, tech leads and tech managers and PMs, right?
We are the PMs now.
We are the PMs now.
We've always been the PMs.
Join today at changelog.com slash plus plus, it's better.
Thanks again to our partners at Fly.io,
to our beat freak in residence, Breakmaster Cylinder, and to you for listening.
We appreciate you hanging out with us each week.
That's all for now, but we'll be back on Friday with Amel Hussein talking career renaissance,
aerospace, my return to the blogosphere, and more.
Looking forward to it, and talk to you then.
