Screaming in the Cloud - The New Docker with Donnie Berkholz
Episode Date: June 3, 2021

About Donnie
Donnie is VP of Products at Docker and leads product vision and strategy. He manages a holistic products team including product management, product design, documentation, and analytics. Before joining Docker, Donnie was an executive in residence at Scale Venture Partners and VP of IT Service Delivery at CWT, leading the DevOps transformation. Prior to those roles, he led a global team at 451 Research (acquired by S&P Global Market Intelligence), advised startups and Global 2000 enterprises at RedMonk, and led more than 250 open-source contributors at Gentoo Linux. Donnie holds a Ph.D. in biochemistry and biophysics from Oregon State University, where he specialized in computational structural biology, and dual B.S. and B.A. degrees in biochemistry and chemistry from the University of Richmond.

Links:
Docker: https://www.docker.com/
Twitter: https://twitter.com/dberkholz
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, Chief Cloud Economist at the
Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
This episode is sponsored in part by Thinkst.
This is going to take a minute to explain, so bear with me.
I linked to an early version of their tool, canarytokens.org, in the very early days of my newsletter. And what
it does is relatively simple and straightforward. It winds up embedding credentials, files, that
sort of thing, in various parts of your environment, wherever you want to. It gives you fake AWS API
credentials, for example. And the only thing that these things do is alert you whenever someone
attempts to use those things. It's an awesome approach. I've used
something similar for years. Check them out. But wait, there's more. They also have an enterprise
option that you should be very much aware of. Canary.tools. You can take a look at this,
but what it does is it provides an enterprise approach to drive these things throughout your
entire environment. You can get a physical device that hangs out on your network
and impersonates whatever you want to.
When it gets NMAP scanned or someone attempts to log into it
or access files on it, you get instant alerts.
It's awesome.
If you don't do something like this,
you're likely to find out that you've gotten breached the hard way.
Take a look at this.
It's one of those few things that I look at and say,
wow, that is an amazing idea. I love it. That's canarytokens.org and canary.tools. The first one is free. The second
one is enterprise-y. Take a look. I'm a big fan of this. More from them in the coming weeks.
This episode is sponsored in part by our friends at Lumigo. If you've built anything out of serverless,
you know that if there's one thing that can be said universally about these applications, it's that it turns every outage into a murder mystery.
Lumigo helps make sense of all of the various functions that wind up tying together to build applications.
It offers one-click distributed tracing so you can effortlessly find and fix issues in your serverless and
microservices environment. You've created more problems for yourself? Make one of them go away.
To learn more, visit Lumigo.io. Welcome to Screaming in the Cloud. I'm Corey Quinn.
Today, I'm joined by Donnie Berkholz, who's here to talk about his role as the VP of Products at
Docker, whether he knows it or not.
Donnie, welcome to the show. Thanks. I'm excited to be here.
So the burning question that I have that inspired me to reach out to you is fundamentally and very
bluntly and directly: Docker was a thing in, I want to say, the 2015-ish era, when someone gave a parody talk for five minutes where they got up and said nothing but the word Docker over and over again in a bunch of different tones, and everyone laughed, because for a while it seemed like that was what about 50% of tech conference talks were. It's years later now; it's 2021 as of the time of this recording. How is Docker relevant today?
Great question, and I think one that a lot of people are wondering about.
The way that I think about it, and the reason that I joined Docker about six months back now,
was, yeah, I saw the same thing you did in the early kind of 2010s, 2013 to 2016 or so.
Docker was a brand new tool, beloved of developers and DevOps engineers
everywhere. And they took that, gained the traction of millions of people, and tried to pivot really
hard into taking that bottom-up open-source traction and turning it into a top-down kind of sell to the CIO and the VP: operations, orchestration, management, the kind of classic big-company approach.
And that approach never really took off to the extent that would let Docker become an explosive success commercially in the same way that it did across the open source community
and building out the usability of containers as a concept. Now, new Docker, as of November 2019,
divested all of the top-down operations production environment stuff to Mirantis
and took a look at what else there was. And the executive staff at the time, the investors thought
there might be something in there, that it was worth making a bet on the developer-facing parts of Docker to see if the things that built the developer love in the first place were commercially
viable as well. And so looking through that, we had things left like Docker Hub, Docker Engine,
things like Notary, and Docker Desktop. So a lot of the direct tools that developers are using
on a daily basis to get their jobs done
when they're working on modern applications,
whether that's 12-factor,
whether that's something they're trying to lift
and shift into a container,
whatever it might look like,
it's still used every day.
And so the thought was,
there might be something in here.
Let's invest some money,
let's invest some time
and see what we can make of it
because it feels promising.
And fast forward a couple of years, we're now early 2021. We just announced our Series B investment because the past year has shown that there's something real there. People are using
Docker heavily. People are willing to pay for it. And where we're going with it is much higher level than just containers or just registry.
I think there's a lot more opportunity there. When I was watching the market as a whole drifting
toward Kubernetes, what you can see, to me, is a lot like a repeat of the old OpenStack days
where you've got tons of vendors in the space, it's extremely crowded, everybody's trying to
sell the same thing
to the same small set of early adopters
who are ready for it.
Whereas if you look at the developer side of containers,
it's very sparsely populated.
Nobody's gone hard after developers
in a bottom-up self-service kind of way
and helped them adopt containers and helped them be more productive doing so.
So I saw that as a really compelling opportunity
and one where I feel like
we've got a lot of runway ahead of us. Back in the early days, there's a bit of a history lesson
that I'm sure you're aware of, but I want to make sure my understanding aligns with yours. Docker was transformative when it was announced, I want to say 2012 in Santa Clara,
but don't quote me on that one. And effectively what it promised to solve was, I mean,
containerization was not a new idea.
We had that with LPARs on mainframes way before my time
and it's sort of iterated forward ever since.
What it fundamentally solved
was the tooling around those things
where suddenly it got rid of the problem of,
well, it worked on my machine
and the rejoinder from the grumpy ops person,
which I very much was,
was great, then back up your email
because your laptop's about to go to production.
By having containers, suddenly you would have
an environment or an application
that was packaged inside of a mini environment
that was able to be run basically anywhere.
And it was write once, deploy basically as many times as you want.
And over time, that became incredibly interesting,
not just for developers, but also for folks who were trying to migrate applications. You can stuff basically anything into a container. Whether you should or not is a completely separate conversation that I am going to avoid by a wide margin. Am I right so far in everything that I have said there?
Yep, absolutely. Awesome. So then we have this container runtime that handles the packaging piece.
And then people replaced Docker in that cherished position in their hearts, which is the thing that they talk about even when you beg them to stop, with Kubernetes, which is effectively an orchestration system for containers, invariably Docker.
And now people are talking about that constantly and consistently.
If we go back to looking at similar things in the ecosystem, people used to care tremendously
about what distribution of Linux they ran. And then, well, okay, if not the distro,
definitely OS wars of, is this Windows or is this a Linux workload? And as time has gone on,
people care about that less and less, where they just want the application to work. They don't
care what it's running in under the hood. And it feels like the container runtime has gotten to
that point as well. And soon, my belief is that we're going to see the orchestrator slip below
that surface level of awareness of things people have to care about. If for no other reason than if you look at Kubernetes today,
it is fiendishly complicated,
and that doesn't usually last very long in this space
before there's an abstraction layer built that compresses all of that
into something you don't really have to think about,
except for a small number of people at very specific companies.
Does that in any way change the, I guess, the
relevance of Docker to developers today? Or am I thinking about this the wrong way by viewing
Docker as a pure technology instead of an ecosystem? I think it changes the relevance of Docker much
more to platform teams and DevOps teams, as much as I wish that wasn't a word or a term,
operations groups that are running the Kubernetes environments or that are running
applications at scale and production, where maybe in the early days they would run Docker
directly in prod. Then they moved to running Docker as a container runtime within Kubernetes.
And more recently, they've moved to the core of Docker, which is containerd, as a replacement for that overall Docker engine, which Kubernetes had used through the Docker shim.
So I think the change here is really around
what does that production environment look like?
And where we're really focusing our effort
is much more on the developer experience.
I think that's where Docker found its magic
in the first place,
was in taking incredibly complicated technologies
and making
them really easy in a way that developers love to use. So we continue to invest much more on
the developer tools part of it, rather than what does the shape of the production environment look like, and how do we horizontally scale this to hundreds or thousands of containers? Not interesting
problems for us right now. We're much more looking at things like how do we keep it simple for developers so they can focus on a simple application? But it is an application
and not just a container. So we're still thinking of moving to things that developers care about,
right? They don't necessarily care about containers. They care about their app.
So what's the shape of that app and how does it fit into the structure of containers? In some
cases, it's a single container. In some cases, it's multiple containers.
And that's where we've seen Docker Compose pick up as a hugely popular technology.
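For anyone who hasn't used Compose, a minimal sketch of the multi-container pattern being described here might look like the following. All service names and images are illustrative assumptions, not anything from the episode, and depending on the installed version the command may be `docker-compose up` rather than `docker compose up`:

```shell
# Hypothetical two-service application defined in one Compose file.
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:alpine          # placeholder front end
    ports:
      - "8080:80"
    depends_on:
      - api
  api:
    image: ghcr.io/example/api:latest   # placeholder back end image
EOF

# One command brings the whole application up as a logical unit;
# `docker compose down` tears it back down.
docker compose up -d
```

The point of the pattern is exactly what's described above: the application, not the individual container, becomes the unit you start, stop, and share.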
You know, when we look at our own surveys,
when we look at external surveys,
we see on the order of, you know,
two-thirds of people using Docker are using Compose to do it,
either for ease of automation and reproducibility
or for ease of managing an application
that spans across multiple
containers as a logical service rather than, you know, try and shove it all in one and hope it
sticks. I used to be relatively, I guess, cynical about Docker. In fact, one of my first breakout
talks started life as a lightning talk called Heresy in the Church of Docker, where I just
came up with a list of a few things that were challenging and that I didn't fully understand.
It was mostly jokes, and the first half of it
was set to the backstory of an embarrassing
chocolate coffee explosion that a boss of mine once had.
And that was great.
Like, what's the story here?
What's the relevance?
Just a story of someone didn't understand
their failure modes of containers in production.
Cue laugh.
And that was great. And someone came up to me and said, hey, can you give the full
version of that talk at ContainerCon? And to which my response was, there's a full version?
Followed immediately by, absolutely. And it sort of took life from there. Now, I want to say that
talk hasn't aged super well because everything that I highlighted in that talk has since been
fixed. I was just early in being snarky,
and I genuinely, when I gave that first version, didn't understand the answers. And I was expecting
to be corrected vociferously by an awful lot of folks. Instead, it was, yeah, these are challenges,
at which point I realized, holy crap, maybe everyone isn't 80 years ahead of me in technical
understanding. And for better or worse, it set an interesting
tone. Absolutely. So what do you think people really took out of that talk that surprised you?
The first thing that I think that, from my perspective, that caught me by surprise was
that people are looking at me as some sort of thought leader, their term, not mine. And my
response was, holy crap, I'm not a thought leader. I'm just a loud
white guy in tech. And yep, those are pretty much the same thing in some circles, which is its own
series of problems. But further, people were looking at this and taking it seriously as in,
well, we do need to have some plans to mitigate this. And there are different discussions that
went back and forth with folks coming up with various solutions to these things. And my first awareness, at least, that pointing out problems where you don't know the answer
is not always a terrible thing. It can be a useful thing as well. And it also let me put a bit of a
flag there as far as a point in time, because looking back at that talk, it's naive. I've done
a bunch of things since then with Docker. I mean, today I run Docker on my overpowered Mac to have a container that's listening with rsyslog, and I have a bunch of devices around the house
that are spitting out their logs there. So when things explode, I have a rough idea of what
happened. It solves weird problems. I wind up doing a number of deployment processes here for
serverless nonsense via Docker. It has become this pervasive technology that if I were to take an
absolutist stance of, oh, Docker is terrible, I'm never going to use Docker, it's still here for me and it's
still available and working.
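A rough sketch of the kind of setup Corey describes, assuming a stock syslog-ng image from Docker Hub; the image, container name, ports, and paths are all my assumptions, not his actual configuration:

```shell
# Run a syslog listener in a container. Devices on the LAN point their
# remote-logging settings at this host (514/udp for classic syslog,
# 601/tcp for reliable syslog).
docker run -d \
  --name house-syslog \
  --restart unless-stopped \
  -p 514:514/udp -p 601:601/tcp \
  -v "$PWD/logs:/var/log/syslog-ng" \
  balabit/syslog-ng:latest
```

With the log directory bind-mounted to the host, the collected logs survive the container being replaced or upgraded.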
But I want to get back to something you said a minute ago, because my use of Docker is
very much the operations sysadmin with title inflation, whatever we're calling them this
week, that use case in that model.
Who is Docker viewing as its customer today? Who as a company
are you identifying as the people with the painful problem that you can solve?
For us, it's really about the developer rather than the ops team. And specifically,
I'll say the development team. And this to me is a really important distinction
because developers don't work in
isolation. Developers collaborate together on a daily basis. And a lot of that collaboration is
very poorly solved. You jump very quickly from I'm doing a remote pairing in my code editor to
it's pushed to GitHub and it's now instantly rolling into my CI pipeline on its way to
production. There's not a lot of intermediate ground there.
So when we think about how developers are trying to build,
share, and run modern applications,
I think there's a ton of white space in there.
We've been sharing a bunch of experiments.
For anybody who's interested,
we do community all-hands every couple of months
where we share, here's some of the things we're working on.
And importantly to me, it's focused on problems. Everything you were describing in that Heresy talk was about problems that exist, and pointing out problems. For us, when we talk to developers using Docker, those problems form the core of our roadmap. The problems we hear the most often as the most frustrating, the most painful, guess what, those are the things we're going to focus on as great opportunities for us. And so we hear people
talking about things like they're using Docker or they're using containers,
but they have a really hard time finding the good ones and they can't create good ones. They are
just looking for more guidance, more prescription, more curation to be able to figure out where is
this good stuff amidst the millions of containers out there? How do I find the ones that are worth
using for me as an individual, for me as a team and for me as a company? I mean, all those have different levels
of requirements and expectations associated with them. One of the perceptions I've had of the DevOps
movement, as someone who started off as a grumpy Linux systems administrator, is the sense that
they're trying to converge application developers with infrastructure engineers at some point.
And I started off taking a very,
oh, I'm not a developer, I don't write code.
And then it was, huh, you know,
I am writing an awful lot of configuration,
often in something like Ruby or Python.
And of course, now it seems like everyone
has converged as developers with the lingua franca
of all development everywhere, which is, of course, YAML.
Do you think there's a divide
between the ops folks and the application developers in 2021? You know, I think it's a
long journey. Back when I was at RedMonk, I wrote up a post talking about the way those roles were
changing, the responsibilities were shifting over time. And you step back in time, and it was very
much, you know, the developer owns the dev stack, the local stack, or if there's a remote developer environment, they're 100% responsible for it.
And the ops team owned production, 100% responsible for everything in that stack.
And, you know, over the past decade, that's clearly been evolving.
And when I think about it, it's been rotating.
And, you know, first we saw infrastructure teams, ops, take more ownership for being a platform.
A lot of cases either guided by the emerging infrastructure automation config management tools,
like CFEngine back in the early '90s, which turned into Puppet and Chef,
which turned into Ansible and Salt, which have now continued to evolve beyond those.
A lot of those enabled that rotation of responsibilities where infrastructure could be a platform rather than an ops team that had to take ownership over all production.
And that was really, to me, ops moving into a development mindset and development capabilities and development skill sets. And then development teams were starting to have the ability to take ownership for their code running in production without having to take ownership over the full production stack and all the complexities
involved in, you know, the hardware and the data centers and the colos or, you know, the public
cloud production environments, whatever they may be, they could still own their code in production
and get that value out of understanding how that was used, the value out of fast iteration cycles,
without having to own it all everywhere all the time and have to focus their time on things that
they had really no time or interest to spend it on. So those things have both been happening to me
not in parallel quite. I think DevOps, in terms of ops learning development skill sets and applying
those, has been faster than development
teams who were taking ownership for that kind of full life cycle and that iteration all the way to
production and then back around. Part of that is cultural in terms of what developer teams have
been willing to do. Part of it is cultural in terms of what the old operations teams now becoming
platform engineering teams have been willing to give up and their willingness to sacrifice control.
You know, there's always good times like PCI compliance.
And how do you fight those sorts of battles?
So there's a lot of barriers in the way.
But to me, those have all been happening alongside each other, time-shifted a little bit. And then really the core of it was, as those two groups become increasingly similar in how they think and how they work,
breaking down more of the silos in terms of how they collaborate effectively
and how they can help solve each other's problems instead of really being separate worlds.
This episode is sponsored by ExtraHop.
ExtraHop provides threat detection and response for the enterprise, not the starship.
On-prem security doesn't translate well to cloud or multi-cloud environments,
and that's not even counting IoT.
ExtraHop automatically discovers everything inside the perimeter,
including your cloud workloads and IoT devices,
detects these threats up to 35% faster, and helps you act immediately.
Ask for a free trial of Detection and Response for AWS today at extrahop.com/trial.
Docker was always described as a DevOps tool. And, well, what is DevOps? Oh, it's about breaking down the silos between developers and the operations folks. Cool. Great. Well, let's try this. And I used to run DevOps
teams. I know, I know. Don't email me. When you're picking your battles, team naming is one of the
last ones I try to get to. But then we would, okay, I'm going to get this application that is
in a container from development. Cool. Don't look inside of it. It's just going to make you sad.
But take these containers and put them into production, and you can manage them regardless of what that application is actually doing. It felt like it wasn't so much breaking down a wall as it was giving a mechanism to hurl things over that wall. Is that just because I worked in terrible places with bad culture? If so, I don't know that I'm very alone in that, but that's what it felt
like. It's a good question. And I think there's multiple pieces to that. It is important. You
know, I just was rereading the Team Topologies book the other day, which talks about the idea
of a team API and how do you interface with other teams as people, as well as the products or
platforms they're supporting. And I think there is a lot of value in having the ability to throw things over a wall or
down a pipeline, however you think about it, in a very automated way rather than going
off and filing a ticket with your friendly ITSM instance and waiting for somebody else
to take action based on that.
So there's a ton of value there.
The other side of it, I think,
is more of the consultative role rather than the take work from another team and then go do another
thing with it and then pass it to the next team down and then, you know, so on unto eternity,
which is really how do you take the expertise across all those teams and bring it together
to solve the problems when they affect a broader radius of
groups. And so that might be when you're thinking about designing the next iteration of your
application, you might want to have somebody with more infrastructure expertise in the room,
depending on the problems you're solving. You might want to have somebody who has a really
deep understanding of your security requirements or compliance requirements if you're redesigning
an application that's dealing with credit card data.
All those are problems that you can't solve in isolation.
You have to solve them by breaking down the barriers because the alternative is you build
it and then you try and release it.
And then you have a gatekeeper that holds up a big red flag and delays your release by six months so you can go back and fix all the crap you forgot to do in the first place.
While we're on the topic of being able to, I guess, use containers as sort of these agnostic
components, I suppose, and the effects that that has, I'd love to get your take on this idea that
I see that's relatively pervasive, which is I can build an application inside of containers,
and that is, let's be clear, that is the way an awful lot of containers are being built today. If people are telling you otherwise, they're wrong. And then
just run it in any environment. You've built an application that is completely cloud agnostic,
and what cloud you're going to run it in today, or even your own data center, is purely a question
of either, what's the cheapest one I can use today? Or what is my mood this morning?
And you press a button and the application lives in that environment flawlessly, regardless of what
that provider is. Where do you stand on that, I guess, utopian vision? Yeah, I think it's almost
a dystopian vision, the way I think about it, which is the least common denominator approach to portability
limits your ability to focus on innovation rather than focusing on managing that portability layer.
There are cases where it's worth doing because you're at significant risk for some reason of
focusing on a specific portability platform versus another one. But the bulk of the time to me,
it's about how do you
focus your time and effort where you can create value for your company? Your company doesn't care
about containers. Your company doesn't care about Kubernetes. Your company cares about getting value
to their customers more quickly. So whatever it takes to do that, that's where you should be
focusing as much time and energy as possible. And so, you know, the container interface is one API
of an application, right? One thing that enables you to take it to different places.
But there's lots of other ones as well.
I mean, no container runs in isolation.
I think there's some quote, I forget the author, but no human is an island at this point.
No container runs in isolation by itself.
No group of containers do either.
They have dependencies, they have interactions.
There's always going to be a lot more to it of how do you interact with other services?
How do you do so in a way that lets you get
the most bang for your buck and focus on differentiation?
And none of that is going to be from
only using the barest possible infrastructure components
and limiting yourself to something that feels like
shared functionality across multiple cloud providers or multiple other platforms.
This gets into sort of the battle of multi-cloud.
My position has been that, first, there are a lot of vendors that try and push back against the idea of going all-in on one provider
for a variety of reasons that aren't necessarily ideal.
But the transparent thing that I tend to see, or at least I believe that I see,
is that, well, if fundamentally you wind up going all in on a provider, an awful lot of third-party
vendors will have nothing left to sell you. Whereas as long as you're trying to split the
difference and ride multiple horses at once, well, there's a whole lot of painful problems in there
that you can sell solutions to. That might be overly cynical, but it's hard not to see some stories like that.
Now, that's often been misinterpreted as that I believe that you should always have every workload
on a single provider of choice, and that's it. I don't think that makes sense either. I mean, I
have my email system run in G Suite, which is part of Google Cloud for whatever reason,
and I don't use Amazon's offering for the same because I'm not nuts. Whereas my infrastructure does indeed live in AWS, but I also pay for GitHub as an example,
which is also in the Azure business unit, because of course it is. And the different workloads live
in different places. That's a naive oversimplification, but in large companies,
different workloads do live in different places. Then you get into stories such as acquisitions of different divisions that are running in completely different
providers. I don't see any real reason to migrate those things, but I also don't see a reason why you have to have single points of control that reach into all of those different application workloads at the same time. Maybe I'm oversimplifying and I'm not seeing a whole subset of the world. Curious to hear where you stand on that one.
Yeah, it's an interesting one.
I definitely see a lot of the same things that you do, which is lots of different applications each running in their own place.
A former colleague of mine at 451 used to call it best execution venue. And what I don't see, or almost never see, is that unicorn of the single application that seamlessly migrates across multiple different cloud providers, or does the whole cloud-bursting thing where you've got your on-prem or colo workload and it seamlessly pops over into AWS or Azure or GCP or wherever else during peak capacity season, like tax season if you're
a tax company or something along those lines, you almost never see anything that realistically
does that because it's so hard to do.
And the payoff is so low compared to putting it in one place where it's the best suited
for it and focusing your time and effort on the business value part of it, rather than on the cost-minimization and risk-mitigation part of, if you have to move from one cloud provider to another, what is it going to take to do that? Well, it's not going to be that easy. You'll get it done, but it'll be a year and a half later by the time you get there, and your customers might not be too happy at that point.
One area I want to get at is, you talk now about addressing developers where they are and solving problems that they have.
What are those problems?
What painful problem does a developer have today as they're building an application that Docker is aimed at solving?
When we put the problems that we're hearing from our customers into three big buckets, we think about that as building, sharing, and running a modern application.
There's lots of applications out there.
Not all of them are modern.
So we're already trying to focus ourselves into a segment of those groups
where Docker is really well-suited
and containers are really well-suited to solve those problems
rather than something where you're kind of forklifting it in
and trying to make it work to the best of your ability.
So when we think about that,
what we hear a lot of is three common themes.
Around building applications,
we hear a lot about developer velocity,
about time being wasted,
both sitting at gatekeepers,
but also searching for good, reasonable components.
So we hear a lot of that around building applications,
which is give me developer velocity,
give me good, high-trust content, help me create the good stuff so that when I'm publishing the app, I can easily
share it and I can easily feel confident that it's good. And on the sharing note, people consistently
say that it's very hard for them to stay in sync with their teams if there's multiple people
working on the same application or the same part of the code base. It's really challenging to do that in anything resembling a real-time
basis. You've got the repository, which people tend to think of, whether that's a container
repository or whether that's a code repository, they tend to think of that as, I'm publishing
this. But where do you share, where do you collaborate on things that aren't ready to
publish yet?
And we hear a lot of people who are looking for that sort of middle ground of how do I keep in sync with my colleagues on things that aren't ready to put that stamp on where I feel like
it's done enough to share with the world. And then the third theme that we hear a lot about is
around running applications. And when I distinguish this against old Docker,
the big difference here is we don't wanna be
the runtime platform in production.
What we wanna do is provide developers
with a high fidelity consistent kind of experience,
no matter which environment they're working with.
So if they're in their desktop,
if they're in their CI pipeline,
or if they're working with a cloud hosted developer
environment or even production, we wanna provide them with that same kind of feeling experience. And so, you know,
an example of this was last year we built these Compose plugins that we call Code-to-Cloud
plugins, where you could deploy to ECS or you could deploy to ACI, Azure Container Instances,
in addition to being able to do a local compose up.
And all of that gives you the same kind of experience
because you could flip between one Docker context
and the other and run essentially the same set of commands.
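The flow being described looked roughly like this at the time; the context names here are placeholders, and the ECS and ACI context types were part of Docker's cloud integrations of that era:

```shell
# Create cloud-backed contexts (each wraps existing AWS or Azure credentials).
docker context create ecs my-ecs
docker context create aci my-aci

# The same Compose file, the same command, three different targets.
docker context use default && docker compose up   # local Docker engine
docker context use my-ecs  && docker compose up   # Amazon ECS
docker context use my-aci  && docker compose up   # Azure Container Instances
```

The design choice is that the target environment lives in the context, not in the application definition, which is what makes flipping between local and cloud feel like the same experience.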
So we hear people trying to deal with productivity,
trying to deal with collaboration,
trying to deal with complex experiences
and trying to simplify all of those.
So those are really the big areas we're looking at
is that build, share, run themes.
What does that mean for the future of Docker?
What is the vision that you folks are aiming at
that goes beyond just the, I guess,
I'm not trying to be insulting when I say this,
but the pedestrian concerns of today?
Because viewed through the lens of the future,
looking back at these days,
every technical problem we have
is going to seem on some level like it's,
oh, it's easy, there's a better solution.
What does Docker become in 15 years?
You know, I think there's a big gap
between where people edit their code,
where people save their source code,
and that path to production.
And so we see ourselves as providing really valuable development
tools that we're not going to be the IDE and we're not going to be the pipeline, but we're going to
be a lot of that glue that ties everything together. One thing that has only gotten worse
over the years is the amount of fragmentation that's out there in developer tool chains,
developer pipelines, similar with the rise of microservices over the past decade. It's only gotten more complicated, more languages,
more tools, more things to support, and an exponentially increasing number of interconnections
where things need to integrate well together. And so that's the problem that really we're solving is
all those things are super complicated, huge pain to make everything
work consistently. And we think there's a huge amount of value there in tying that together for
the individual, for the team. Donnie, thank you so much for taking the time to speak with me today.
If people want to learn more about what you're up to, where can they find you?
I am extremely easy to find on the internet. If you Google my name, you will track down
probably 10 different ways of getting in touch.
Twitter is the one where I tend to be the most responsive.
So please feel free to reach out there.
My username is dberkholz.
And we will, of course, put a link to that in the show notes.
Thanks so much for your time.
I really appreciate the opportunity to explore your perspective on these things.
Thanks for having me on the show.
And thanks, everybody, for listening.
Donnie Berkholz, VP of Products at Docker. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an insulting comment that explains exactly why you should be packaging up that comment and running it in any cloud provider just as soon as you get Docker's command-line arguments squared away in your own head.
If your AWS bill keeps rising
and your blood pressure is doing the same,
then you need the Duckbill Group.
We help companies fix their AWS bill by making it smaller and less horrifying.
The Duckbill Group works for you, not AWS.
We tailor recommendations to your business, and we get to the point.
Visit duckbillgroup.com to get started.
This has been a HumblePod production.
Stay humble.