The Changelog: Software Development, Open Source - Docker and Linux Containers (Interview)
Episode Date: May 17, 2013. Adam Stacoviak and Andrew Thorp talk about Docker, Linux containers, and dotCloud with Solomon Hykes - Founder & CEO of dotCloud and the creator of Docker.
Transcript
Welcome back, everybody.
This is The Changelog.
We're a member-supported blog and podcast that covers what's fresh and what's new in open source.
This show is hosted by myself, Adam Stacoviak, and Andrew Thorp.
Andrew, say hello.
Hey, how's it going, man?
Today's a good day, man.
It's a very, very good day.
Very exciting day. Big day.
Yeah, big, big day.
Big day.
You can tune into this show live every Tuesday.
That's right, every Tuesday at 5 p.m. Central Standard Time,
right here on 5by5.
You can check out our past shows at 5by5.tv/changelog.
And this episode is number 89,
not zero dot eight dot nine.
Can you believe that?
No, it's hash 89. Hash 89.
Oh, inside joke.
Yeah. And for those who've been listening to the Changelog forever and forever and forever,
and for those who went and listened to, or read, today's announcement,
you know, we don't semver our podcast.
So we dropped that, moved to 5by5, and now this is just episode 89.
And today we're joined by Solomon Hykes, a hacker, of course, and entrepreneur at dotCloud.
Hey, guys. It's great to be here.
It is great.
So, Solomon, where do we begin to tell the story of DotCloud
and what you're doing with Docker?
Where do we begin?
Yeah, where do we begin?
Can we start with you, maybe?
Maybe start with you, give an intro of who you are
and kind of what you do at DotCloud?
Sure.
So I'm the founder, and dotCloud is my baby. It's been my baby for the last five
years. So about five years ago, I quit my job and set out to work on all things DevOps and
deployment automation in the cloud and that kind of stuff. And I guess I could give you the long version of that. But, you know,
fast forward to 2010, after about two years of tinkering and running a bootstrapped consulting
business, toying with interesting technology and experimenting, eventually we launched a product: a platform as a service.
So, you know, if you know products like Heroku or Google App Engine or Microsoft Azure,
we launched a platform that lets developers upload their web application very easily,
and the platform takes care of deploying it, configuring the servers, scaling it,
and all that fun stuff, freeing the developer
of that burden so that the developer can do what he does best, which is
write awesome code. So we launched that in 2010,
and our claim to fame at the time was that
we were the first platform as a service to support multiple languages.
So at the time, Heroku was really big in the Ruby community.
Google App Engine was doing interesting stuff with Python, but you had to heavily modify your app.
You had to use their custom APIs, et cetera.
And there were a lot of developers out there who were eager to get their hands on something similar,
something that made their lives super easy with their respective languages,
and we delivered that, and there was a lot of buzz around that.
Fast forward to late 2012 and early 2013, and we're doing that.
We're growing the business, growing the user base, et cetera.
And all this time, we're kind of – we're getting more and more requests for something very specific.
We're getting a lot of people interested in the secret sauce, not the platform itself, but the ingredients to build your own platform.
Sometimes because people would not agree with the way we did pricing,
or usually they had custom needs.
You can't be the provider for every developer out there,
every company that needs to deploy an app.
That's a little bit ambitious.
Anyway, we started getting requests for the ingredients to building your own platform.
Eventually, we decided that that was a smart
move to make, and we started open sourcing stuff.
One of the components we open sourced is a project called Docker, which you could basically call
our secret sauce. But, you know, from a technology point of view, you could argue
it's not that big of a deal, but there's a lot of work that goes into taking a kind of arcane and complicated technology and delivering it in a simple way to developers.
So anyway, that was kind of like the super fast-forward version.
Super fast-forward version.
Yeah.
We can go in like a thousand different directions here, but... Right. You kind of glossed over the fact, though,
that, like, for those who may not have caught up with dotCloud
and with what they're doing with Docker...
I mean, you gave a talk at PyCon not long ago,
which is kind of when a lot of this buzz began to really hype up for you.
I wouldn't say that's the start of your story,
but, you know, you had this talk called The Future of Linux Containers, and you wowed everybody.
Everybody was wowed by what you delivered,
and if Kenneth was on this show right now,
he'd be saying the same things, I'm sure, since he was there at PyCon.
So tell us about that.
What do you mean by The Future of Linux Containers?
So what I mean by The Future of Linux Containers is,
well, I guess I should start by describing what Linux containers are.
There was this thing called virtualization.
And, you know, it lets you basically create a virtual computer.
And that had a lot of benefits for companies that operated a lot of computers because you could consolidate hardware.
And instead of spending tens of thousands
for dozens of servers,
all of a sudden you can spend much less
for a smaller number of actual computers
and pretend you had more, right?
And on top of that,
the promise of virtual machines
was that developers could package their application along with all the dependencies,
everything from the libraries you use, the app server that runs your app,
the exact version of the exact system library that your application depends on,
all the way down to the distro and the underlying system,
the whole thing packaged in a single object, something that you can hand to someone else
and say, here, run this. And it's reusable, and that's kind of the key to reliable testing.
You know, it's the key to component reuse between projects, it's the key to making money
with your software, because all of a sudden other people can pay you to use it, to reuse it.
There was a lot of excitement initially around what you could do with VMs.
And that part of using VMs never really materialized because VMs aren't really – that's not really the point of VMs.
From a technology point of view, they have a few downsides.
They're big.
You know, they take a lot of disk space.
Running them uses up a lot of memory, a lot of CPU.
There's a lot of overhead.
If you've ever, you know, simulated a complex system using VMware or VirtualBox on a laptop, you know what I'm talking about.
The battery goes away really fast.
They're not really portable.
Anyway, so it never really took off as the way developers share their work.
I don't ever remember putting my code into a VM and handing it to a lot of people and saying,
hey, here's the official way to use my code, right? So we're still in this world of fragmented ways to package and share and reuse code.
And we're still in dependency hell and these kinds of problems.
Python developers have Python packages.
Ruby developers have Ruby packages.
Everyone has to deal with Ubuntu packages or Debian packages or Red Hat packages, etc.
And then you have to compile stuff by hand sometimes.
It's just a mess.
And enter Linux containers.
So, what Linux containers are: they are the Linux kernel's answer to this problem,
or, more specifically, to the problem of subdividing a single
system, a single OS, into multiple areas that are completely sandboxed from each other,
so that you can run multiple applications side by side inside the same OS, running on top of the same kernel,
without application A messing with application B in any way.
And if you think about iPhone or Android apps, that's kind of how they work.
Your apps never interact with each other.
They don't touch each other's files.
You can remove, you know, you can add any combination of apps. They don't interact with each other. They just don't mess with each other. And that's what Linux containers enable. And I
would add that every modern operating system, at least every modern Unix operating system,
has a facility like Linux containers.
BSD systems have a similar mechanism called Jails.
In fact, BSD fans will tell you they've had Jails for way longer
than Linux had Linux containers.
Solaris has Zones.
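For context, here is a rough sketch of what working with the raw LXC userspace tools looked like around that time; the container name "sandbox" and the template are made up for illustration, and exact commands vary by LXC version and distro:

    # Create a container from a distro template, then start it.
    $ sudo lxc-create -n sandbox -t ubuntu
    $ sudo lxc-start -n sandbox
    # Processes, files and networking inside "sandbox" are isolated from the host
    # and from other containers, even though everything shares one kernel.
    $ sudo lxc-stop -n sandbox
    $ sudo lxc-destroy -n sandbox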
Is this like taking all those different metaphors for different platforms and creating like one homogeneous API to all that? Is that what the point of Docker is?
That's what the point of Docker is. So I guess where I was getting at was, there is now the possibility of creating such an API, basically a unified format for packaging your entire app with all its dependencies, regardless of what language you used, what libraries or framework you use, as long as it runs on Linux, basically.
The only dependency is the Linux kernel.
As long as you can run on the Linux kernel, basically,
there is the possibility now of packaging your app in this unified standard format
and then be able to run it with strong guarantees
that wherever you run it, it will run in the same way.
And that's very powerful because now, again, you can hand it to someone else and say, here,
run it.
And then this other person will run it and something predictable will happen.
Right?
And that's the key to automation.
That's the key, again, to reliable testing, to things like, hey, here's an upgrade.
Now I can send you this upgrade and it works. And so again, Linux containers makes that possible
because now the Linux kernel effectively can be split up in little
sandboxed areas, but it's the raw material.
And what was missing was a tool to kind of glue
all these raw capabilities together and deliver it to developers in a package
that is usable,
that makes sense.
Right.
So, dotCloud originally was taking advantage of Linux containers to kind of do this for
everyone.
Yeah.
And you decided, with all the requests that are coming out, to go ahead and build, you
know, did you extract Docker from dotCloud, or did you kind of build it new? So was it actually
extracted from dotCloud?
So, a combination of both.
Sorry, go ahead. I'll answer after.
Well, I'll have more for you, but go ahead and kind of explain that.
Like how did Docker separate itself from dotCloud?
So, yeah, so first of all, I guess to answer your preliminary question,
yes, that's definitely what we've been doing at dotCloud essentially forever.
I mean, all the way back to 2008 when dotCloud was started, way before we even launched our first mass, you know, mass consumption product.
Our thing was always taking advantage of Linux containers for fun and profit.
That's basically what dotCloud is. And back in 2008, it was a really, really weird thing to do. In fact, at the time, the LXC project, which is today the flagship project within the Linux kernel for all things containers,
that project didn't exist or maybe barely existed.
I forget.
It was, in any case, highly experimental and definitely not usable. What
the Linux community had at the time was
various patches. So there was one patch called vServer, which
was supposed to emulate BSD jails. Another one called OpenVZ,
which was maintained by a company called Parallels, which you may have heard of.
And so back in the day, it was highly experimental territory to use containers anyway.
But we did, and we had a lot of fun.
And eventually, we became good at it and figured out ways to plug the components together.
And then we launched dotCloud on top of it.
So yes, definitely, dotCloud was our way of taking advantage of these capabilities,
turning them into our kind of secret ingredient,
and using that for our advantage,
which was basically come out with a Heroku competitor that could do 10 times what Heroku could do.
Because, hey, it was completely agnostic to any language, of course,
because it was containers under the hood.
And then we started doing things like launching database services.
dotCloud today has 15 different cloud services. We have a Redis service, we have a
MongoDB service, we have a MySQL service, and all of those services,
as diverse as they are, are actually under the hood,
powered and operated by a single
layer built on Linux containers. So it's actually
the same code that automates the
deployment of your MySQL database or your Rails or Python app when you're using dotCloud.
And behind that, there's an ops team that is ridiculously small. You know, it's like
five guys basically power these 15 cloud services, and this is all thanks to what we're able to extract out of this awesome technology that is Linux containers.
I'm simplifying it just a bit because there are other components that gravitate around it, but that's really kind of the starting point.
That you have this unit of deployment, this thing that once you've bundled it, you know what's in it,
and you can run it in a repeatable way. That's the key to everything.
Right. So fast forward four years and now based on a lot of consumer feedback or people not liking
the pricing or different reasons you said, you guys decided to pull Docker out of it.
And so why was that decision finally made?
Our reasoning was basically, well, there was a combination of factors.
The first was clearly the market has evolved.
People are, you know, the market in general is getting more sophisticated.
It's not like we're the only people in the world who know about Linux containers.
Understand Linux containers, right?
Yeah, there are lots of very smart systems engineers who are taking advantage of them.
And we started seeing, popping up on the radar starting in 2012, a new generation of cloud services that started catching on and doing things that clearly
made it very obvious that under the hood they were
playing with containers as well. We thought, okay, this was our differentiating advantage,
but no differentiating advantage lasts forever.
Realistically, we're a startup. We're not a giant company with
deep pockets. We can't possibly compete with everyone on every front, right? And so let's specialize, right? Should we be a MongoDB provider and go after the MongoDB players?
Should we specialize in Rails and Python and JavaScript?
And, you know, we realized in the end,
our true core, our true specialty
is the underlying containers layer.
It's the underlying layer itself.
It's doing incredible things with containers.
And how do we take
advantage of that experience? The fact that
we've been using and taking advantage of containers for
many years. We have more production and
real world experience with them than most companies
in the world.
As a business, how do we take advantage of that?
And the answer is open source it to get the credit for the work we've done.
So that's the business answer.
And then I would say the engineer's answer is that, hey, that stuff's going to be open anyway. It's just too awesome not to become
an open standard, something that everybody uses and benefits from. That's just the awesome world
we live in. Through things like open source, in the end, people will get a really easy to use,
incredibly powerful open source implementation of that stuff, eventually it will become a standard.
There will be foundations around it.
It will be awesome for everyone.
So if you believe that's going to happen no matter what,
do you want to be part of, and if you can contribute to that,
like you've got something to bring to the table because you happen to know that stuff,
then do you want to be part of that awesome movement that is about to
start?
Or do you want to stay on the sidelines and say, no, I've got this closed implementation.
Mine's better, but you'll never know exactly how much better.
I won't show you the code.
Maybe you can do that if you're a big company.
And even if you're a big company, I think it's stupid.
But for a startup, it's just stupid.
Well, I mean, this show is on open source.
I mean, it makes sense, right?
It's open source, all the things, man.
Right.
And then the burden is on us to prove that we can bring value as a business.
And I think that's a great approach to business.
It's win-win all the way.
And I think we – as a business, I'm not worried at all.
First of all, we know how to run them in production, which is very hard.
I mean, you know, I'm not going to preach to you that open source is good for business.
But anyway, that –
I have a question, though.
Can I ask a question on this?
Because this is where I'm trying to find the line.
And for those listening, you definitely know I'm not a DevOps guy.
So I come to this table and talk to Solomon and Andrew about this asking for grace, because I don't know all the details here.
But when we look at Linux containers, so LXC containers out there, and then we look at Docker, what is Docker to LXC containers?
That's where I'm trying to paint the picture from.
Yeah, good question and a question I get a lot.
So you're not alone.
So I think there was an early answer and then there's a – now there's kind of a larger answer.
The early answer is LXC is the raw stuff and Docker is what makes it palatable, what makes it usable.
There's a cool blog post describing Docker.
And the guy who wrote the blog post compares it to Git.
Using LXC is kind of like using those underlying obscure commands that actually power Git,
but no one understands how they work unless you've read like 10 pages of documentation. And you've got people telling you, like, oh, you can build a file system, you could rebuild Dropbox on top of that stuff, you know?
And then there are people who just want to check in source code and see diffs and merge, you know?
And for that stuff to be possible, you need a new set of commands that actually offer that level of interface.
So it's an API that makes sense.
It's a higher-level API.
It's a higher-level UI.
So that's one way to look at it.
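To make the Git plumbing-versus-porcelain analogy concrete, here is an illustrative comparison (not from the show; the flags, config file and image names are assumptions, and details differ across early Docker and LXC versions):

    # Low level, the "plumbing": the LXC tools, where you manage config files,
    # root filesystems and networking yourself.
    $ sudo lxc-execute -n myapp -f myapp.conf -- /bin/echo "hello from a container"

    # Higher level, the "porcelain": Docker pulls the image and sets up the
    # sandbox with sensible defaults in one command.
    $ docker run ubuntu /bin/echo "hello from a container"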
I've heard people describe it as, and this probably is good for you too, Adam, like Docker has the potential to do for platform as a service what Chef
did for infrastructure as a service.
So it's just like a high-level API to
make it easier, less barrier of entry,
to get in there and be able to do
what LXC allows you to do.
Gotcha.
That's definitely true.
It makes it accessible.
I think we're...
whatever platform as a service exactly means, let's say it's whatever offers an API, a hosted API, for developers to do something like that, whether it's storing data in a database or deploying your code.
I think we're at the end of an era where there was a very high barrier to entry to actually building a PaaS, you know,
whether you were going to build a database service, or build some sort of API-powered service
like Twilio or, you know, Stripe or Mailgun, or whether you were going to deploy web apps like, you know, dotCloud, Heroku, et cetera.
That was a very high bar.
You had, you know, that's like very, very, very specific expertise.
You had to be kind of that perfect combo of low-level systems engineering, ops, you know,
and at the same time, you know, understanding the needs of a specific group of developers.
And that's kind of a hard combo.
And I think now we're entering a phase where all that stuff is being democratized.
Thanks to things like Docker, it's actually easier to build a PaaS that answers a very specialized need in the developer community
without having to reinvent the wheel and become, you know,
a world-renowned expert in, you know,
load balancing between multiple EC2 regions and, you know,
24-7 monitoring and, you know, change management and log collection and metrics collection and all that stuff.
Right, so I want to interject.
You said that kind of your claim to fame early on at dotCloud was, because you were leveraging the Linux containers,
you were able to basically be framework independent, language independent, and none of that mattered. I remember early on, maybe not early on, but I remember
the Herokus and Engine Yards for Ruby, and then Nodejitsu came out for Node.js.
All these specialized services came out for these
one framework or one language, one environment.
So you came out. So now, fast forward a few years,
most of those guys are supporting multiple environments. So does that mean that, would you consider yourself
kind of a trendsetter in a way where now a lot of those guys are probably doing the same thing,
right? Using, leveraging Linux containers to do this? Yep. That's definitely true. I think the,
you know, PaaS is a really young market.
No one really knows how it works or how to make huge money with it because no one has.
Like no one has won in PaaS.
There's no giant success story in PaaS yet.
Everyone is figuring it out, including us.
So that's a warning before I say anything.
It gets me in trouble.
I think there was a trend where you started from the needs of a group of developers
that you knew really well. You focused. You attacked the vertical.
That's kind of startup 101. You're Heroku. You're part of the Ruby
community. These guys were a Rails development shop.
They knew the needs of the Rails community better than anyone. And they built a product
for the Rails community, and it worked. And Nodejitsu did the same thing for the Node.js community, etc.
And as that evolved,
people started realizing, hey, actually all these people,
all these developers who are part of different communities actually
all have jobs or eventually will get jobs in companies that are not actually organized as highly verticalized tribes.
There are very few companies that define themselves as Node.js companies or Ruby companies, maybe a few fringe startups that are not actually the reality of the rest of the world.
Most companies have a real complicated, horrible mix of lots of different stuff running on lots of different technologies
with really, really overworked people trying to kind of plug it all together.
And every time these guys hear about a new language, they're like, you know...
So on the one hand you've got developers... you said you've got like five people running your DevOps team, right? I mean...
Yeah, but what I'm talking about is the companies that we run applications for.
Oh, these guys have developers writing apps in all sorts of crazy languages and using crazy databases.
And they've got the old – they've got the ColdFusion stuff running on a server under a desk somewhere.
Then they've got the Java Enterprise apps.
Then they've got the Ruby and Rails apps.
Now they've got the mobile apps with the Node.js backend and God knows what else.
And they're trying to make sense of all that. And so these guys are very interested in a platform that is agnostic,
that can run whatever they happen to need to run and give them a freaking unified view of what's going on.
They want the logs for the Node.js app and the Ruby app
and the MongoDB database all in one freaking place so they can know what's going on.
And they'll pay good money for that.
And that's basically, that was our premise as a business.
So it's bringing all these different components of somebody's app
or infrastructure under one roof.
Yeah, so that's the holy grail, right?
You want to be the place that runs all the stuff.
You want to be the provider of, the unified provider.
And so I think that's kind of phase two of PaaS where people realized you can get a lot of developers to start playing
with you by being very specialized and simple for a specific use case, a specific language or
framework or whatever. But then eventually if you want to keep that guy, or at least, you know, keep his business as his app grows, as his business grows, etc. Or as, you know, as he
brings his colleagues in and tries to convince his boss to use you, you're going to need to be
more flexible, more customizable, you're going to need to support more things. So you're going to
need basically Linux containers. And so I think now people are
realizing that. I mean, by people, I mean the PaaS providers. And now you're seeing more
multi-language. Yeah. So you've, and maybe this is an obvious question, but, or maybe not, I don't
know, maybe I'm not seeing it, but by open sourcing Docker, I mean that must have taken a lot of thought because you're in a way enabling your competitors to do things that you're doing privately.
So how did that decision come about?
So you know the saying?
It's probably – I don't even know.
I think no one knows who actually said it.
But the whole thing about when there's a gold rush you want to be selling the shovels.
The whole point of PaaS was that while the
web startup gold miners are mining,
you, the PaaS provider, want to be selling them the shovels.
Now all of a sudden the shovel
shops are realizing, hey, we need to expand our technology really,
really fast so that we can support cross-language and we can kind of expand our offering in
these organizations.
We can sell more stuff to these companies.
We need to sell more shovels fast.
And actually, by the way, it's getting real easy to make shovels because there's a lot of money in shovels.
So what do we do?
How do we differentiate?
Oh, that guy over there is making, I don't know, shovel-making machines.
And so that's us.
We're like, we're the best at making shovels.
Here, we'll help you make your own shovels, because we're transitioning to the business of selling the machines that make the shovels.
So you're becoming platform as a service as a service.
I hope that doesn't become a word.
But, you know, in a way, that was the key.
That's the key bet.
It is the key bet is that this is a transition.
And, you know, this is a real economy now.
It's a real market.
Selling stuff to developers is becoming a real – it's a new market.
And when there's a market, there is now a space for specialized vendors that address that market.
So I think increasingly you're going to see that people that used to be our competitors are now more natural partners or natural customers.
Right. So the popularity of Docker just took off.
I mean, it was kind of mind-boggling to see it just explode.
And I don't know if your lightning talk at PyCon was kind of the impetus to that, but it was just crazy to see this just blow up. But I guess
what you really benefited from was... you know, I saw Docker from dotCloud was blowing up. So did you
see, like, an actual boost in dotCloud? You know, that was my next question,
like, how did this impact business? Was dotCloud's business booming from this too?
Yeah, so it definitely benefits directly.
I mean it's –
Was it like a hockey stick or was it like a 90-degree turn?
I guess 90-degree is –
Like super up versus –
Yeah, 90-degree is a pretty nice hockey stick already.
It's not – so it's kind of a two-step process.
Like step one is obviously it's exposure for dotCloud
as a company, and as a result we're selling more of our stuff, and that's great. And in fact,
I don't know if you guys caught this, but as part of this crazy buzz, which we didn't really see
coming, and actually it started before we were ready... if you remember, the whole thing
was leaked, and we had to rush to actually ship the source code ahead of schedule because it wasn't ready.
And fire the developer that leaked it.
Oh, boy.
I don't know.
I mean – and here's a funny story.
I mean on the one hand I'm saying, oh, it was leaked.
On the other hand, here I was giving a talk at PyCon. But the thing you have to understand is we were kind of cautiously, one step at a time, showing Docker to a select group of people that we knew would be interested.
And so by the time we gave that talk at PyCon, about 40 companies had seen Docker, played with Docker, were actively checking the repository, looking at the progress.
So we kind of had this kind of miniature closed but open source at the same time, if that makes sense.
And we were at – maybe I won't name names, but a lot of companies played with it and are still playing with it. And along the way we thought, hey, there's PyCon,
and we know a lot of the guys there.
Surely there are people there interested in Linux containers.
Let's just get together with a few of them.
Let's just give this obscure talk that no one will be interested in
except the Uber container geeks.
Plus it's a lightning talk.
It'll be in a back room.
There'll be like 15 people, 12 of which will actually not care.
And we'll meet two really interested people,
and then we'll add them to the private beta.
That was kind of the idea.
And in fact, you know, with the –
You got a standing ovation from everybody.
Well, no, the thing is lightning talks at PyCon happened to be a really big deal.
And it's like it's in the main room with 800 people in it or something.
And I had nothing.
I didn't have slides.
I mean, you've seen the video, right?
It was like, hello, world.
It was the least prepared talk.
I know.
I loved watching you in the video when you were, like, typing and you would have to delete because you forgot part of the command.
Yeah, I was so stressed out. I was like, oh, 800 people are watching me type hello world. Great.
But they liked it, so that was great.
So the result is, of course, someone in there said, hey, I'm going to put this on Hacker News, and then there was buzz.
But there was a point in all this I kind of forgot where I was going.
Oh, that's OK.
We're talking about the impact back to your business.
I was kind of surprised they only gave you five minutes though.
Yeah, I wasn't done.
They're like – everyone is at the edge of their seat as you're wrapping up your talk, and they're kicking you off the stage, because they literally gave you, like, four and a half... maybe four minutes and 15 seconds, and that's being generous.
And then they're booting you off the stage, not because you weren't talking about something cool,
but they were just so adamant about their timeline. They're running a tight ship. I mean,
those guys are well organized.
Yeah. Anyway, you know, there was a lot of cool conversation afterwards. You know, PyCon is really nice. It's a really cool, chill conference.
It was a nice place.
I'm glad it happened there and not in a more formal trade show or something.
That would have been boring.
Yeah, so one thing that's really interesting about Docker, and I guess dotCloud in general, is that you're using Go.
For some reason that's just very interesting to me. So where did the thought come from?
Why did you guys pick Go instead of something else?
So, and it actually gets even more interesting
when you know that 90% of the code we've written at dotCloud since the very beginning has
been Python. We're historically a Python shop.
But at the same time, we've written code in various languages.
I mean, we do advocate polyglot deployment or whatever,
the possibility of using multiple languages.
So we have to at least use more than one to be credible.
So we have a few pieces in Node.js,
and we started dabbling in Go,
but nothing crazy. But you know, we liked it because we're systems guys. So we've written a lot of C and, you know, it's kind of like C, but nicer. And so what really decided it is,
you know, the very, very first versions of Docker were written in Python because they were basically a rewrite, a gradual, standard, pragmatic refactoring of the core dotCloud platform,
which has been at this point in production for over two years.
So that's what happens to production systems used by real customers over many years.
Things tend to pile up.
And at some point, you need to kind of just clean things up and refactor and take advantage of the lessons learned, yada, yada, yada.
And so we started this project. And at some point, we kind of had this discussion internally about, hey, this refactoring is actually going to be limited in scope because we've got to drop it in and it has to be completely reverse compatible.
We can't just break people's applications.
I mean there's a whole process to running people's apps in production.
And so we were faced with this decision.
Do we continue with kind of a conservative, gradual rewrite?
I mean refactor.
Or do we do something more radical and
and kind of widen the loop, if that makes sense, kind of go off and make it a separate component,
and say, hey, you know what, it's okay if it doesn't benefit the platform right away,
but then we'll have kind of free rein to really take advantage of all the lessons
learned, do something clean, something nice, something less frustrating.
Because one of the problems when you cover so many languages and technologies is you say yes to too many feature requests and then you have to support those features forever.
So we wanted to kind of – the result is you get a lot of baggage.
So anyway,
we were really tempted by the second option,
clean and rewrite, but then how do
you avoid
the death trap of
the rewrite that never ends?
And two years later, you've never shipped,
you haven't shipped anything, all the customers
are gone because nothing's moving,
just that kind of stuff.
So the answer was let's make it an open source component.
Let's make it really, really small and concise so that it can be used on its own.
Like the first iteration can be used on its own by other people.
And then let's later circle back and plug it back into dotCloud.
So have you gotten to that point?
So in some places.
Not on the core production platform.
There are very, very direct relatives, obviously.
It is still a rewrite of the dotCloud platform,
so in a way it's kind of v2
of the core of dotCloud,
and new stuff is now 100% built on
Docker. Existing stuff
is being transitioned
following a pace that makes sense. A lot of our customers,
they're glad that we're
getting buzz and they're happy for us, but, you know,
they just want their app to run right now. So, anyway, back to the question about Go.
So we made that decision of making it a rewrite and open sourcing it, et cetera.
And one thing we wanted to avoid was cutting corners. You know, if it was going to be a clean rewrite, it had to be a real clean
rewrite with no cheating. And it's really tempting to cheat. You know, you're like, oh,
we solved this problem a year ago. It was really a pain in the ass to solve the first time.
Do we really need to write it a second time? I'll just copy-paste that little piece over there,
you know? And so, you know, I wanted to avoid that. And there were two other reasons for
using Go. The second reason was that it has this really nice property of compiling to a
static binary, which you just drop somewhere and it runs, and that is just awesome. And, you know,
it's awesome because it's just really practical from an ops point of view. When you've got a lot of servers to run, you have other shit to do.
You don't want to deal with dragging all sorts of dependencies
and following a 50-page tutorial and setting it up.
You just want to drop the binary, run it, good, moving on.
And it has the added benefit of being really easy to use
regardless of what your language of choice is.
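As a hedged sketch of the "drop the binary and run it" property he's describing, assuming a small Go program in the current directory and a hypothetical server called "server":

    # Build a statically linked Linux binary (CGO disabled, so no libc dependency).
    $ CGO_ENABLED=0 GOOS=linux go build -o mytool .

    # Copy the single file to a server and run it: no interpreter, package manager
    # or virtualenv needed on the target machine.
    $ scp mytool ops@server:/usr/local/bin/mytool
    $ ssh ops@server /usr/local/bin/mytool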
And, you know, the DevOps community, the community of people who, you know,
automate the deployment of servers and deal with that kind of stuff,
the kind of people who are naturally attracted to Docker,
who are the target of Docker, are a fragmented bunch of people.
There are people who do everything in Ruby.
Obviously, there is Chef, Puppet.
There's a lot of tools around that.
There's a big Python community.
And then there's a big Java community.
And those are kind of the three main groups.
And then there's Clojure.
There's cool stuff in Clojure.
There's all sorts of cool stuff.
None of these guys, as a general rule,
will use a tool if it's written in, you know, if
it's written in the opposite language. And not really because they don't like the language, but
because it comes with strings attached. Like if you're a Ruby shop and you're evaluating
a Python tool, it can be a real pain in the ass to actually run that Python tool, because now you've
got to deal with Python packages and dependencies and virtualenv and all these things that you're not familiar with.
And if they break, how do you go ahead and hack with them?
And so Go is a nice kind of middle ground.
It's a binary.
You drop it.
Everyone can use it.
And if you want to hack on it, it looks a lot like C.
So it's kind of neutral.
And the third reason is really just it's trendy.
And you want your project to be adopted.
So if you can give people one more excuse to play with it, then hey, you know, why not?
And it's just a really cool language.
So the biggest communities on dotCloud, would you say they're Ruby, Python, and Java?
Yeah, I mean, that would be my guess.
I mean, I didn't really run a – although maybe I should, but that would be my guess.
Just from experience, yeah.
It's really cool then to see – I mean if you look at the GitHub repository, you've got almost 2,000 stars and almost 200 forks.
And it's cool to see that a lot of people maybe don't know more than just the language that they work in, but obviously
Docker is a cool enough project where, I mean, people are probably even willing to learn
Go, and it's a good opportunity to learn Go if they don't know it. So it's cool to see
this growing up. So you have plans, then, to bring Docker back into dotCloud, and do you
have any kind of time frame in where you would see that happening?
Well, I mean, what is already happening is that 100% of everything we're doing as a company is built on Docker going forward. It's 100%.
I mean, it started as an experiment. We hoped that people would like it. Then we realized, oh shit, people like it.
And then we realize, wow, this is not stopping. Not only are people using it, but people are actually actively contributing to it. There's a real community that basically, you mentioned the
forks. The forks to me are even more interesting than the stars because it means people are
actually playing with it and contributing back. I think we've got over 20 people now who are authors in the broad sense of the term.
Maybe they contributed to fix the readme, but there are really impressive people in
the Docker community.
I look at the IRC channel and I'm like, wow, where did these guys come from?
They're awesome.
I don't know them.
And so as soon as we saw that, we were like, this is bigger than even we planned.
This is what dotCloud is going to be about now.
We've made it very clear that this is not just a side project.
Going forward, dotCloud is going to be built on Docker.
Part of that is bringing it back into the existing product,
but also it's going to be building extensions to that product and maybe new products, you know, natively on top of Docker from day one.
Yeah, so is it a nice feeling like, you know, when you were a closed shop, you know, for lack of a better term, and people would come to you with feature requests and you'd have to either, you know, go through the process of, you know, debating internally if this is acceptable and then who's
going to work on it and all that. But now somebody comes to you about Docker with a feature request
and you can tell them, hey, fork it and start to work on it. Yeah, that is a nice feeling.
Although then they do it and then you have to review the code and, but it's a good problem to
have. Yeah, well, and you bring up a good point, and that's something that, you know,
open source is great, right,
because more often than not,
the community will kind of gather around it
and contribute to something that they find useful.
For a company like you guys,
how have you handled the problem of,
you know, roles in the company?
You get used to doing things a certain way,
but now you have to kind of manage this,
you know... and one thing we like to talk about on the Changelog a lot is open source sustainability. So now you have to kind of
manage this open source project, right? And what that means is, you know, standards, code reviews,
you know, handling issues, you know, all that stuff.
Just in general being a leader.
Yeah.
Yeah. Leading the project. So internally, does the whole team kind of take responsibility for
that? Or how is that? What does that look like for you guys? Yeah, it's been, I mean, when I say we've reorganized around Docker,
I mean, we have truly reorganized in a very significant way.
It's been a big change for everyone in the team, for the company,
even in terms of strategy.
And also, I mean, for me personally, I'm the founder and CEO.
I've been kind of gradually transitioning over the last couple of years from the guy who wrote the code to the guy who wrote the code with other dudes to the guy who wrote less and less code to the guy who raised money and hired people and ran meetings and all that stuff, which is very interesting.
But I ended up in this kind of product-focused CEO role. I would kind
of make calls when needed, but mostly the smart people have a tendency to do smart things on their
own. And all of a sudden, half by chance, I ended up actually being the guy pushing Docker forward
as a side project, mostly because the rest of the dotCloud team didn't have time. They had more serious things to do with actually running
the real product.
And then suddenly Docker was no longer a side project. So for me personally, it's been
a huge transition because I'm the maintainer.
I review all the pull requests. Now, thank God, I have
Guillaume join as a maintainer, so he reviews and merges things.
And the pool is growing, and I'll get to what that means as a process.
But so starting with my experience personally, it's been a big change, and it's really fun.
It's good to be back to coding every day.
You know, maybe for another time I'll talk about what it means as a CEO. Mostly it means just twice
as much work.
And you didn't love the whole process of raising funds
and dealing with board members and all that?
It's fun. And I still do it. You know, I mean, one way or the other, that's a transition that has to be finished.
Right now, it's fun to do both.
At some point in the future, I'll have to end up doing only one.
So does that mean handing over again the whole technical side once the training process of other maintainers is completed and I go back to being a CEO, or maybe I hire another CEO, who knows.
But right now, it's both, and it's fun.
For the rest of the team, what we've done is basically we've said,
okay, we've split the team in two as a start.
And I hope this is on topic, but I think it is.
It's about the sustainable open source, right?
Go for it, yeah.
We've split the team in two. Half of the team
keeping the lights on, like, okay, we've got
this existing product, we've got production apps, we've got customers.
No matter what, in this crazy period of one or
two months where people are going crazy and we don't know what to do with all these
pull requests basically falling from the sky,
Let's split the team in two and half of the team keeps doing things as usual and the other
team works with Solomon and we kind of build this open source process.
And so what we've done, and this is on the open source side, the big decision we've made,
which is very important, and I think it's the best
decision we've made in this whole thing, is that we've opened
the process to contributing to Docker completely, and I mean 100%.
There is no difference in
how you contribute to Docker based
on where you work.
In other words,
the process that a dotCloud employee goes through to check code into Docker
or to influence or discuss the priorities of Docker,
the design decision,
all that stuff is 100% the same as if you're not a dotCloud employee,
which means that if you're willing and able and you've got the time and you're interested,
the prospects for implication and credit and influence over the project are exactly the same.
You can be a core committer if you want to and you can, if you pass the standards and if you involve yourself enough. And, you know, we don't yet have a core committer outside the company. I mean, it's only two of us who can actually merge
pull requests, and, you know, Guillaume works at dotCloud. But soon enough there's going
to be a core committer that doesn't work at dotCloud, I'm sure of it. And I mean, I can see very
smart people putting in a lot of energy, and that's going to be an awesome moment. And I think it's really important.
What was the process to come up with that idea though,
to have that be the same process for me,
if I afforded it and wanted to contribute,
was that your idea?
Was it the team's idea?
How did you come up with that idea?
That was me, basically.
Me right here. That was me. That was me.
Yeah. I mean, I don't know. You asked.
No, I liked that. I liked the way you responded. That was me.
You know, basically, here's the thing.
It was kind of unusual territory because here I was kind of the maintainer of an open source project and not the guy supposed to be writing code as a day job anymore.
And we have this whole engineering team and Sam is our director of engineering.
He's got this whole process in place. We're a highly organized
company. You have to be when you're running apps in production.
There was this problem of steering
resources away from the core platform. I mentioned before we split
the team in two.
In fact, you know, we split it in two, but, you know, we didn't split it in two equal parts.
You know, most of the resources have to, you know, have to stay allocated to the main product.
And at the same time, if we're really betting the farm on Docker, it needs to move fast.
And we've been really, really bent on making Docker move as fast as possible. We've shipped a lot of stuff. The only way to keep shipping fast is to get a lot of people working on it.
And the only way to get a lot of people working on it, if you can't afford to hire hundreds of
people, is to set up a process that actually makes it possible, potentially, for hundreds of people to contribute.
And we're a long way from hundreds of people checking in code, but that's the trajectory.
We're being aggressive about it.
We also said that you're going to build dotCloud on top of Docker.
And right now, you have a disclaimer saying Docker is still under heavy development.
So it seems like it's stable, but maybe not as much as it possibly could be to actually build dotCloud
on top of it. Is that right?
Yeah, so, I mean, obviously you would want to put a lot of energy
into it. If you're gonna, you know, build dotCloud on top of Docker, you kind of want to get to a
point where it's even more stable. And we are. And we're gradually – I mean every day there's a little more of our resources going – as a company going towards Docker than the core product, because it feeds back, right?
So it's an investment.
But the way this can – the only way this can possibly work is by really building a real community of people who are outside of the company and actually own the project with us, if that makes sense.
And about production readiness, you're definitely right.
It's not, you know, you can't run an application in production on Docker.
Actually, it turns out you can because I found out that at least one company does.
But, you know, hey, if it works for them,
it will be production-ready soon.
And the other thing also is that
Docker is a great development and testing tool.
So a lot of people actually use Docker
to develop and test in an automated way.
And then there are ways to, you know,
you can still take the result of your work,
you know, take your Docker containers and export them into any environment that you actually use in production.
So there's a bridge there.
Docker doesn't actually need to be entirely production ready all across the entire lifecycle of your app to be useful.
You can use it on a segment of that lifecycle.
Does that make sense?
So you can start using it as a dev and build tool,
and if you get more comfortable and you feel like it's ready,
you can get a good feel for it,
you can start using it in the next stage, which is usually QA,
continuous delivery, things like that.
And then if you're even more comfortable, then eventually you can say, hey, you know what, I'm going to run this
in production or production for the small app and not for the big app yet.
It's a gradual process.
It starts with
day one of development, and then it moves along, it matures along with the application.
So the only supported distros are – it looks like the latest Ubuntus.
Is that right?
Officially supported?
Yeah.
So I guess there are two answers.
Yeah, the only officially supported distro today where you can drop that Docker binary and run it is Ubuntu.
But there are officially supported install instructions for going from a Mac laptop to a running Docker setup,
a Windows machine to running Docker setup,
and any other Linux distro to running Docker setup.
And usually that means going through,
deploying a VM, right?
So you add a VM to your machine,
and then on top of that VM, you run Docker.
So last week on the show,
we had Mitchell from Vagrant, and it
looks like I got Docker up and running pretty easily using Vagrant on my machine.
Yeah, that's the way we recommend it. If you've got a Mac or a Windows machine, just use
Vagrant. And what Vagrant does that is really, really awesome is, if you've got an OS that's not supported,
it's a nice and automated way to stand up a VirtualBox VM and boom, install something on it.
In our case, Docker.
So that's really nice.
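For Mac and Windows listeners, the recommended path at the time looked roughly like this (a sketch; it assumes the Vagrantfile shipped in the Docker repository, and the repository location has since changed):

    # Grab the Docker source tree, which includes a Vagrantfile.
    $ git clone https://github.com/dotcloud/docker.git
    $ cd docker

    # Vagrant stands up a VirtualBox VM running a supported Ubuntu with Docker installed.
    $ vagrant up
    $ vagrant ssh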
So it's funny because one of the things that Mitchell said, so obviously I'm a Mac guy, so I'm on a MacBook.
So early on in Vagrant's lifetime, they decided we need to support Windows.
That was one of them, not Mitchell, but I think his name is John, said we need to have Windows support baked in.
And so they did, right?
Right. And, you know, now fast forward again a number of years, and are you guys thankful that you're
able to use Vagrant on Windows to get Docker up and running?
Yeah, I mean, Vagrant's a really cool project. Right now there are definitely more people using
Docker from Macs than from Windows machines. But there definitely are Windows machines.
And from experience on the developer base,
on DocCloud in general,
obviously a lot of people use Windows.
I think there's kind of this San Francisco bubble.
We're a San Francisco-based company.
There's this kind of San Francisco,
Silicon Valley bubble world of, ah, no one uses Windows anymore.
But yeah, actually a lot of people do.
And if you want to be taken seriously, your tool has to support it.
So yeah, in general, I will say this.
We have a philosophy of not reinventing the wheel. That sounds kind of obvious, but we will take every opportunity
to reuse other people's work if it makes sense
and if it allows us to focus on the hard parts
that no one got to.
I mean, there are lots of examples of that.
One example is using Vagrant
because, hey, we could start by writing code
that automatically spins up VMs from Windows and installs Docker on it.
But that would be time we wouldn't be spending on more interesting parts of Docker.
So we're using Vagrant.
And, hey, everyone's happy.
It's easier to use Docker.
And a lot of people discover Vagrant actually through Docker.
I saw a lot of tweets saying, hey, I got two projects for the price of one.
I discovered Vagrant.
That's awesome.
Another example would be LXC itself.
There is an ambiguity actually in the word LXC.
It stands for Linux containers, but actually it can mean two things.
It can mean the component inside the Linux kernel that makes containers possible.
And it can mean the higher level tools, the binaries, the command that you run on your Linux box to make calls to those kernel facilities.
And both are called LXC.
So there's LXC, the kernel component that you never see as a user.
And then there's LXC, the command line tools. Now, we could have skipped the LXC command line tools and made calls to the kernel functions directly.
Because really, you know, what we're really going after is the kernel's capabilities.
That's where the heavy lifting is done, right?
In a way, the LXC command line tools are themselves convenience wrappers, higher level tools for using the kernel's features.
So we could bypass them, but bypassing them is work.
And the developers of the LXC tools have actually done good work.
They've tested it.
They've added these nice conveniences.
So that's another example. Just like we use Vagrant, we actually make calls to the LXC command line tools so we don't waste time reinventing the wheel.
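A rough way to see the two layers he's distinguishing from a shell on a Linux box (output and tool availability depend on your kernel and LXC version):

    # The kernel side: namespaces and cgroups are the primitives containers build on.
    $ ls -l /proc/$$/ns      # namespaces the current shell belongs to
    $ cat /proc/cgroups      # cgroup subsystems known to the running kernel

    # The userspace side: the LXC command line tools that wrap those primitives.
    $ lxc-checkconfig        # reports whether the running kernel has what LXC needs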
Right. Yeah, I mean, it's like where Docker's at right now, it's so young, it's so early in the process.
It's very exciting, I mean, to see where this is going to go.
So for you right now, where Docker is at, where would you like to see it kind of go over the next six months to a year?
So there's kind of two main things.
Over the last few weeks, we've realized that people are using Docker as a build tool.
So initially, the job of Docker was specifically given a container in the right format, run it in a guaranteed repeatable way.
And define the format, the executable
format, right?
Write the spec, standardize what it means to run a container, and share that standard
with the world, and show an implementation of it.
So in other words, the run part, running things.
And that's the core of Docker, and you can run things in a very
reasonable way, in a very portable way, et cetera, et cetera. And then on top of that, we saw people
using, building on top of that functionality to build their software. Because, you know,
running a container is one thing, but how did you get that container in the first place?
Who built it? Who put it together?
It turns out Docker itself can be used to put together your container step by step, layer by layer, in a really cool and convenient way.
It solves that problem of defining dependencies.
That became a pattern.
People started using Docker like that,
installing a base image
and then installing a Debian package
they were interested in,
then downloading a library
and dropping it in the right place,
installing, I don't know, Unicorn,
then installing the version of Ruby
they're interested in,
the gems they're interested in,
and all that layer by layer using our container versioning system.
And I don't want to go into crazy details,
but so that use case kind of evolved.
And as a result, you can use Docker for build and for run.
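The layer-by-layer build pattern he's describing looked roughly like this with the early CLI; the image names are made up, the container IDs are placeholders, and exact commands vary by Docker version:

    # Start from a base image, make one change...
    $ docker run ubuntu apt-get install -y ruby
    # ...then snapshot the resulting container as a new image layer.
    $ docker commit <container_id> myname/ruby-base

    # Repeat: each run-plus-commit adds another versioned layer on top of the last.
    $ docker run myname/ruby-base gem install unicorn
    $ docker commit <container_id> myname/unicorn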
And so these are the two directions we're pushing.
We want Docker to be a better build tool.
And so literally you can Dockerize your app.
I realize that's a term now.
Is it really?
Yeah.
So Dockerizing your app means –
Well, I heard you say containerize at least in your documentation somewhere too.
Yeah, containerize things.
But Dockerize is shorter. And, I don't know, it's funny, because I
did not come up with the name Docker. I initially thought it was really bad and
sounded terrible, and I actually had the secret plan of convincing everyone to change it
before we launched. But then it got leaked and I never got the chance. And it kind of grew on me. I kind of like it now.
So anyway, so Dockerize.
Dockerizing your app means adding to your Git repository, or to your source code, a file called a Dockerfile, with instructions on how to go from naked source code to a full-blown container ready to run.
And usually that file is like five lines.
It's really simple.
It's basically like shell commands to run.
It's like apt-get install that, pip install that, gem install that, whatever.
And it's dead simple.
But at the same time, it goes from source code
to freaking full-blown container ready to run.
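As a rough illustration (not from the show itself), a minimal Dockerfile and build along the lines Solomon describes might look like this sketch; the base image, packages, app file, and image tag are assumptions for the example, and it presumes a reasonably recent Docker with Dockerfile support:

```sh
# A minimal sketch of "Dockerizing" an app: a handful of shell-style
# instructions that take naked source code to a runnable container.
# The base image, packages, app.rb, and the tag are illustrative choices.
cat > Dockerfile <<'EOF'
FROM ubuntu
RUN apt-get update && apt-get install -y ruby rubygems
RUN gem install unicorn
ADD . /app
CMD ["ruby", "/app/app.rb"]
EOF

docker build -t myuser/myapp .   # builds the image layer by layer from the Dockerfile
```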
And you can hand it to someone, and they can run it on an EC2 machine.
They can run it on their VM.
They can run it anywhere they want.
And you don't have to give them any out-of-band information.
You just say, here it is, you can run it. And that is really awesome. So, you know, I want that to be easier, because I think it's just a really cool way to use Docker. And then, obviously, once you've produced that Docker container, I want it to be more useful in more places. So there are people today saying, hey, I use Red Hat
or I use this or that distro
or I have this version of the kernel
and today I can't use Docker containers,
I can't run them
because Docker doesn't support this distro
or doesn't support this version of the kernel.
So we want to widen the scope,
make running Docker containers possible in more places. And part of making it possible to run in more places involves things like allowing for more customizations. There were a lot of requests, especially from ops shops, ops engineers that already have a setup. They have a storage system. They have a networking system in place.
And they have a process manager.
And they like Docker, but they would like to bend it
to fit into their existing system.
And I say, obviously, we want that to be possible.
So there are a lot of integration projects.
And for integrations, you need nice, clean APIs.
So I guess that was a long answer,
but A, I want Docker to make it easier
to build your source code into a container
that can run anywhere,
and B, I want it to be easier to run that container
on any server.
Yeah, we've gotten through most of this call, and we haven't mentioned the Docker Registry yet. I'm just wondering if I missed it or if we didn't cover that. And it seems like it fits nicely into that picture you just painted.
Definitely. Yeah, it's kind of the link between the two.
When you build your source code into a container, the logical step after that is you want to make that container accessible.
You want to share it.
And if it's open source software, sharing it means, hey, you want every person on earth to be able to download it and run it if they want.
If it's private and it's your own code, it has credentials in it, it's not open source,
sharing might just mean, hey, I want it to get from the build server to the production server or on a scale out to 10 servers.
That's also sharing.
It's moving bytes around.
To share, you need some sort of infrastructure to move things around and discover the right container and download it.
That's what the registry is.
So the registry is... the registry is...
Where does that live now? Can you share that?
We've put it together at dotCloud. Well, I mean, the primary way you interact with it is by typing the command docker pull, right, or docker push.
So right now, if you install Docker fresh, the first command you'll probably run is something like docker run ubuntu bash, which means, hey, run a shell in an Ubuntu system.
Or docker run centos ls.
Show me the files in a new CentOS container. When you type that command, Docker figures out that you want to run
a container called Ubuntu.
It doesn't have it,
so it will automatically connect to the registry,
which is this publicly accessible place.
Think of it like GitHub for containers ready to run.
And it will download it,
and it has a very efficient way of downloading.
It's just like a git pull, actually.
It will only download the parts that it needs.
So if it's already downloaded a prior version,
it will only download the diff, which is really nice.
And then it will run it.
And so the registry is this place, this API,
that we've put up for free to make Docker more useful where you can download other people's containers or upload your own.
Right. So once you've built your code into a container, you just upload it to the registry, and other people can use it, can share it.
So I guess it's it's the link between build and run.
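Concretely, the registry round trip Solomon describes comes down to a few commands; a small sketch, where the user and image names are placeholders rather than real repositories:

```sh
# Sketch of the registry as the link between build and run:
# pull public images, run them, and push your own builds so other
# machines (or other people) can pull them in turn.
docker pull ubuntu                  # fetch the ubuntu image; only missing layers are downloaded
docker run ubuntu bash              # run a shell in a fresh Ubuntu container
docker push myuser/myapp            # upload a locally built image to the registry
```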
Yeah, that's awesome. I feel like we could talk about that again in a few months as this thing grows, to talk about where it's gone.
It's very exciting.
Yeah, we'll definitely have to check back in with you, because we want to hear about your six-month to one-year goals and see if they come to fruition.
And we'll obviously be there helping you along.
So in between now and then, anything we can do at the Changelog to help you spread the word about the awesomeness of Docker, please do not hesitate to reach out to us.
We'll do whatever we can to help.
Thanks. I appreciate that.
So for people who have listened to the Changelog regularly, they'll know this.
But for anyone who's new, we kind of have two questions that we like to ask at the end of every show, just to give you a chance to participate.
First one, Solomon, what would be a call to arms or somewhere where you would like to see the open source community get involved in Docker?
Well, I guess there's a general answer besides the obvious: try it, use it, report bugs, come hang out on the IRC channel, get involved in any way possible. We are extremely welcoming of any interaction. We will never make fun of someone for making a really small fix. Every fix counts, every question counts, there is no stupid question. We're very grateful for any interaction; it means you're interested. So check it out, ask questions. And then more specifically, I talked about that really cool word, Dockerize. I'm just really excited about that concept. I think it's really powerful. It solves a lot of problems that I've run into
as a developer
there are a lot of people
dockerizing their apps
dockerizing famous
open source software
dockerizing databases, frameworks
libraries
so my call to arms
would be try and Dockerize something
and tell us how it went.
Share it with us.
Right now, every time someone sends us
a cool example of software
they've packaged to run in Docker,
we're really excited.
We tell everyone about it.
Tell us and we'll share with the whole world.
And probably you'll hit a bug too. And then you should report that.
Cool. So our last question, who would you like to kind of give a shout out to as your
programming hero?
You know, I'm glad you allowed me to prepare for that one. I realize I'm not a very learned person when it comes to the giants on whose shoulders we stand. But there is one guy that I've always been super impressed with.
I guess it's kind of a classic, but I don't know if you know this guy Fabrice Bellard.
He's this French dude also known as the author of QEMU.
What's his GitHub handle?
We can link that up in the show notes. Basically, he has written at least half a dozen pieces of software that each individually would easily get him a place in the pantheon of coders.
But he wrote like six of them.
And he's just incredibly productive. I don't know, the most recent thing I saw by him: he's the guy who got a Linux kernel to actually boot in a browser, in JavaScript.
You guys remember that?
Vaguely.
I mean, anyway, it's like just one example.
He's just kind of, he never stops.
And it's kind of refreshing to see someone that productive. I mean, FFmpeg is the foundation of video processing.
It's the open source video processing software.
That's him.
QEMU is a really effective and very helpful piece of virtualization software.
So it was kind of a stepping stone to virtualization.
And I missed it though. What is his, what is his name?
Fabrice Bellard. Fabrice Bellard.
Okay.
Bellard. I feel like I'm butchering his name.
We'll definitely have to put that in the show notes.
Yeah, he's awesome. And you know, that's the thing. I've never seen tweets by him. Maybe he tweets, I don't know. He doesn't seem to be someone who's cultivating his personal brand or whatever. He's just writing awesome code and seems to enjoy it. We're all benefiting from it. He's the good side of open source incarnate.
He's behind the Tiny C Compiler.
I know that.
Nice.
Yeah.
Anyway, I never met him, but I just picture him as this really cool guy.
That's awesome.
Thanks for plugging him.
We always enjoy the surprises, I guess, sometimes even. Not so much surprises as when you choose somebody who may not have gotten all the limelight that some developers get when it comes to open source contributions and what they create. So it's always good, and it also helps our audience, those who are enthusiasts of software and the intersection of software development and this open source world that we're kind of crafting away at.
So it's really fun to share that.
But Solomon, thank you so much for joining us.
Andrew, thanks for asking so many great questions.
I definitely leaned upon you today when it comes to the DevOps side. I sat back and listened very closely, and I hope one day that I can be such a hacker.
You guys were great.
Thanks so much, this is really a cool conversation.
Yeah, man.
And this is our first time here on 5x5.
So for those of you who are longtime 5x5 listeners and first-time Changelog listeners, we're here to stay.
5x5.tv slash Changelog.
Live every Tuesday at 5 o'clock, you can tune in as you normally do. If you've got the app, watch out for the push notification, and if you didn't get it, you need to go into your settings and turn that on for the Changelog, as well as Founders Talk, because I host Founders Talk. That'll be live tomorrow, same time, on Wednesday. But thanks again for tuning in.
Let's say goodbye, guys.
All right.
Thanks so much, Solomon.
I really enjoyed the conversation.
Thanks to you guys.
Thanks.
Thanks. We'll see you next time.