The Changelog: Software Development, Open Source - Deploying Changelog.com (Interview)
Episode Date: June 23, 2017

This week we take you behind the scenes of the new infrastructure for Changelog.com and talk with Gerhard Lazu. We relaunched the new brand and site for Changelog on Phoenix/Elixir in October of 2016, and we needed a better way to reliably host and deploy the site. That's where Gerhard came in. We cover all the details and decisions in this show.
Transcript
Bandwidth for Changelog is provided by Fastly.
Learn more at fastly.com.
And we're hosted on Linode servers.
Head to linode.com slash changelog.
This episode of the Changelog is brought to you by our friends at Sentry.
They show you everything you need to know to find and fix errors in your applications.
Don't rely on your customers to report your errors.
That's not the way you do it.
Use Sentry.
You can start tracking your errors today for free.
They support React, Angular, Ember, Vue, Backbone,
Node frameworks like Express and Koa
and many, many other languages.
That's just JavaScript I mentioned.
View actual code and stack traces,
including support for source maps.
You can even prompt your users for feedback when front-end errors happen,
so you can compare their experience to the actual data.
Head to changelog.com slash sentry.
Start tracking your errors today for free.
No credit cards required.
Once again, changelog.com slash sentry.
Tell them Adam from the changelog sent you, and now on to the show.
You're listening to the Changelog, a podcast featuring the hackers, leaders, and innovators of open source. I'm Adam Stachowiak, Editor-in-Chief of Changelog. In this episode, we talk with
Gerhard Lazu about the infrastructure behind Changelog.com and how we deploy.
If you're just catching up, we relaunched our new brand and new site on Phoenix and Elixir October 2016,
and we needed a better way to reliably host and deploy the site.
That's where Gerhard came in, and we go over all the details and decisions in this show.
Alright, we're back today talking about the changelog.com infrastructure.
Jared, when we rolled out this new site, one of the things we wanted to do was have continuous integration, continuous deployment, and a whole new infrastructure on Linode and a bunch of servers.
And we recruited somebody that's super awesome.
And they're here to tell us about the backstory, basically, of the last several months of how we deploy.
Yeah, absolutely.
So Gerhard Lazu is with us.
Gerhard is very much the man behind the scenes, behind the curtains, so to speak, of our new infrastructure.
And the reason that is, it goes back a few years actually.
So Gerhard, we met in 2014.
Gerhard, I'm not sure exactly how we met you.
I'm sure it was on the internet, but you actually blogged for changelog.com back in the day
all about Ansible and Docker.
And that was our very first time meeting you.
Do you recall that situation?
Yeah, I remember that actually really, really well.
I remember when Docker first came out
and not many people had heard of Docker,
but everyone was getting excited
because it was solving a tough problem,
the problem of dependencies
and the problem of reproducible builds and runtimes.
And Ansible was something really, really interesting,
and it had a lot more in common with my prior approaches than Docker itself.
Specifically, and I think that's how you actually came across my name again, in the context of the new Changelog and Erlang applications.
I built this deployment tool called Deliver, very aptly named.
And Deliver was just a bash script, really,
a fairly complicated bash script,
which at the time was meant to replace Capistrano.
And I think our Ruby listeners will know that deployment tool, which maybe is
still the case today. I don't know. I haven't really watched that space too closely. But
Capistrano inspired me to build Deliver, which was later used for Erlang deployment, and some users
might know it as eDeliver.
And since the new Changelog application was Elixir
and was Erlang-based, eDeliver was mentioned
even I think on the Phoenix blog
as a deployment tool for Phoenix.
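For a rough idea of what that Deliver-style approach looks like, here is a minimal sketch: not Deliver's actual code, just the core idea of running the same commands over SSH on several hosts in parallel, with hypothetical host names and paths.

```bash
HOSTS="app1.example.com app2.example.com"   # hypothetical hosts

for host in $HOSTS; do
  ssh "deploy@$host" '
    set -e                 # stop on the first failing command
    cd /srv/myapp
    git pull --ff-only     # fetch the new release
    ./bin/restart          # restart the running service
  ' &                      # background each connection: hosts run in parallel
done
wait                       # block until every host finishes (or fails)
```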
Absolutely. Let me stop you right there
and I'll add some color to that situation.
So as many of you know,
we did a complete rewrite
of the changelog.com website and CMS
last year in Elixir and Phoenix.
We have shows about that,
a couple of shows last year, Adam,
on Elixir and Phoenix,
where we get a little bit of information
about the backstory around that.
But when it came time
to actually deploy that application, I was very much green,
a novice. My background is very much not DevOps. So back when we used to call ourselves sysadmins,
like I was a network administrator and a server maintainer, a system administrator back in the day.
And so I'm very comfortable with the command line.
I'm very comfortable with dealing with servers.
I've deployed many LAMP stacks.
I've deployed many Rails apps.
I've deployed mail servers and relays and stuff like that.
But when it came time to take our shiny new application and get it out there for everybody
to use, I didn't know what the best practices were. So I went searching.
I may have actually found eDeliver, Gerhard,
on the Phoenix website,
but there were a handful of tools at the time.
This is about June of 2015.
Could that be right?
No, 2016.
Last year, yeah.
This was June, July of 2016.
And I found eDeliver, as you said.
And it seemed to be best of breed at the time.
I'm not sure if it is anymore for deploying Erlang applications.
And when I found that, it was based on Deliver, as you said.
And on Deliver, I found your avatar.
And I recognized your avatar because you had blogged back in 2014 for us about Ansible and Docker.
And I thought, you know who would be a lot better at this than I am?
Probably the guy who wrote the tool that everybody's using.
I didn't realize at the time that eDeliver was based on Deliver, but you weren't actively
a part of that project, or maybe you were for a little while.
You had some commits.
And so I thought you were working on eDeliver, but it turns out that was just based on your project.
Is that right?
Yeah, that's correct.
eDeliver was actually forked off of Deliver.
But the ideas which it had,
the ideas about SSHing into multiple hosts in parallel
and running commands and all that,
that was actually one of the core fundamentals of Ansible, right? And that's why
I kept joking how Ansible was like something that Deliver could never be, because it had the community
behind it, and it was later sold to Red Hat for like 100 million. So I missed that boat. Had Deliver been a lot
more popular, then maybe I would have sold it for like 100 million.
But the point is that the principles which Ansible was based on
were these really simple approaches to managing hosts, right?
And that's the thing which attracted me.
And I realized how Ansible was like a natural continuation of Deliver.
But obviously with a much stronger community around it and a lot of
attention at the time. And it just made sense. It was like an easy switch because mentally,
it fits how I would approach deployments. What's interesting about that is Deliver and
eDeliver attracted me because like you said, they were a series of shell scripts. And so coming from my experience,
and I fought you tooth and nail probably a little bit as we went,
we laughed how old school my approach to everything is.
Because a bunch of shell scripts to me is my history of deploying apps.
Usually I just write a new one each time
and just rerun it to push out a new version and stuff like that.
And so that was very attractive.
But yet, once we got started, we didn't end up using either of those tools to deploy changelog.com.
I just wanted to mention, I emailed you first July 3rd of last year, 2016.
And once you said that you were interested, I sent back an email with a list of our needs
and our wants, and then things that I wasn't
sure about, and a timeline.
I want to bring up the timeline because it's
funny in retrospect. And I think
probably everybody who works in
this industry can laugh at timelines
and the naivety sometimes of developers.
So this email
is July 3rd.
And I said... What year?
2016, last year.
But in the email, I said
in the timeline, I said, we want to launch in July.
We have a lot of content
to import into the new site, so having it up and running
in the next week or two will help us keep
that time frame. And then I said, it'll
also be cool to publish our work somehow
to share with others and give you some props.
So this show is a part of that publishing. We also have some other ways we're going to go about
publishing as well. But I just think it's funny that, you know, how I was like, let's just launch
in a week or two. And when did we launch, Adam? October. October, yeah. Yeah. So
and even then it was weeks and weeks of focused content migration, content updating from old stuff that we had: old posts, old podcasts that just kind of needed to be massaged.
That was in a WordPress database that got pulled over to this and, you know, transpiled to Markdown, that whole process.
It was a lot.
Tagging and adding guests and adding hosts, all this new stuff that CMS has,
as it just wasn't there before.
We were 90% done,
which meant we only had 90% left to complete.
And I don't want to act like it was Gerhard's fault
that we didn't launch right away.
It was all us.
But timelines are funny.
And then Gerhard, to that email,
first of all, you were very interested.
You were very excited about making the approach public because that's something that you've
been about open source and sharing what you work on
and sharing your findings. And you sent me a
list of questions. Neither one of us probably remember the list, but
I know because I just looked it back up. You sent us 17 questions to get
started.
That sounds like me.
I'm trying to paint the picture of what it's like to work with Gerhard because you're very thorough, you're very goal-based,
and you made sure that we documented everything.
Take us a little bit through the process of how you went from,
okay, here's somewhat of a random stranger on the internet
that I'm going to help
deploy an Elixir Phoenix application with a Postgres database to having a finalized system
that is our infrastructure and that works. How do you even know where to start with me?
Well, I suppose the first step was making sure that we keep the what from the how separate, because there
are two distinct things, and for someone such as yourself, Jared, it's very easy to mix the two, right?
It's natural for you to think about the how, how you are going to achieve something, in the
context of what you're trying to achieve. So for me, the first thing was to focus and drive out all the answers to my what questions,
just to understand what you're actually trying to get out of this. Because if you just wanted
to deploy an application, you would have done that yourself. It was a lot more than that,
right? So you needed the entire infrastructure to be configured and set up in such a way so that it can be easy to update, to manage, and so that
you don't have to worry about a lot of the details that go underneath. And I think, for me,
the most important thing was to understand your approach and also how you see infrastructure and
how you see deployment. Because based on that, we could have gone so many
different ways, right? Just look at Deliver, eDeliver, Capistrano, and Chef and Puppet, all
these tools which are used for deployments. There are so many ways of
skinning the cat, but the point is figuring out what works for you, what you're comfortable with. You keep alluding to the old school approach and to the bash scripts.
And this was something that was easy for me as well.
It was a mental model that worked well for me.
I was comfortable with it, and I had already used it in a couple of production deployments.
These are just like WordPress websites and just different applications,
backend applications. And it was something which I worked on over the years and it just made sense
for me. So I knew that it would make sense for you as well. So knowing the how part fairly well
and understanding the mechanics, I could steer you towards the what and driving out the important things, such as, for example, backups, right?
Is availability important to you?
How important is it?
And what limitations or what constraints do we have
when it comes to the infrastructure?
Do we have API calls or do we have, like,
servers which come and go on a daily basis
or do we have something which is more permanent?
And all the 17 questions which you mentioned
were just like a conversation starter
about how you see the world
and how you see changelog working in the big picture
because you went from an old infrastructure,
an old setup,
which was working well in some respects,
but it had some drawbacks. So you wanted to address some of the drawbacks, but also keep a lot of the things
which worked well. So this wasn't like a greenfield project, in the sense that you already had your
workflows, your changelog workflows, and we had to build something that would support those workflows. Yeah. Yeah, just to give a bit of an idea, the questions coming back were not
like, which version of Postgres are you running and stuff like that. It was these big picture
goals. Like, what do you, for instance, I do have the list here, just we don't have to go
into detail on these, but what would you like to happen when the website goes down?
Are you set on Linode? Because that was something that I had mentioned in the email
and that one's, I guess you could say, a little bit specific. But do you want to see
logs from specific or all components? Stuff like that. So it's like
it's higher level questions than you would expect at first, even though there's
many of them
And they were conversations around, like you said, conversation starters around what we need,
and not necessarily how we want those needs to be fulfilled.
Exactly. And also, this fits fairly well in the process which I'm very familiar with, specifically using a backlog
of stories, where you separate things into features, stories which have business
value; chores, which are stories that are required for the well running of the
team and the project; and also bugs, right? Regressions, or stuff that you have already delivered,
that you've already gotten points for in terms of business value, however
some bugs were introduced and stuff which used to work no longer does.
So in that process, we used Pivotal Tracker, which just embraces this process of learning
and discovery and sharing what you learn.
And that worked well for us because we were a distributed team.
It was just a few of us.
We had very limited time.
I mean, I, for one, I only had half an hour every day.
That was it, right?
That was on a good day.
So what can you possibly do, achieve, right, in half an hour, and do that every single day, so that in like a month, two months, you
get to this point where you can switch the infrastructure on and you have all these
big picture goals dealt with and addressed? So knowing what to focus on from
a user perspective, right, like your users, and you yourselves are users of this thing,
helped me prioritize things
and helped me just figure out what makes most sense, right?
Because as I say, there are so many approaches
and they all have their own merits.
And based on what I knew and based on the constraints,
it was the sensible thing to do, the right thing
to do. As years go by, I've realized that you have to accept that you will never
know everything, you will always discover something new, and you'll have forgotten things, right? You will make mistakes. So working in a way that embraces this is really helpful.
Also, sharing your learnings and sharing your decision making and involving everyone else
around you is also very important.
So in that respect, having, you know, in our case, Pivotal Tracker, as I said, having that
tool to capture all this context, to capture all the commits, to focus on what we're trying to achieve and to keep us on point every single time.
I would start in the morning.
I would have that half an hour to do what I was trying to achieve.
And also having the conversations and having this, um, delay, right?
There was a time delay because I'm in London, right?
And you're in... I was waiting for you to say it. I'm in Omaha, Nebraska.
Or, as you called it in email, Oklahoma. Oklahoma, that's it.
Yeah.
Which is relatively close enough.
And I know Adam was there in the picture as well.
I think you had also someone working on the design of the website.
So you had like all these people, some haven't even met, right?
I mean, we two have, but I haven't met the others.
And I haven't actually even like talked to them like over emails.
I might not even be real. I could be fake.
Exactly right. Yes. How do I know that? You could be like the next version of Siri or Alexa.
Adam's alternate personalities.
Exactly. So working with a team like this, it required an approach that would make sure that everything we've decided and everything we did is always there, right? When you forget, or when you wonder why something was done in that way, or why is this thing missing, there's always something to
go back to and to try and understand, right? To understand why we made certain decisions.
And a lot of the teams and a lot of the code bases which I've worked on and with are lacking this,
which is so important a year or two in, right? People change, they come and go, and
the tools change, right? So how do you preserve the original intent, and how do you preserve what
matters over time? And this is one solution. I'm not saying it's the best; it's the one which works
well. So I'm prepared to change it when I learn that
there is a better approach. But for the time being, given the constraints and given the goals,
this worked well, I believe. Yeah. So Gerhard has a talk he's given called Not Working Together,
which we'll link up the slides to that. They're on SpeakerDeck. We'll link that up in the show
notes as well. Gerhard, is that a talk that was taped? Do we have a video of that? Or are you going to do an encore presentation? Or what's the
deal with your talk? I'm not sure if it was recorded, the talk. It was only a 10-minute one,
which is a fairly short one. It was given at the London Ruby User Group. And it was given,
I think, November, I believe, I'm not sure, of 2016.
And it was trying to capture exactly this.
How do you approach working in a team which doesn't work together?
I pair every single day.
And I've been doing that for many, many years.
And I switch teams on a regular basis.
And having worked with many different teams of different sizes, I've been working with Pivotal and for Pivotal for many years now.
I've been working for IBM for a while. And they themselves have been consulting for all sorts of small and big companies and enterprises. So: how do you keep the context and
keep everyone involved and engaged, and keep the information flowing, right? And the knowledge
flowing and the learnings flowing in a way that makes sure that when you finish a project or when
you move off, all that knowledge isn't lost. So that taught me a lot about how to approach
things in the way that it's like a team effort, right? It's
not one person, it's not one approach, it has to work for everyone. So it's
sometimes it's more difficult than it sounds, but ultimately it works really well when you engage everyone around you and you make sure that everyone is committed and involved in everything that we do.
And how can you do that?
You can't have rock stars and superstars just going off on tangents and doing their own little thing. So in this way, right, sharing everything, being able to switch approaches, and being able
to consider other things, I think, is necessary. So coming back to our own little setup: in comparison
to a lot of the projects, most of the projects that I've actually been with, this is very, very simple
and very small, and it has to be approached differently. So you cannot use, well, you could,
but it doesn't make sense to use a big platform as a service like Kubernetes
or Cloud Foundry, or even Docker Swarm or Mesos. There are so many these days. They
have their own place and they have their own advantages. But in this context, considering the old school elements,
it just didn't make sense.
Yeah.
Well, let's pause here.
I have two thoughts.
First of all, we're going to get into the infrastructure
that we came up with and talk about some of the details
so that people can see how it all works
and why we came to those particular conclusions.
But also, I'll tee this up for you
and we'll ask it on the other side of the break,
but we talk about this list of questions
that you started asking.
And I told Adam I very much hopped on the Gerhard train
and I was like, okay, just take me on your process
and I will follow it,
kind of kicking and screaming along the way, of course.
But the question that I think
people are probably thinking or I would be thinking is, oh, you know, Gerhard knows which
questions to ask. But at a higher level, how can I get to a point in my understanding where I know
what questions need to be asked in order to come up with something that fits my scenario?
Coming up after the break, we ask Gerhard how he knows what questions to ask when setting up an infrastructure that has particular needs. We also talk about why Pivotal Tracker, believe it or not, is a crucial tool for his process.
How we're using Docker and the distinct units that make up our CI flow. Stay tuned.
This episode is brought to you by Hired. Hired matches outstanding people with the world's most
innovative companies. At Hired, your dream job is waiting to apply to you.
Instead of endlessly applying to companies hoping for the best,
Hired puts you in control of when and how you connect with interesting opportunities.
The best part is Hired is completely free to you.
It won't cost you anything.
In fact, they pay you to get hired.
Head to Hired.com slash changelog.
Don't Google it.
This URL is the only way to double the hiring bonus to $600.
Once again, go to Hired.com slash changelog.
And now back to the show. Gerhard, what I teed up for you before the break,
which I'm still curious about,
is how you know what questions to ask
when you're tasked with, you know,
set up an infrastructure for this particular need.
You asked me 17 things.
That was just the kickstart.
I'm sure there's lots of questions
that you asked throughout the process, but how do you even know where to start? Because if I can
know where to start, then I won't need somebody else to ask me the questions. Yeah, I don't need
you anymore. But how do you know? Is it just that you've experienced it so many times? I won't
answer for you. Go ahead. How do you know which questions to ask? I think a big part of that is experience, definitely.
But also, you have to go back to the first question, which is what you're trying to achieve.
What are you trying to achieve?
What are you trying to do differently than you're already doing or you already have been doing?
So are you trying to continue the same process
or trying to introduce a new process?
What matters to you?
What is important?
And a lot of the times,
just people don't answer that basic question.
And obviously, they can never achieve, right,
what they intend to achieve
because they haven't even stopped to think,
what is important to me?
I mean, if you
start looking at the tools and picking your tools before you even know what you're trying to achieve,
how can you be successful? So I suppose focusing on what really matters, such as, for example,
do you care about daily backups? Do you need daily backups? Okay, so that's like one thing. Is there like any legacy
content to migrate? Okay, what type of content? And it's just trying to understand the problem.
Basically, it makes you outline where you are and what you know,
and be very clear about what you don't know. Right. Yeah. And another, for instance, on that.
So just thinking back to some of our situation specifically with Changelog,
which these are things that Adam and I know almost inherently
because we live it, right, in our work and in our process,
is we know things about what we need that you don't know
as a third party coming in.
And so one thing that I said to you, which I think you keyed off on early on, is that if we have a couple hours of downtime, now we're going to be mad about it.
A couple of minutes will be fine. A couple of hours might be upset, but our business doesn't
tank, right? It's not like Amazon where for every second they're down, they're losing X millions of
dollars in revenue. And so I said something to you like, this is our goals, this is our needs.
Of course we don't want any downtime.
We want to know about it as soon as the website is down.
But if the website does go down, it's not going to put us out of business.
Unless it never comes back up again.
And in fact, we had a little bit of downtime.
More frequently.
If it happened frequently, then it might be bad.
Sure, but just knowing that we don't need those five nines or six nines, or I don't know how many nines people need nowadays, was something that informed you on the type of solutions that you could come up with.
And frankly, in that case, things that we don't have to do, which other people might have to.
And so we can come up with something less complex than you would otherwise
if we required that always on. So insights like that, that's why your questions were like,
what would happen if the website goes down? Do you want daily backups?
You also asked us about legacy content. You asked us about existing relationships. So
we have service providers that we work with, Linode, Fastly, others.
And so we had conversations around those things because, of course, that's going to limit certain choices as well.
But let's get back to before we get into the guts of it, the Docker and the Ansible and the Concourse CI and all that good stuff and the way it all works. Let's get back to the process a little bit because on the other side of the break you mentioned
working remotely together and the
situation with the constraints that we were in
and how we use Pivotal Tracker
to communicate basically
this process.
This is not meant as an ad for Pivotal Tracker by any
means. This is a tool that Gerhard likes
and I was happy to use.
Tell us, because people hear
we use Pivotal Tracker,
and perhaps that's just like,
okay, well, that's fine.
That's how you do things.
Where's the real goodness out of that?
But for me, it was the way you went about using Pivotal.
And this would work just as well in another tool like Trello.
But it's how you use it,
which to me was unique
because I've been on lots of projects that use tools like these.
And you had a certain level of thoroughness and particularness with how we went about it that ended up, as we were doing it, I was thinking, man, he sure is a little bit, how do I say it kindly, persnickety?
That's not kind.
Particular about how this tool is being used, right?
But at the end of the day, I saw, oh, there's a lot of value there because now we know everything's in there.
Go ahead and break out for us how you go about this communication
with Pivotal and why that was so important to success.
So I think, first of all, I've used many different tools,
but Pivotal Tracker is the one which embodies the extreme programming process the best.
Understanding the extreme programming process is important.
And Pivotal Tracker is the tool which is a means to an end, not the end in itself.
And that's important to remember.
It doesn't matter what you use.
It's just a tool, right? It's your process, which you mentioned. So for example, separating the what
from the how, right? What we're trying to achieve and how we're trying to achieve them, just keep
the two separate. So when you define the stories, the units of work, the perspective is always what you're trying to achieve.
So not being prescriptive about how that will be achieved or how people should go about their jobs.
Right. That's not how to approach a story.
It's always focusing on who benefits from this and what are they benefiting from in the first place.
So describing the why: why are we even doing this
in the first place? And once you do that, once you have like this beginning of a story, then you have
the place where all this context can be attached, right? So for example, developers, engineers,
software engineers, they go and like make changes, make code changes, and the commits, right, how they
change code can be linked to a story.
And that is very, very important
because then you can see how things are changing
and the context, the business context
in which they are changing.
Also, we always have conversations, right?
About things, how we're approaching things,
we're making decisions all the time.
A lot of them are not worth capturing,
but some of them, especially the crucial ones,
and again, like you need to, I don't know,
just be sensible about it, I suppose.
I mean, not everything is important,
but you will know when something is worth mentioning.
And I think a lot of it is discipline, to be honest.
You know, taking the time and having the discipline
to capture those things, and trusting that eventually
someone will be very, very thankful that you've done that.
It's, I suppose, the same approach as with commits, right? I mean, maybe those are easier to understand.
When you do your commit summary, how do you do your commit summary, and why do you do your commit
summary? There are some very good blog posts out there which go into great detail about this, but the point is the same,
right? Knowing why you are doing certain things and why they're important, and having been in situations
in which you wished there was more information, you wished the why was captured. Why did this happen?
I can see how it happens, maybe, if I can understand the code, but why did it happen? And a lot of the time, the why always gets missed, whether it's the business why, the code why, or the infrastructure why. But it's very important, because we have this workaround in some places, we have many workarounds actually in quite a few places, but there's always a good reason. And the people that did those workarounds were not stupid. They were not trying to make
your life difficult. They had to make certain trade-offs. Understanding what those trade-offs
were and understanding why they chose something is the most important thing, not what was chosen.
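As a made-up illustration of that point (this commit is invented, not from the Changelog repositories), the summary line says what changed while the body captures the why:

```bash
# Invented example: the subject says what changed, the body says why.
git commit -m "Serve MP3 files through Nginx instead of the app" -m "
Serving media straight from the application process tied up
request workers during long downloads. Letting Nginx handle the
file transfer keeps app workers free for dynamic pages, at the
cost of one more moving part in the proxy config."
```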
Yeah. I mean, I think the word discipline that you said there is excellent.
I wish I would have thought of that,
because that's exactly what I was trying to describe.
In fact, you're so disciplined with the way that you put everything
into the project in Pivotal Tracker that I even wondered,
was this guy in the military or something?
Because it's like that level of discipline that you don't see
in too many people with the use of a tool.
Like, I'm going to use it this way, I'm going to use it this way every single time, and
everything is going to go in the way that we plan on it.
And I think it's because you think about the Pivotal Tracker project differently than I
was, and that other people perhaps do, in that I didn't see any value in it as an artifact, as a documentation or
a reference point once the project is over with or once it's moved on to other phases.
I think of it, maybe it's because of the way that I use even Trello, it's almost ephemeral,
right? Like things come and go and they move around. Adam and I open and close boards all the time.
We use them undisciplined.
But you were using it in such a way that either it lives in the code
or in the commit history
or it lives in the Pivotal Tracker project
as open and closed tasks and chores
with conversations and histories.
And we're going to use that
and we're going to refer back to things.
It's documentation at the end of the day.
And I've actually used it that way
since I've needed to see,
why did we do this?
And I go back into the project
and I see exactly why,
because like you said,
the whys were captured in there,
but they wouldn't have been
if you hadn't been so disciplined
because I would have emailed you
or talked to you about it on Skype and forgot about it much later.
Exactly. I think that summarizes it really, really well.
And I think everything starts from a very simple concept.
And that is, it's not about you.
It's about everyone else around you. So if you're doing your job in a way that will always, always benefit the others, then that will start changing the way others around you approach their work.
And they will behave the same, which means that you will benefit from what they do.
So the selfishness is removed from the process and that changes the team dynamics in a way that
I think it makes the team and the workplace a great, great place to be in. Everything is
pleasant, everyone knows everything, everything is easy to find. If you forget a thing, it's fine, you
can always go back. If you make a mistake, it's not a problem,
right? Because everything around it is built in such a way that either someone will learn from
that and will improve things, or you will just discover something that no one has thought about
before. So you can't make mistakes, not any mistakes which are bad, right? Everything is a
learning opportunity, and everything you do,
you're sharing it with everyone around you.
It does take more effort.
It is more difficult,
but it's so much more satisfying.
I mean, even open source, right?
All the tools which you use,
it's other people doing things for everyone else
because they believe it's the right thing to do.
One thing on Pivotal, I guess for me, is I use this tool
and again, as Jared said, this is not an ad for Pivotal Tracker
although I do have some extreme appreciation and
respect for the tool because it requires you, Jared,
as you mentioned, the discipline, the attention to detail,
the particularness,
it requires that of anybody leading a team through this, whether it's two people or 10 people,
there is a way you use Pivotal Tracker that gets you the result you need, which is
thoroughness through a process. And I've used it in an agile process. And with two people,
I'm just kind of curious, and maybe this is going too far into the weeds, but, you know, why is this tool the choice you choose?
Is it the feature set of the tool, or is it because it's so rigid, and it's how you use the tool to get to the goal of completing a feature set or something like that?
It has its drawbacks and it does have its sharp edges, as any tool does.
However, from all the other tools out there, it's the one which, as I said, embodies the XP process the best.
And shifting our focus a little bit from Pivotal Tracker to something else that we're using, which is like Ansible and Docker, both of those.
They're simple. For example, Docker: it makes it really, really easy to get started, and it handles compartmentalizing state and ring-fencing dependencies really,
really well. It has a lot of features. It's added so many
features in the last months; most of the new features I'm not even familiar with.
But the point is, when it comes to sandboxing the runtime and making the runtime reproducible,
Docker does it really, really well for developers, right? It's really easy to get started with it.
Now, Docker is, well,
it's a lot more than just like a container...
I can't pronounce this word.
This is a difficult one.
Containerization?
Containerization, is that correct?
That's right.
I think you were trying to say Moby.
Moby, yes.
Well, now it's called Moby.
You're right.
I still go to Docker.
I do too.
In fact, I was looking,
as Gerhard, as you know,
our Docker instance had an issue.
We had a bug and we couldn't,
our deploys were failing and I was looking,
this was right when they were at DockerCon
a few weeks back or last week.
I don't know when it was.
And they had just renamed it to Moby.
And they redirected docker/docker to moby/moby.
And I was in the middle of trying to find a Docker bug.
And all of a sudden I found myself on this Moby repository.
But I hadn't seen the announcement yet.
And I'm like, what is going on? Who's moby/moby?
I didn't know what was going on.
Somebody was messing with your DNS, man.
I didn't know what was happening.
And then I went and checked Twitter.
I'm like, oh, okay.
Now I know what happened to me.
There you go.
Anyways, you were saying containerization.
Yes, that's correct.
Containerization.
So when it comes to containerization,
it's really simple and easy for developers to just use it.
So there are other technologies, such as, for example, Garden, which I'm fairly familiar with. Maybe most of the
listeners aren't. But the point is, Docker isn't the first one, but it's the one which made it
really easy. And even though they've added a lot of features, which I think are moving it away from
what it used to be, it's still the easiest way
to get started it's very self-contained it's fairly predictable yes it does have its bugs
as we've discovered and as we've seen yes there is like some fragmentation and some things which
you know i wish they were better but overall it works well and we didn't have a lot of issues with it. We had some,
right? It's impossible not to have any
issues because it means you're not using it, you're not changing
it, right? You're not updating it. So we have
come across a few bugs.
Did it take the system down?
I don't think so.
No, it did not.
The deploys failed, yes.
There were some instances of changelog.com which piled up.
The database had too many connections,
but it was fairly easy to just stop it and start it
and get the pipeline unstuck, and off it went.
Right.
Same thing now.
Real quick, real quick.
I know we moved a little bit past Moby,
but while we're still kind of in the situation of Moby,
Adam, we should mention that we have Solomon Hykes
confirmed to come on Go Time on May 18th.
Yes, that'll be a live show.
A live show.
Go to changelog.com slash gotime to subscribe.
Or do you guys know we have a master feed?
changelog.com slash master.
Just get all of our shows.
They're all good, right?
They're all good.
Get them all.
If you're listening to this before May 18th, subscribe.
If you're listening after May 18th, well, you missed a live show.
But the new show, the published show will be coming out soon.
It might even be out there. So go listen to that.
All about the rename and that whole deal.
So just mention that as a sidebar,
Gerhard, real quick,
before we get to the next break,
because we're going to talk about on the other side
what went well, what didn't go well.
But one thing we haven't done yet
is just to give the lay of the land
with regard to what is the changelog.com infrastructure.
So if you had to give a lightning talk
about not how we went about doing it,
but what we ended up with,
describe to somebody what it is and how it works.
Give us that. We know there's Ansible, we know there's Docker,
but explain it to Adam like he's five.
That's right. I'm five. Help me out.
All right, Adam.
So ci.changelog.com is what manages
all our infrastructure and the application lifecycle as well.
It's powered by Concourse.
It's this newish CI.
And the runtime, as I've already mentioned, is Docker.
Ansible does all the heavy lifting. We have Ansible playbooks, and they
capture the configuration for a specific host type: the application host, for example,
or the CI host. Those hosts get configured accordingly.
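As a rough sketch of that idea, with hypothetical inventory, playbook, and role names rather than the actual files in the infrastructure repository, each host type maps to its own group and play:

```bash
# Hypothetical inventory: one group per host type.
cat > hosts.ini <<'EOF'
[app]
changelog-app ansible_host=203.0.113.10 ansible_user=deploy

[ci]
changelog-ci ansible_host=203.0.113.11 ansible_user=deploy
EOF

# Hypothetical playbook for the application host type.
cat > app_host.yml <<'EOF'
- hosts: app          # only hosts in the [app] group get this configuration
  become: true
  roles:
    - docker          # container runtime for the app, nginx, and postgres
    - backups         # the daily tarball-to-S3 job
EOF

ansible-playbook -i hosts.ini app_host.yml
```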
When it comes to the application, our listeners already know it's Elixir, which is running on top of the Erlang VM, which runs inside a Docker container.
And PostgreSQL is the database.
Elixir connects to PostgreSQL.
And Nginx, there's Nginx in front.
Nginx proxies requests to the Elixir application.
And in front of all of that, we have a CDN. We're using Fastly for that, and that's fronting and distributing
all the static content, all the MP3s,
all the episodes and all of that.
There are two repositories, both hosted on GitHub.
One of them is the infrastructure repository
which contains all the code and basically all the glue
holding all the services together
and the other is the application, which is already open-sourced:
the changelog.com application.
When it comes to the services that we use,
all the credentials, they're stored in LastPass.
That's where all the credentials are stored.
And when we configure our CI,
we pull credentials from LastPass.
Either one of us can just pull the credentials
via the LastPass CLI.
We configure the CI
using this tool called Fly.
It's self-contained and it's
very easy to use and fairly self-
explanatory. And I think that's
it. What did I miss, Jared?
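Roughly, and with hypothetical names for the LastPass note and the pipeline files, that flow looks like this:

```bash
# Pull shared secrets out of LastPass (note name is an assumption).
lpass login you@example.com
lpass show --notes "changelog/ci-credentials" > credentials.yml

# Point fly at the Concourse instance and push the pipeline config,
# letting credentials.yml fill in any ((placeholder)) variables.
fly -t changelog login -c https://ci.changelog.com
fly -t changelog set-pipeline \
  -p changelog \
  -c pipeline.yml \
  -l credentials.yml
```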
Hmm. I think
backups, you didn't cover
backups, but you said the CI.
So one thing I'll just be clear about is that
Linode is our host,
and so we have Linode VPSes,
two of them.
The main one, which is changelog.com,
and then the one that hosts the utility application,
which includes the CI.
Just worth mentioning that each of our distinct units,
our application server, our web server, and our database are separate Docker instances on that one same host.
It's a pretty beefy server, though, the main one.
Yeah, nice and fat.
And it's underused, greatly underused, I have to say.
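A hypothetical sketch of that single-host layout; image names, versions, and paths are illustrative, not the actual configuration:

```bash
# One user-defined network so the containers can reach each other by name.
docker network create changelog

# Database container; data lives on the host so the container stays disposable.
docker run -d --name db --network changelog \
  -v /data/postgres:/var/lib/postgresql/data \
  postgres:9.6

# The Phoenix application, talking to the db container by hostname.
docker run -d --name app --network changelog \
  -e DATABASE_URL=postgres://changelog@db:5432/changelog \
  changelog/app

# Nginx in front: the only container publishing ports on the host.
docker run -d --name proxy --network changelog \
  -p 80:80 -p 443:443 \
  -v /etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx
```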
Yes, one of the advantages of this, and this has been a desire of mine very much from the start,
is we may never have to scale out.
We're not fully utilizing the server as is,
so we can probably just scale up
and may never have to scale out,
just at the traffic levels that we get,
the speed of Elixir,
and the beefiness of the Linode VPS that we're on.
And I think that's excellent because it keeps our infrastructure very simple.
And the CDN in front makes sure the content is properly distributed and all that.
Yes.
So that is a big, big advantage.
Also, the backups, right?
Having a single instance, it's really simple to just basically archive everything,
like the whole application, the whole database, everything, the whole lot.
And you have this massive tarball, and you store it on S3.
It's a full backup.
It's self-expiring, which means that after so many days or so many weeks,
you can configure your S3 bucket where we store the backups to self-expire the objects.
It makes it really easy to just pull any backup
and just restore it,
and you have a full copy of the entire changelog.
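A minimal sketch of that one-big-tarball idea, with assumed paths and bucket name (the real job is wired into the CI pipeline):

```bash
STAMP=$(date +%F)

# Dump the database first so the archive holds a consistent snapshot.
pg_dump changelog > /backup/changelog.sql

# One tarball with the database dump plus the uploaded assets.
tar czf "/backup/changelog-$STAMP.tar.gz" \
  /backup/changelog.sql /srv/changelog/uploads

# Ship it to the bucket; an S3 lifecycle rule handles expiry.
aws s3 cp "/backup/changelog-$STAMP.tar.gz" \
  "s3://changelog-backups/changelog-$STAMP.tar.gz"
```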
Now, it would not be too difficult
to store it on different hosts, right?
If you had a database host or whatnot,
but it would still mean like more components
and you have like network in between,
all that stuff, which complicates things.
So it's simple.
We could have downtime,
but it'd be fairly easy to redeploy
and reconfigure everything,
the entire changelog on any VM
or on any cloud instance.
Doesn't matter where,
doesn't matter with which provider.
That was my next question, was thinking like a listener might be thinking,
well, you chose Linode, which is a great partner of ours,
but if for some reason that relationship changed
or for whatever reason another cloud was better for us
or for whatever reason we needed to move,
whether it was for redundancy or a simple migration or whatever,
being able to move to a different cloud.
So that was part of the 17 questions, I'm assuming,
or part of the early requirements set?
Yes, that is correct.
I did ask if you want to use multiple cloud providers for redundancy
or if you have any preference.
However, you will need backups, right?
I mean, it doesn't matter who you're
hosting with, you do want to have full backups. I mean, data corruption, data loss, all sorts of things
can happen, right? It's not just downtime that you have to worry about, and full redundant
backups, you know, stored off-site, are important. Now, not everyone can do them, right? Some systems are too big,
and you just can't do them. However, for the Changelog, and for the majority, 99%, of the systems
out there, you can do full backups. Yeah, and that's why you have the whole push towards
microservices and all that, where you have smaller components, because the components are getting too big, have too much state, too much knowledge,
and too much responsibility.
And it's really difficult to have all this in a single place.
And how do you even back that stuff up?
How do you recreate it?
How do you scale it?
So it's different trade-offs.
But we definitely do not need a distributed, you know, always available, always on sort of system.
So why have one in the first place?
I'll also mention that our AWS bill is quite inexpensive too.
I just searched my email real quick to look at the latest bill, and it was $4.76.
So you're saying that they self-expire, these backups.
How many days do we go back for self-expiration?
Do they eventually just delete themselves, or how does that process work?
Do we store just endless amounts?
Do we have last year's backup there?
How far back does it go?
So I think when we configured it,
I think Jared configured this, but we decided to go with seven days. So every day, daily, right,
we take full backups, and I think they're close to 11 or 12 gigs, and so you have seven times that.
And when you set up the bucket, you can version the bucket, and you can configure
different policies on the bucket, different options. One of them is just expiring objects
which are older than a specific time period. So that means you don't have to manually delete things,
or even have an automated tool or a CI job or a cron job, whatever might be the case;
AWS takes care of that for you, which is fairly simple.
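One way to express that rule, assuming the bucket name and using the seven-day window just mentioned:

```bash
# Expire backup objects after 7 days; the bucket name is an assumption.
aws s3api put-bucket-lifecycle-configuration \
  --bucket changelog-backups \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-old-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 7 }
    }]
  }'
```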
All the data coming into AWS S3 is free, so you don't pay for that. If we were to restore from backup, you know, the bill in that
month would be slightly higher, but it doesn't happen a lot. Yeah. That's a bill you gladly pay,
too. It's like, we say it ourselves: thank you, here's the money. Exactly, yeah, exactly.
So when it comes to restoring from backup, that's something which
we need to automate. We did have a couple of stories to do it manually, just to make sure that
everything works and, you know, it's still correct, so to speak. But we still
have some outstandings, such as automating this restoration
process. As you know, your backups are only as good as your restore process
is, and right now it's manual, but it's documented, well documented.
And it's used. And it's used, exactly, because I think you use it fairly often, Jared.
Yes, so it's manual right now, but I will pull down
our full backup. By the way, it's about 17 gigabytes now, so it's growing.
But yeah, we just have seven or eight of them up there.
I'll pull down the whole thing and extract it into my development environment and use it.
So for two reasons.
First of all, it's a manual process to make sure our backups are still working,
which is worse than an automated process, but is better than no process.
And secondly, because I like to develop with real data
that has recent episodes
and all the imagery that we've uploaded and stuff.
So I will do that, not on a weekly basis,
but every other week or so.
Man, I would love to have that.
I didn't even know you were doing that.
I want that.
I need that.
I'll hook you up.
You know somebody, right?
I do.
I know a guy behind the guy.
We can ask Gerhard how to do it.
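A hedged sketch of that manual restore-into-development flow, with names matching the hypothetical backup sketch above rather than the actual documented steps:

```bash
# Grab the newest backup from the bucket.
LATEST=$(aws s3 ls s3://changelog-backups/ | sort | tail -n 1 | awk '{print $4}')
aws s3 cp "s3://changelog-backups/$LATEST" /tmp/

mkdir -p /tmp/restore
tar xzf "/tmp/$LATEST" -C /tmp/restore

# Rebuild the local database from the dump, then copy the uploads over.
dropdb --if-exists changelog_dev
createdb changelog_dev
psql changelog_dev < /tmp/restore/backup/changelog.sql
cp -R /tmp/restore/srv/changelog/uploads ~/changelog/priv/   # hypothetical dev path
```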
After the break, we talk about why Gerhard chose Ansible and Docker
over something like Kubernetes.
We also talk about our potential lock-in to the hosting provider we chose.
Linode, as you may know, is a partner of ours,
and everything we do at Changelog
is hosted on Linode servers.
But the question is, are we locked in,
or are we free to move to another hosting provider
if we want?
This question brought up our tie to Docker.
Are we locked in to only using Docker?
Stick around to find out.
This episode of the Changelog is brought to you by Microsoft and Azure Open Dev Conference.
The event is over, but all the talks are streaming on demand right now.
Head to azure.com slash open dev. This conference is focused on showcasing
open source technologies in the cloud.
Learn how you can build containerized microservices
and improve your open source DevOps pipeline.
Hear from community leaders like Gabe Monroy
from Azure and Deis,
Michelle Noorali from Kubernetes and Helm,
and Scott Johnston from Docker.
Learn about app platforms, containers, DevOps, and more.
All this is provided at no cost.
Once again, head to azure.com slash open dev. so Gerhard in the in the break we often ask some questions and before the break Jerry was teeing up
sort of this retrospective look back to see what went well you know what didn't go well and kind
of like see how we can move towards the future. And that got me thinking about the episode that we haven't released yet,
which is coming up very soon on Kubernetes.
And I've just been thinking about this whole conversation.
What you've built for us seems very bespoke, very particular to us.
And since that Kubernetes conversation, I've been thinking,
why didn't Kubernetes fit for us?
So maybe in your own words,
what questions do you often get asked whenever you build something specific like this
that isn't Kubernetes or Cloud Foundry or X,
whatever it might be?
So when I started with Ansible and Docker in 2014,
Kubernetes, I think it was only just starting
or not even started.
It was like very early days.
The idea existed, but I don't think the product existed.
Kubernetes came a really long way when it comes especially to statefulness, right?
Handling stateful data.
I think it came a long way.
When it comes to data services and like PostgreSQL specifically, I mean, even now, I believe Kelsey Hightower
recommends not running any database inside a container.
We do that, and it works well.
It's all right.
It is a production system.
We didn't have really any issues,
but I know there are many, many reasons
why it's a bad idea, right?
But it works, right?
In some cases, it works.
So with the latest Kubernetes version, I believe it's 1.6,
I think it's almost there as far as we are concerned.
So we're very close to being able to actually use it.
It's still, the question of complexity still remains.
I mean, Ansible is fairly simple.
I mean, when you look at what we have,
I know it's not public yet,
but I hope it will be very soon.
It's fairly simple, and yet still things go wrong.
The Kubernetes surface area is very big.
The community is different, of course,
and I'm not suggesting that you can even compare
Ansible and Kubernetes,
but as solutions for what we need, Ansible was easier at the time, right?
I knew it better.
I had less questions about it.
It fit the old school mentality fairly well.
I think now that Kubernetes is a lot more mature, we can start considering it.
And, you know, very slowly, we can start migrating components across.
Why not?
But you will still have these extra layers of things
and all this documentation to go through and understand
and to keep up with changes as they happen.
What we have is simple, imperfect in many ways,
but it works.
I'm just getting to learn our current stack.
You're going to switch me to Kubernetes all of a sudden?
Well, I'm not pushing for it.
I guess I'm playing the advocate out there who's listening to the show thinking,
I've been hearing nothing but good things about Kubernetes
or other systems out there that essentially help you.
They basically took the 17 questions, I'm assuming,
this is all assumptions, assuming that the 17 questions that Gerhard asked originally
was something that the overall community of Kubernetes asks or something like that asks
when saying, let me automate your infrastructure.
Let me build out your infrastructure and make it very command driven, as we heard about
on that show,
rather than maybe not so much the complexity,
or even in your case where you say simple, Gerhard,
to not have to do it yourself, or think through it yourself, and reinvent the wheel each time.
You seem like a master craftsman with a lathe,
where Kubernetes, maybe it's generic,
but it's a general-purpose system that fits a lot of problems pretty well.
And it's evolved over the last year.
Yes, I definitely agree with that.
All these platforms as a service, I mean, I work on one day in and day out for many years now, which is Cloud Foundry.
There are some similarities.
Again, they're not the same, right?
It's difficult to compare because they're not like apples to apples. The point is,
just as VMs and all the virtualization used to be a great thing and new thing and this exciting
thing many, many years ago, so are all these platforms as a service. And they're maturing and they're coming to a point
where it's easy to run your WordPress.
It's still not straightforward.
You still have to jump through a few hoops
and adapt a couple of things.
And I think for a lot of people out there,
they do make sense.
But when it comes to Elixir, when it comes to Erlang,
when it comes to Phoenix, even though we don't use
a lot of the Erlang VM features,
I think because of all the content which we have
and the way our workflow works,
it makes it easier to use something like this,
because it is very, very bespoke.
And when you're trying to do migration such as this,
I think you need to be careful as to how much you're trying to do migration such as this, I think you need to be careful
as to how much you're changing.
Every single thing should be a stepping stone, right?
It shouldn't be too big
because you will never finish it, right?
It has to be small enough.
And yes, it will never be the right thing
because you don't have time
to keep up with the right thing.
As long as you're moving in the right direction,
that's what matters.
And I think we are, right?
We are using containers. They work. We might not be using Kubernetes. However, we're
much closer to using it than we were a year ago. I think that's the point.
Yeah. Yeah. I mean, if I had set this entire thing up myself, we would be tied specifically to
the VPS, because I would have installed the entire system directly on the quote-unquote hardware.
And a migration to anything different
would have been a huge undertaking
that we probably would have said,
ah, it's not worth our effort.
But with this system,
Gerhard has set us up to have the flexibility
and the capability of moving not just hosts,
but container platforms?
Maybe not container platforms.
I don't know, Gerhard, could we
switch off of Docker?
Yes, you could. You definitely
could. Docker is not.
One thing which I'm trying
to emphasize, and maybe I'm doing it poorly,
is that we haven't
been focusing on
the stack. We haven't been focusing on the stack, right?
So we haven't chosen Docker for its features.
We've chosen Docker for what it offers us
based on the goals, right?
So based on the goals and based on where we were,
it was an easy step.
It was an easy transition.
I think that's important.
Making steps small enough and manageable enough
so you can keep doing them consistently and you can keep moving towards where you're trying to get to because that changes in itself.
Changelog today, I'm sure, is very different than it was a year ago, right?
You have more shows, you have more content, you have more listeners.
So the Changelog landscape is changing, right?
Pun intended.
Same thing for all, you know, all the containers and all the platforms.
And one day, I do hope that my quote-unquote homegrown system can be replaced easily with something that just handles all the complexities itself.
I would very much like that: to not have to work at a low level, but at a higher level.
And as soon as systems are generic enough and easy enough to consume, we will do that.
Because it's always... it's like a flow, right? It's never static. You've never arrived.
You have to keep moving and you have to keep going, because the landscape is changing.
So you need to put yourself in a position where it's easy to do that,
where it's easy to respond to change,
where it's easy to shift and move.
And Kubernetes, yes, it is the biggest and the greatest thing today,
but I can guarantee that two, three years from now, it won't be.
There'll be yet another big, great thing.
If we've learned anything in the years of computers,
it's that every 5-10 years
the landscape is completely
different.
That's the hard part.
I mean, come on,
why aren't you guys using React?
And then by the time I say that,
it'll be Preact, or it'll be
the next big thing.
There's always a next big thing.
And we like to talk about the next big thing.
And sometimes we like to use the next big thing.
But oftentimes, life is better two steps back from the edge.
And you watch the edge.
You keep up with the edge.
You talk about the edge.
But you talk about it from a little bit of a distance.
The safety.
Yeah, yeah.
And it's all about needs, right?
Like, if you said,
hey, let's switch to Kubernetes,
my first question would be, well, what do we gain?
Like, what besides being able to tell people,
oh, it's built on Kubernetes, and that would be cool?
So you gain the cool factor.
Let's not discount that.
It's real, right?
Especially when you run a media company
all about keeping up with open source software
you don't want to get too far behind
but what would be the tangible benefits of doing that,
and can we measure those against the cost of change?
And so where I would have ended up is,
usually the ROI on those changes would have been far too low,
because the change would have been expensive.
Whereas where we are
now, which feels very nice,
as Gerhard has said,
we've reduced the cost of change
by keeping things simple, small,
and containerized as possible
so that if we do
have a huge benefit, maybe I'm
wrong and all of a sudden we need
40 instances
across multiple regions
and we're going to scale out real fast.
Well, I think Kubernetes is probably positioned
to do that better than our setup.
But what's the actual gain of switching?
Those are the kind of things you got to ask yourself.
If you think where we came from though,
which is sort of what a retrospective is,
where do we come from?
We used to deploy a WordPress theme via rsync. We version-controlled the theme itself and shipped
that to GitHub, but it was essentially just an rsync push, so to speak, through SSH to the server,
and it dropped what was there and replaced it. That was us updating the site before. In terms of
how it worked, it was WordPress, and
then you had to go in and do WordPress themes and plugins and things like that. So nothing against
the WordPress landscape or the stack; it's great for some people, it just didn't fit our
future. That's where we came from. It just didn't fit the hacker-to-the-heart mindset
that we have now. Whereas now... Gerhard, thank you. You've definitely taken us into the future. So no knock at all
against what you've done. It's just the comparison; some people
are going to be thinking, okay, you did this, so why go bespoke versus going,
in quotes, mainstream with Kubernetes?
Right. Well, I think Kubernetes isn't mainstream.
It's becoming. It's certainly the name now.
Amongst large corporations.
You see all the big names using it, but where are the one-off
agencies, the small startups? I think they'll get there as well.
This is that year, I bet.
Okay, you heard it here first, Adam Stachowiak. On the record, this is that year.
This is that year.
This is that year. So I think it's easy... it's very easy and it's very exciting to focus on the tools and to focus on what is cool. But it's easy, very easy, to forget, and even I do that myself,
that they're just a means to an end. They're not the end in itself. Just as I said about
Pivotal Tracker before, it's not about using Pivotal Tracker. It's about using something which helps you
follow a process which has proven itself time and time again, and it fits really well most
development scenarios and most teams. Same thing with Kubernetes or Cloud Foundry or whatever else,
whatever the next big thing will be.
I mean, right now, they're still very... they still require a lot.
I don't know the Kubernetes stack
as well as I know the Cloud Foundry stack,
but there are many, many complex parts
which are just, like, put together,
and they have a very specific role,
and they mostly work well, but sometimes they don't.
And I get to see a lot of
the scenarios when they don't work well. I get to debug those scenarios and it is complicated,
right? You need a lot more people to maintain a system like that. So the question would be,
why do I want this six, seven hosts, or however many VMs, deployment of a platform as a service, just to manage
this one application? It doesn't even make sense. Of course it doesn't. So then you go to, like, a
hosted solution. And there are many companies which already do that; they offer hosted Cloud Foundry,
hosted Kubernetes. We could go to Google directly and say, hey, can we just run this on your
Kubernetes deployment? That would be ideal. And you would find that the migration from where we are now is easier than if we
had come straight from the WordPress days. So it is just a
transition, and we are moving towards the future. However, as Jared said, the future is always
a few steps ahead, and you have to, you know, pause and just be mindful of the landscape, because it keeps shifting and moving.
And what was great yesterday or today, it won't be so great tomorrow.
And there will always be drawbacks.
And we never know all the failure scenarios because it hasn't been in production long enough.
So it's discovering all these things, letting it mature, as some say,
and then when we are comfortable that we're gaining enough,
making that change, but always focusing on what is important for us,
how easy it is to achieve that, and is it worth it?
Because maybe it never will make sense to go to Kubernetes.
Maybe what we have now is enough.
Real quick, we're talking about our migration from WordPress
and where we come from.
Humble beginnings, Adam.
If you recall, before WordPress, there was Tumblr.
And so WordPress had a lot of Tumblr baggage
in the form of a whole boatload of redirects
from old Tumblr URLs to WordPress URLs.
Now, Gerhard, you know we brought with us a lot of redirects as well.
Some of the fun of getting a new system set up was you and I
debating about how we go about implementing those and where they fit into the stack.
Because, of course, with NGINX in front and the application routing
and a CDN, we could deal with those in multiple places.
We ended up having Nginx do it.
But Adam wanted me to bring the Tumblr redirects along with us too.
I'm a pack rat.
Yeah, a little URL hoarder; he wanted to keep those redirects
from years and years ago.
And I had to show him, no one's hitting those, man.
That's not worth the effort.
But we did bring with us a bunch of WordPress redirects,
which I hope was worth the effort.
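For context, the Nginx approach they settled on can be sketched roughly like this; this is a minimal illustration, not the actual changelog.com config, and the paths, mappings, and upstream name below are hypothetical:

```nginx
# Minimal sketch of legacy-redirect handling in Nginx (http context).
# The legacy/new paths and the "phoenix" upstream are illustrative only.
map $request_uri $new_uri {
    default                   "";
    /2016/old-wordpress-post  /posts/new-slug;
    /post/123-tumblr-era      /podcast/42;
}

server {
    listen      80;
    server_name changelog.com;

    # Permanently redirect when the URI matches a legacy entry...
    if ($new_uri != "") {
        return 301 $new_uri;
    }

    # ...otherwise pass the request through to the Phoenix app.
    location / {
        proxy_pass http://phoenix;
    }
}
```

Keeping the map in Nginx means redirects are answered before a request ever touches the application, which is one reason to prefer it over handling them in Phoenix's router.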
Real quick, because we're getting low on time,
Gerhard, let's do our quick retrospective.
And the three of us together can talk about what went well,
what didn't go so well,
where we're lacking, where we're
great. What are your thoughts?
I think our
biggest achievement was
capturing
the entire lifecycle
of the infrastructure
in a pipeline.
We have Concourse to thank for that:
Concourse CI, whose URL is concourse.ci, a very convenient domain.
And it allows us to quickly assemble a view which makes sense
to us. So it's easy to see when something works and when something doesn't. And it's easy to
just, like, queue another backup, right? Queue another deployment, if you want to do it manually.
A lot of the stuff is automated. It just happens. You don't
even have to think about it. And I think that is
a big shift. So going from rsync
to something which not only
does it for you, but it gives
you a big, like a bird's
eye view. It's so valuable.
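To make "queue another backup" concrete, manual triggering with Concourse's fly CLI looks roughly like this; the target alias, URL, and pipeline/job names are hypothetical stand-ins, not the real Changelog pipeline:

```sh
# Log in once and save a named target for this Concourse instance.
fly -t changelog login -c https://ci.example.com

# Queue a one-off run of a job, e.g. a backup or a deploy.
fly -t changelog trigger-job -j changelog/backup
fly -t changelog trigger-job -j changelog/deploy

# Follow a build's output in the terminal instead of the web UI.
fly -t changelog watch -j changelog/deploy
```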
Absolutely. And some of
that, which we didn't explain
exactly how all that works, but
every time we push to master on
our public GitHub repo, Concourse kicks in and runs
that pipeline for us and does a redeploy. So we very much have that
desired workflow from a developer's perspective: just let me git push and get
out of my way. But we also have the insight. Like you said, there's a
dashboard that Concourse provides.
We can go and watch the pipeline run, watch it fail, watch it get stuck, whatever it's going to
do. Most of the time it passes, and we react as necessary. So that's very cool. The
backups work very well. We have notifications that work very well with regard to deploy success and failure,
site down, we're using Pingdom for that.
There are a few other checks in there,
not just for the website being pingable,
but also for some performance,
like how many milliseconds does it respond with,
that kind of stuff, which is all provided via Pingdom.
All of our logs are in a single place,
which is really nice.
So anytime something's going wrong, we can go to one dashboard; we use PaperTrail for that.
Everything logs to PaperTrail instead of into a syslog or into their own containers or what have you.
And so we have a unified logging interface, which is great when things go wrong as well.
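One common way to get container logs into PaperTrail is Docker's built-in syslog log driver; the sketch below shows the general shape, with a placeholder PaperTrail endpoint and a hypothetical image name, and it may not match exactly how the Changelog setup forwards its logs:

```sh
# Send a container's stdout/stderr to PaperTrail via Docker's syslog
# driver. logsN.papertrailapp.com:12345 stands in for the per-account
# endpoint PaperTrail assigns; the image name is hypothetical.
docker run -d \
  --log-driver=syslog \
  --log-opt syslog-address=udp://logsN.papertrailapp.com:12345 \
  --log-opt tag=changelog-app \
  thechangelog/changelog.com
```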
Those are all features that I'm not sure
if we mentioned all of them so far.
And all that works really well.
So in terms of things that work well,
all of that works really well.
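As a rough sketch of that push-to-master flow, a Concourse pipeline pairs a git resource with a job that triggers on new commits; the resource, job, and task-file names below are hypothetical, not the actual pipeline definition:

```yaml
# Minimal pipeline sketch: a git resource tracking master, and a job
# that runs automatically on every push. Names are illustrative only.
resources:
- name: site
  type: git
  source:
    uri: https://github.com/thechangelog/changelog.com.git
    branch: master

jobs:
- name: build-and-deploy
  plan:
  - get: site
    trigger: true            # kick off the job on every new commit
  - task: test
    file: site/ci/test.yml   # hypothetical task definition in the repo
  # deploy steps elided; the real pipeline does considerably more
```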
I like when I push to master and it just goes.
It's really great to be...
I'm less in the backend code than you are, Jared,
so I play more of a wingman,
very much an assistant on our application,
mostly on the design front,
usually the content front, things like that.
So it's nice to not have to have done something complex
to be like, okay, can I deploy?
Whereas in past applications,
without this kind of CI pipeline,
I had to have an SSH key and, you know, my machine to be configured properly, or something like that. Whereas now it's like, okay, I push to master and I walk away.
Exactly. And that's just, like, the surface.
Every single CI, like any modern CI, will do that for you, but Concourse goes a lot
deeper than that.
And a lot of the features which are like at its core,
we're still not using.
For example, every single build runs in its own container.
All the containers that we use to run the builds,
they are using Docker images,
which we produce, which we maintain,
and they're like on our Docker Hub account.
So it embraces this concept of container,
and everything that runs inside Concourse does so in containers.
Now, when it comes to Docker and the image format,
most don't know that Concourse actually is using Garden.
So it's not using Docker.
It is using Docker images, but it runs the containers inside Garden.
And that's actually the containerization technology behind Cloud Foundry.
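A Concourse task config makes that "Docker images, Garden containers" split visible: you name a Docker image, Concourse fetches it, and then runs the task inside a Garden container built from it. A minimal sketch, with a hypothetical image and command:

```yaml
# Minimal task sketch. Concourse pulls the named Docker image, then
# executes the run step inside a Garden container, not the Docker engine.
platform: linux

image_resource:
  type: docker-image
  source:
    repository: thechangelog/build   # hypothetical image on Docker Hub
    tag: latest

run:
  path: mix          # e.g. run the Elixir test suite
  args: [test]
```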
So it just goes to show that it was a choice that we made early on, and we made it knowing this fact, or
me knowing this fact. And we have this
Docker image format and the container
image, which allows us a level
of flexibility and a level of repeatability, which was not possible before. And that's why whether
we use Docker or something else, it doesn't really matter. And we choose and use different things
based on what makes sense. But the container itself, I think, is here to stay. So whether
it's going to be Cloud Foundry or Kubernetes or whatever
else, the container as a concept
is very, very valuable, and
we are definitely
on the bandwagon,
on that hype, the container hype.
We're on the container hype.
Exactly. Yeah, get on
the hype train. So real quick, because we're short
on time, Gerhard, in your eyes,
and I'll share one as well, but what's
something that isn't working so well
or didn't work well
through the process?
I think one thing which
I wish... Sorry,
do you want to go first?
No, I was just going to make a joke about
my SSH keys. Go ahead.
Yes, that's the one thing.
Jared was too old school
for some of the new stuff.
A lot of my...
my SSH key
was so old school
that a lot of the
more recent versions rejected it.
I do remember that.
Yeah,
yeah.
It was,
I literally couldn't log in.
I got rejected
because my keys,
my keys were too old.
By...
not just by our new stuff,
but by other servers
that I've been managing.
All of a sudden,
I could no longer log in,
because I think a recent Ubuntu update or something,
a recent version of the SSH daemon,
would not accept my keys any longer.
So talk about kicking and screaming.
Gerhard actually rolled his eyes pretty hard when he realized I'm going to leak my SSH key.
He says, how often do you rotate this?
And I said, rotate it? Why would I do that? Oh boy, then I've gotta change it... I've gotta change it on all the servers I control.
But that was a fun one.
Yes, you're right, that did not work well. That did not work well.
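For anyone hitting the same wall: newer OpenSSH releases disable old DSA (ssh-dss) keys by default, which is the usual way a key turns out to be "too old". A minimal rotation sketch, with hypothetical hostnames and comment string:

```sh
# Generate a modern Ed25519 keypair (supported since OpenSSH 6.5).
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -C "jared@example.com"

# Install the new public key on each server you still need to reach.
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@server.example.com

# Once the new key works everywhere, remove the old public key from
# each server's ~/.ssh/authorized_keys and retire the old private key.
```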
No. Um, I think one of the things which I wish worked better,
I mean, it didn't work so well,
is that this homegrown stuff,
the homegrown stack,
had some pain points,
specifically around some components not interacting very well together.
We had the Docker incident, I believe.
And I wish,
well, it's a difficult one
because the pipeline and our system could not have, like, healed itself, so to speak.
I mean, healing just basically means stop it and start it again; the majority of the time, that actually fixed it.
Um, I don't know... I haven't used it as much as you have, Jared,
so I think you or Adam would be in a much better place to answer this question.
Right.
Yeah, so I think from our perspective, and mostly it's mine because Adam, like you said, plays more on the front end.
And when things are not going so well, it usually means it's me dealing with them.
I think that mostly it's not really technology that didn't work well.
I think that it's knowledge transfer. So we have everything in the Pivotal Tracker that we did and that we set up,
but it's not like it prognosticates problems and how to fix them, right?
So basically, it's that because you are the architect of this system
and I've been following along,
but not necessarily internalizing every aspect of it like you have,
when things do go
wrong i'm very much either uh you know kind of troubleshooting in the dark which i've been doing
that for many years now so i can you know i fix i can find my way out of problems but then i feel
like oh man now i gotta bug gerhard because i can't figure this out or I'm afraid to live update this or stuff like that.
And so mostly it's social and not technical where I've felt that feeling of like, oh,
I don't want to nag this person, but I don't have the confidence to make this change.
I just want to make sure that I'm correct before I do it.
And so I've had a few of those instances, but not many.
Well, in response to that, I do have to say that I really enjoy this stuff.
I really love infrastructure
and how everything fits.
And that's why I just love doing it,
even in my free time, right?
Because I learn things
and I do things differently
than I would do in my day job.
I do hope that before long,
we will move to a hosted system.
So if we moved away from infrastructure
as a service and towards a platform as a service, such as hosted Kubernetes or hosted
Cloud Foundry... and again, there are many companies which do that, some better than others. But the
point is, if you have people that, you know, manage this, and it's like a well-documented and well-known
process... Heroku, for example,
that was maybe the first platform as a service, the first popular one for sure.
So if you use something like that, some things would be easier and you wouldn't have these
problems. But at the same time, you would need to invest a lot of time moving your workflow,
which requires, like, local state and local storage, to something where everything is stored in different systems,
and you have stateful services
and you query them and stuff like that.
So it would complicate the application a lot, right?
And you need to do things differently than you do today,
which you're very comfortable with.
So you have to ask yourself, what do you want to change?
Do you want to change how you deploy things
and how everything fits together?
Or do you want to change your day-to-day change log workflow?
I just want to change me.
You want to automate you.
I just want me to be...
No, if I just...
The more and more I get it... that's what I was joking about earlier,
that I'm just learning this stuff, Adam, because I am.
I'm figuring it out.
I'm getting to know more.
I move slowly towards a level of autonomy where I have the confidence that you already have and I know the solutions that you may already know,
so that when things do go wrong, I just understand how to take care of them.
That's what I would prefer.
That's why it's not a technical issue with the process or the solution or anything.
It's just kind of an institutional thing that takes time to fix.
Because I don't love infrastructure and management of those things, but I do enjoy them enough
to keep things simple
and to have us focusing more on what we love to do,
which is really producing content
and creation around software development
and enriching the lives of developers.
I just want to get the confidence that you have,
and I think over time that just happens for me.
So that's what I want.
And if I don't want to do that...
oftentimes for my clients,
I just have them set up on Heroku,
because that way they don't rely upon me
to play that role that I'm currently relying on you for.
It's just like, I'm stuck, help me.
So it's great in that sense.
But for us, I love that we can have so much control and, you know, get our hands dirty
with regards to how we run everything.
That'd be an interesting conversation to earmark, Jared,
is the developer experience aspect of like Kubernetes
or the Gerhard way, which he
did for us, or
something like Heroku, which obviously
they're all sort of different workflows
for different types of applications and
where the pros and cons lie.
Yeah. Well, I think
we're hitting up against our time here, Gerhard.
This was a great conversation. Of course,
maybe we didn't say it, but thank you so
much for everything that you've done for us and with us.
It's been a heck of a ride,
and I'm glad that we've made a new friend out of it.
In fact, I got to visit Gerhard in London
when we were over there for OSCON,
so I got to go to the Pivotal offices
and play some ping pong with him,
show him how to play ping pong.
I remember it differently. I remember it differently... but go on, you're the guest. Sorry, sorry... you're the host, I'm the guest, so
we'll just go with it.
That's right, it's our show, so we can say what we want. And so, yeah, just
from us and from the Changelog, thank you so much for everything that you've done. It's really awesome.
Any last thoughts from you on this, or key takeaways, or anything you'd like to say before we close up?
I've enjoyed it more than you think.
So I got a kick out of it because I got to help you
and I got to see, validate some of my ideas
and some assumptions which I had.
As a key takeaway to our listeners,
I would say always, always focus on what you're trying to achieve. That would mean knowing what you're trying to achieve
and do not focus on the technology and don't focus on your tools because they're a means to an end.
They're not the end in itself.
You have to know them and you have to love them.
But they're just there to help you.
Well said.
Well, Gerhard, thank you so much for your time today.
Thank you for all you do for the open source community.
And especially thank you for what you've done for us here at Changelog.
It's been awesome working with you.
And thanks for coming on the show today, man.
It was my pleasure. Have a good one, everyone.
I'm sure I was ranting
for a while, especially when I go into the weeds.
Just cut that stuff.
Just honestly.
Alright, thank you for tuning in to
the ChangeLog and also thanks
to our sponsors who make the show possible.
Sentry, Hired, and also Microsoft with their Azure OpenDev
conference.
Also thanks to Fastly.
They are our bandwidth partner; head to fastly.com to learn more.
We host everything we do on Linode servers,
check them out.
Linode.com slash changelog.
You can find more episodes like this at changelog.com or by subscribing wherever you get your podcasts.
Thanks for listening.