PurePerformance - 067 Redefine Testing for Microservice Delivery Pipelines with Chris Burrell
Episode Date: July 30, 2018
How can you get your build times down to minutes? Exactly: eliminate the largest time consumer! At Landbay, where Chris Burrell heads up technology, this was end-to-end testing. Landbay deploys their 40 different microservices into ECS on a continuous basis. The fastest deployment from code to production is 6 minutes. This includes a lot of testing – but simply not the traditional end-to-end testing any longer. Chris gives us insights into contract testing, mocked frontend testing, how they do Blue/Green deployments and what their strategy is when it comes to rollback or roll-forward. For more information, watch Chris's talk on 6 Minutes from Code Commit to Live at µCon and his lightning talk on CDC Testing is Dead – Long Live Swagger.
https://twitter.com/ChrisBurrell7
https://skillsmatter.com/skillscasts/10714-6-minutes-from-code-commit-to-live-you-won-t-believe-how-we-did-it
https://skillsmatter.com/skillscasts/11147-lightning-talk-cdc-testing-is-dead-long-live-swagger#video
Transcript
Discussion (0)
It's time for Pure Performance.
Get your stopwatches ready.
It's time for Pure Performance with Andy Grabner and Brian Wilson.
Hello, everybody, and welcome to Pure Performance International Edition.
My name is Brian Wilson, your co-host and as always with me is Andy Grabner coming to you live, recorded, from...
Where are you today, Andy?
I'm in lovely Krakow in Poland.
Ah, yes. Yes.
Very good.
Very good.
You've been traveling quite a lot?
Yeah, I've been traveling.
Well, it's my second time here, but I've been traveling quite a bit in the last couple weeks and traveling continues for the next three weeks.
And yeah, but it's a lovely city here.
A lot of history, which is true, obviously, for a lot of places on this side of the Atlantic. But yeah, happy to be on the show again. And Brian,
today we have a special guest. Well, we always have special guests, as we say, right? Right. But we like to single out some of them to make the others feel worse, I think. Before you go, I did want to clarify the international edition that I said before.
It's really meaningless.
We're almost always international.
But I felt now that you're traveling and you're based in Europe and our guest is also in another European, although they might not like to consider themselves European, another European country.
We have a very, very, very widespread representation today.
Yeah, exactly.
And with that, Chris, are you there with us?
Yes, I'm here.
Perfect.
Hey, Chris, do you mind giving a quick introduction about you, who you are?
And then I want to kind of come back and say why we have you on the show.
Yeah.
So, Chris Burrell, I'm currently in London.
I work for a startup that we've been
going for about four years and we're in the fintech space. So peer-to-peer, which I believe
started in the UK, then went over to the States and then had a kind of rebound in the UK. So
I joined them about two and a half years ago and kind of head up the tech here.
Yeah, that's pretty cool. And you have, I mean, the two of us,
we met a couple of weeks ago in Barcelona
when you were actually giving a talk at Perform
at our conference.
And you explained what your architecture looks like,
that you moved towards microservices,
but you focused a lot on your build process
and your CI-CD pipeline.
And then one thing struck me,
this is why I reached out to you
after the show, after the presentation, and said, we need to get you on the podcast. Because you said, you know, we kind of got rid of testing, or at least end-to-end testing, in our pipeline,
which allows us to actually build in a matter of minutes. And so that struck me because we've been
talking a lot about testing and how important testing is and testing, testing, testing, and now you stand there on stage and basically
say, well, we don't need this end to end testing anymore. We just deploy into production, I
guess, and then figure out based on the feedback, if it's good or not. Now I know it's not that
dramatic or maybe it is. So I would like to pass it over to you and maybe explain a little
bit how it got to that point.
If you've always built, let's say, within minutes or if it was different before and then how you got to the point where you are right now and what the best practices are and lessons learned.
Yeah, absolutely.
So I wouldn't say we've got rid of testing.
So that would be quite scary for our customers.
Like, what are you doing?
We need to know our money is safe.
But what we have done is rethink the kind of testing that we do.
So in previous companies I worked for and also at the beginning of LandBay,
you've got the standard thing where you deploy to a staging environment or a UAT environment.
Then you kind of run your Selenium tests.
We had a Selenium grid running thousands of tests there.
And then once that's ready, you get a bit of feedback, maybe decide to break the pipeline and start again because the tests have failed.
Sometimes the tests aren't quite stable, so some people run them two, three times to see what's the best outcome.
And eventually you would go to production. and really that's uh that's kind of been driven from um different waterfall strategies usually
and then it's kind of poured in i think to agile uh but if you look at a pipeline as a whole i
think we well we looked at it and our testing was like 45 minutes 50 minutes and uh even with
some paralyzation um and that that's just too long for us. We don't really want to be remembering who kicked off a build when
and what features are in the build.
So that's one side.
The other side is actually
when you're looking at microservices
and lots of different components
going in all at different times,
we didn't really want to create a dependency
on all of our microservices
kind of reaching a point where everyone stalls.
Because that in itself creates delays.
That means you need a release manager.
That means you need to know what's on your environment
at any point in time.
And so the faster your deployments are,
then the faster, well, the less headache you have, really.
So that's where we started.
Cool.
And so in the,
I'm actually looking at a blog post right now on Skills Matter, and I definitely want to put the link up on the podcast page. And as you just said, right, you had a starting point, and it seems the starting point, or the pivotal moment, and I'm not sure if this was an accurate conversation, but it reads very interesting when you look at this page, is where the developer says, I'm done.
Andy, shall we act it out? Shall we do it?
Okay. Do you want to be the developer? You'd be the developer, because I could do a good boss voice.
Yeah, okay. I'm the developer, you're the boss, and then at the end, obviously, we're all the team, okay?
Okay. That's good.
Okay, I'm the developer.
Okay.
Hey, I'm done with the code.
I just need to test and deploy now.
Great.
When can I see it live?
I believe at the end of the sprint, right?
Two weeks?
Why does it take two weeks to put a few lines of code into production?
Hmm.
Actually, I might have found a way to make it six minutes.
Sorry, everybody.
It's okay.
Isn't there...
Actually, I just thought about it. You know, if you remember Star Trek, wasn't it always, you know, it takes me two weeks, I give you an hour, and then Scotty did it in two minutes anyway?
Yeah.
It would be like another cool extension to this.
And I just realized, for Chris's sake, I should have channeled my Baron Greenback. It was Baron Silas Greenback from Danger Mouse.
Danger Mouse!
Anyhow, that's my Britishism for you.
Sure. Sorry.
I don't know if you know that one. That might be before your time.
I grew up in France, actually,
so a lot of English stuff
is lost on me.
There you go. Okay. Anyway, back to...
So, yes, six-minute deployment.
So, yeah, Andy, go on.
Yeah. So tell me, Chris, what else did you do? What were the individual steps that you really took, and what are some of the lessons learned?
Yeah, so I guess the first thing we would look at is time to build, right? So the default stats Jenkins gives you tell you exactly how long it takes, but if you can kind of stage it, you know exactly where your time is spent. And I think everyone would agree browser testing is basically the slowest component.
So based on both the headaches of stability, having to deploy all of the services, having to kind of get them all ready at the same time, we kind of thought, right, what can we do differently?
And everyone knows that the faster the tests are, the better for the developer.
And the earlier the tests run, the less representative they are of what the customer sees.
So we actually took that approach by saying,
okay, well, we're going to test contracts.
And there's a whole host of patterns out there at the moment,
like consumer-driven contracts,
which is kind of the acronym CDC.
So we looked a little bit at that.
That wasn't so mature when we first started.
But essentially what we've done is we've broken down our services to contracts.
And by contract, we mean a REST interface usually or a message interface.
And then we'll test black boxes what those contracts should be.
So I'm sending an HTTP request and I expect an HTTP request out that we can mock
or I send a message and I expect an HTTP request.
So we've built a little bit of framework with Cucumber
and some of those common tools there.
But really, those calls are much faster.
So rather than having to stand up a browser and a Selenium grid,
what we've got is we just spin up the Docker image
and fire messages at it, and it's very, very, very fast.
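As a rough illustration of that black-box contract check (a minimal Python sketch with hypothetical endpoint and field names, not Landbay's actual Cucumber framework):

```python
# Minimal sketch of black-box contract testing: fire a request at a
# service and verify the response matches the agreed contract.
# Endpoint, status and fields here are made up for illustration.

CONTRACT = {
    "path": "/loans/{id}",
    "status": 200,
    "response_fields": {"id": int, "amount": float, "state": str},
}

def fake_service(path):
    """Stands in for the real service running in a Docker container."""
    return 200, {"id": 42, "amount": 150000.0, "state": "FUNDED"}

def check_contract(service, contract):
    status, body = service(contract["path"].format(id=42))
    assert status == contract["status"], f"unexpected status {status}"
    for field, ftype in contract["response_fields"].items():
        assert field in body, f"missing field {field}"
        assert isinstance(body[field], ftype), f"{field} has wrong type"
    return True

print(check_contract(fake_service, CONTRACT))  # True if the contract holds
```

In the real setup the `service` call would be an HTTP request fired at the freshly built Docker image rather than an in-process function.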
So our fastest service at the moment
deploys in about six minutes
and our slowest one deploys in about 12, I think.
And that's because it adds a whole host of other stuff to EC2, like an API gateway, with AWS.
So really cutting out that bit and thinking, right, we've taken a lot of risk by cutting it out. How are we going to mitigate it? Ideally earlier, because it's faster, but you may also need to mitigate some stuff later. And so kind of monitoring tools and fast feedback loops are quite important to that.
So these contracts are interesting, because it reminds me a little bit of what Acacio Cruz has been telling us about what they do at Google, and also the concept that we have, which we call monitoring as code. Coming back to what Acacio said: when they develop a new service, they also sit down and define who is allowed to call it and what else this service can call.
But then also they list the key APIs and endpoints.
And then based on that, on the fly, generate the test scripts,
which are then automatically getting executed as part of the pipeline.
It seems you're doing the same thing.
Yeah, so we're doing something very, very similar.
So we're using Swagger to generate our contracts.
So we write our contracts in YAML
and generate a JavaScript client, say,
or a TypeScript client for our Angular apps
or Java clients.
And that means if we're versioning that,
we can actually test backwards compatibility
against those at build time.
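For readers unfamiliar with the format, a YAML contract of the kind Chris describes might look roughly like this (a generic Swagger 2.0-style sketch with made-up endpoint names, not Landbay's actual definition):

```yaml
# Hypothetical Swagger/OpenAPI-style contract sketch
swagger: "2.0"
info:
  title: Loan Service
  version: "1.2.0"
paths:
  /loans/{loanId}:
    get:
      parameters:
        - name: loanId
          in: path
          required: true
          type: integer
      responses:
        "200":
          description: A loan
          schema:
            $ref: "#/definitions/Loan"
definitions:
  Loan:
    type: object
    properties:
      id:
        type: integer
      amount:
        type: number
```

From a file like this, code generators can produce the Java, JavaScript or TypeScript clients mentioned above.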
We don't even need tests, right?
You can check that if you're adding a parameter,
that's okay.
If you're removing a parameter and there are any clients using that contract, then you can check that that fails at build time. So that's one way we're testing it, and obviously compile time is the best thing for the developer, because they don't need to wait at all. And then the second thing we're doing is basically agreeing at design time: the developer picks up a story and, with the rest of the team,
kind of working out what's an appropriate API,
what's an appropriate interface for this.
And by that, we've kind of got models for our events
and we've got the swagger, so the API.
And so then you can just test it.
You can just say if I'm expecting a REST call
with X parameter with this kind of value here, and if the database is set up in this way that we're mocking out, then this is the response.
And so this is, yeah, it's very quick.
It's much faster than having to put, say, four microservices live and set up a whole load of test data, and ensure you've got some login accounts for your users, and then make sure the tests don't interact with each other, which is a big problem for us. And then you have to actually navigate five, six different pages in your site before you reach your use case, then you click your button and then you've got your result. And here we kind of skip the entire thing.
and so for our front end as you mentioned you've got some front end services and there what we're
trying to do is mock out all of the REST calls it makes.
And then we might drive some of those use cases with something like Selenium.
But again, that's very fast because it's a mocked data layer there.
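As a rough stand-in for that mocked data layer idea (a generic Python illustration, not their actual Angular/Selenium setup), the frontend's REST client can simply be replaced with canned responses:

```python
from unittest import mock

# Stand-in for a frontend's data layer: in the real setup this would be
# the Angular app's REST client, mocked so browser tests need no live
# backend services or test data. Names and amounts are made up.

def fetch_dashboard(api_client):
    loans = api_client.get("/api/loans")
    return f"{len(loans)} loans, total £{sum(l['amount'] for l in loans):,.0f}"

api = mock.Mock()
api.get.return_value = [{"amount": 150000}, {"amount": 250000}]

print(fetch_dashboard(api))  # 2 loans, total £400,000
```

Because no real HTTP call happens, the UI behaviour can be driven very quickly against deterministic data.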
So you basically really eliminate the whole challenge of building a complete system and making sure that all of it is connected and available. So you really just look at each individual part of your architecture, front end and individual microservices, and you test them in isolation based on all the contracts.
And if I understand this correctly, Chris, and correct me if I'm wrong, but
what I understood: if I have, let's say, a new build, I made a change to an API, you generate the new client. So with that new client you can test the new version, but you can also use, I would assume, the client that was generated in the previous build. So you always test whether the previous version of the client, which may have been generated for the previous version of the API, is also still working? Or how would that be?
Do I assume this correctly?
There's a couple of things there.
So the stuff we're actually working on right now,
so testing that clients are backwards compatible.
So if you've got your contract,
and well, actually, let's take the server case first.
You've got your contract the server's abiding by, and you want to enhance the server. The server might have multiple clients, right? So the server wants to check that what it's pushing is backwards compatible with itself, and then it can make sure that all the clients are okay with that.
From a client perspective, when you push live, we try and check on Jenkins, when it builds, that... well, we've defined the client only to know what it cares about. So that's key, so that if the server changes, like adding parameters, the clients don't really need to know about that until they use the data. And so we will check the client's contract against the server's deployed contract, not against any code base.
So we can just, when the servers deploy, you can track what the latest API is.
And then when the clients deploy, you just get the latest and use it.
So there's some tools that we're using, called swagger-diff, which is kind of, I think, a Ruby plugin. And you just give it two Swaggers and it tells you whether they're compatible. And so some of that work we've done; some of that we're actually doing this sprint and next sprint.
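The essence of a swagger-diff-style check, where adding things is fine but removing something a client may rely on is breaking, can be sketched in a few lines (a toy model, not the actual Ruby swagger-diff tool, which compares full Swagger documents):

```python
# Toy backwards-compatibility check between two contract versions.
# Here a "contract" is just a mapping of endpoint -> set of response fields.

def breaking_changes(old, new):
    """Return a list of incompatibilities introduced by `new`."""
    problems = []
    for endpoint, old_fields in old.items():
        if endpoint not in new:
            problems.append(f"endpoint removed: {endpoint}")
            continue
        for field in old_fields - new[endpoint]:
            problems.append(f"{endpoint}: field removed: {field}")
    return problems

v1 = {"/loans": {"id", "amount"}}
v2_ok = {"/loans": {"id", "amount", "state"}}   # added a field: fine
v2_bad = {"/loans": {"id"}}                      # removed a field: breaking

assert breaking_changes(v1, v2_ok) == []
assert breaking_changes(v1, v2_bad) == ["/loans: field removed: amount"]
```

Running a check like this in the build means a breaking server change fails fast, before any client ever sees it.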
Oh, that's pretty cool. Okay. So I guess I started my work in computers in 2001, starting with some functional QA, quickly going into performance.
And, you know, I've got to admit what you're describing makes, you know, I don't do that stuff anymore right now because I work for the lovely company Dynatrace and got away from all that.
But it makes me very nervous hearing what you're saying with the past that I have.
And now I got to ask, obviously, you're here talking about this.
You've done some presentations.
You're still doing it.
So you're making it successful.
But I'm curious to know, you know, maybe obviously you've probably had some stumbles along the way.
Maybe not.
But if you have, what have those stumbles been?
And do you feel that you have it really down to a science now? Or do you sometimes wake up at night wondering if this is all going to fall apart? Because that's what it was, at least from my background, my history of coming from the older style of doing things.
Yeah. So let me actually add a bit of context around my career.
So when I first arrived in London,
I worked for a company called Detica for seven, eight years.
And the first role I did was as a functional tester, automating Selenium tests, and actually then doing performance tests with LoadRunner and things like that. So I've kind of got a sense of what QA people think about, what they care about, and what we want to test.
I think the question isn't so much are we doing the testing or not?
It's where are we doing the testing?
And so you could argue that if you've got a separate dev team and then a separate QA team,
what you've just done there is introduced a whole pile of risk
because the developers know intricately
how the thing is supposed to work
because they built it.
But the QA person actually looks at the business side, right?
So they go back to the business requirement
and check that the software developed
abides by the business rules
or requirements that you've kicked off. What we've tried to do in our team is to make our engineers responsible for both. So in our recruiting, our first question is: what do you know about Landbay? What do we do? Because if they don't know, we just won't progress to the technical side. So I think one thing around that risk is actually eliminating some of the process and the people issues. All of our people care about why we are developing this thing,
as opposed to, say, one of those old school waterfall models
where you write a 150-page functional spec and then you hand it over
and then it's good luck to whoever picks it up.
But yeah, in terms of the risk,
there have been some instances where bugs have gone live.
I think that's pretty much in line with my previous experience.
Depending on the level of quality that the company you work for wants, there are varying degrees of testing and stuff. But I would argue that relying on end-to-end testing, and then not doing other stuff, doesn't necessarily make the quality better.
So a lot of programs that I've worked on previously, so I worked on some big telcos and on HMRC, people run out of funds.
And the first thing to get cut is the testing.
People want to go live. So actually, if you're doing the testing as you go
and everything is kind of feature-driven,
and a feature for us might be three days and then it's live,
so you've actually tested the whole thing
and you can spend that time without actually compromising the program as a whole.
So yeah, so I think it's a different paradigm shift,
but I'm not sure we're introducing more risk.
And I think we do use Dynatrace at the very end
to kind of, we run our web checks
to make sure the site's up.
And there's some key functions that we test.
And then if users are experiencing issues,
then we get them.
But ultimately, I think the question is,
what is the impact to the user?
And you're happy with that.
Because if a picture is like two pixels to one side,
you're not going to be able to justify to anyone,
oh, we should spend five or six hundred pounds
on getting some testing that just tests those pixels.
And so it's always going to be a compromise
of money you spend versus the quality you want.
And I think we found the money spent earlier in the process
actually gives us higher quality.
And I think some of the difference in what you're doing is, we've heard about a lot of people, and we do it ourselves, doing a lot of testing early, right?
Not always doing end-to-end testing,
but I think a lot of people so far have been doing more of a hybrid approach
where they're not testing everything end to end.
But every once in a while, there's some sort of end to end test.
It kind of sounds like you guys have just eliminated it altogether, saying we've really solidified what we're doing on this other side, so we just really don't need that end-to-end.
I think that's a big leap that you've taken that others just haven't. I don't even know if it's the confidence, the setup or what it might be to just get rid of that end-to-end test. And it's probably not for everybody, I'm sure.
Yeah, I think one thing is the culture in the tech community that's kind of evolved. So we used to have waterfall projects. Now everyone's kind of thinking, right, agile's the next thing and we need to move to agile. But then the contracts haven't moved that way. And so if you work for a big consultancy, then there will be kind of a payment milestone after you've worked out what the requirements are, a payment milestone when it's ready for the customer to test, and a payment milestone once it's live. And the testing seems to mirror that contract, which is quite a waterfall way of seeing things. Whereas if you get away from that contract, then you kind of think, well, actually, how did we do it before? Well, we tested things as we went, and we put things live when they were ready, rather than waiting for entire release cycles of like six months, nine months. And so I think microservices is going to probably accelerate this.
And it's kind of a prediction, I guess.
But the more microservices you deploy,
the less you can wait for everything to be in one place before you test it.
And then I've got to ask, you're talking a lot about microservices.
Are you all starting to eye functions?
Yeah, so we started actually this week on a function. I'm in two minds about kind of serverless stuff. I think my main concern is we've got like 40 microservices right now, but that would be tens of thousands of functions. And so from a managing point of view, how do you remember there's a function X hanging around that's doing a critical piece of the system? And that's kind of the big argument against corporate and legacy code: there's this system that no one wants to touch, because no one knows whether it does anything or whether it's going to take a bank down. And so I think I'm waiting to see what the kind of dev environments are going to look like, what the management around those is. There's been a few interesting blogs on The Register website around security, and a lot of the serverless stuff is like, you deploy it and forget about it. But if there's a security flaw in there, then you really do need to kind of mop it up and remove it at some point. So yeah, I'm in two minds. I'm not against it, I'm not for it right now. But I think there's still quite a bit more headway to make there.
Hey, can I... I mean, I guess one of the reasons why it's, quote unquote, easier to focus on testing later, or just, you know, ignore the end-to-end testing, is because you are deploying rather frequently, like very often. That means the changes that you are deploying are much smaller, and therefore the need for doing full end-to-end tests of all the suites is obviously not that big, because the smaller the change, the less the impact, right?
At least, you know, that's the way I would
explain it. And
what would be interesting for me
though is now, Chris, when you do
deploy, you said you do web checks.
So that means you're doing some
synthetic tests in production to make
sure that at least the base functionality,
I guess, or certain key use cases
are still available and working.
But when you deploy, do you use something like Canary releases or do you do blue-green
deployments for individual services?
Is that what you are doing?
So I guess some people would say this is a cowboy approach, but we release kind of blue-green, but very, very quickly. So we keep the live services up until 100% of the new services are up, and then we take the other ones down. But we don't do blue-green in the sense of testing traffic at the moment, partly because we're a startup and there's not a huge amount of traffic to help test that, whereas I know some people have got kind of metrics that they'll collate and feed back. So no, our deployments will go straight through.
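The control flow Chris describes, keeping the old set up until 100% of the new set is healthy, can be sketched roughly like this (stubbed-out infrastructure calls, not actual ECS API usage):

```python
# Sketch of a fast blue-green rollout: old (blue) tasks stay up until
# 100% of the new (green) tasks are healthy. All infrastructure calls
# are stubbed as plain callables for illustration.

def blue_green_deploy(start_new, all_new_healthy, stop_old):
    start_new()
    if not all_new_healthy():
        # New version never became healthy: leave the old one serving.
        return "rolled-back"
    stop_old()  # only now is the old (blue) set taken down
    return "switched"

# Simulated happy path
events = []
result = blue_green_deploy(
    start_new=lambda: events.append("green up"),
    all_new_healthy=lambda: True,
    stop_old=lambda: events.append("blue down"),
)
print(result, events)
```

The key property is that a failed health check leaves the running version untouched, which is exactly why a broken migration or container doesn't take the service down.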
But what we do test is on, we do go through a UAT environment before production.
And there really what we're testing is the deployment itself. So our database migrations
are part of the pipeline. And then we'll check that the service obviously comes up, the data migration works and the health checks pass. And so we might add a few health checks here and there for some critical pieces of functionality. But yeah, so we go through UAT, check that works, and then we'll go to production. And we have added some stuff in there. On occasion it's a fairly major release of something that can't be broken down, and so we do have a manual flag, so we can turn that on on demand. But on the whole, 90 or 95 percent of our releases are automatic: kind of UAT, prod, done.
And UAT and prod, though, are separate deployments. It's not that you're switching, because there will be blue-green anyway. So you really deploy into UAT with your test data, I would assume? And then...
Yeah, correct. So we've done a lot of work around being able to get our test data back and anonymize it, whether that's for a developer or for the UAT environment. We can take it and put it back in an environment that we want. And so when we do the deployment, obviously most of our features will be new stuff.
So we won't need that much data to be representative of live.
But occasionally we've taken the data back.
That's a manual process at the moment.
I'm looking to kind of automate that.
What's your data store?
Is that RDS on AWS or what is it?
Yeah, so it's mostly MariaDB on RDS
and then we've
investigated a few times
DynamoDB and Mongo.
We had a bit of Mongo before but
that's been written out as in
we used it and then we rewrote something
and lost it.
And DynamoDB
we almost went for it and then actually
saw the complexity around the DevOps
and the controls, access controls,
weren't quite granular enough for us.
So that's a future thing.
We've got a few other data stores
like Elasticsearch or S3.
It's kind of debatable whether they're data stores or not,
but Elasticsearch, I would say, probably definitely is.
And when you test the updates,
you said you have your database updates.
What do you do in case a database update fails?
How does that work?
Yeah, so if they fail,
then they're most likely to fail in UAT.
Our database updates are always going to be
backwards compatible, and that's something we enforce
both at the kind of design phase and the code review
before someone pushes
the merge button.
And then we use a tool called Flyway,
which kind of checksums the upgrades,
so you can't change a previous migration, for example,
and you have to run them in order.
So we've got a few controls around that.
If it does fail, because we've kept the live services up,
we're actually okay, and the new container is going to, I guess,
be cycling. It's going to start the migration, but we'll get Slack notifications to make sure
we get to that very quickly. And also the Jenkins build will fail. So someone will notice that very
quickly. But the service as such isn't down because of the blue-green deployment side of things.
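The two safeguards mentioned here, migrations applying strictly in order and an already-applied migration never changing, are what Flyway enforces with its checksums; a toy sketch of the idea (not Flyway's actual implementation) might look like:

```python
import hashlib

# Toy illustration of Flyway-style controls: migrations apply in order,
# and a previously applied migration's checksum must never change.

def checksum(sql):
    return hashlib.sha256(sql.encode()).hexdigest()

def validate(applied, pending):
    """`applied` maps version -> checksum already recorded in the DB;
    `pending` is an ordered list of (version, sql) scripts on disk."""
    last_applied = max(applied, default=0)
    for version, sql in pending:
        if version in applied:
            if applied[version] != checksum(sql):
                raise ValueError(f"V{version} was modified after being applied")
        elif version < last_applied:
            raise ValueError(f"V{version} is out of order")
    return True

applied = {1: checksum("CREATE TABLE loan (id INT);")}
pending = [(1, "CREATE TABLE loan (id INT);"),
           (2, "ALTER TABLE loan ADD amount DECIMAL;")]
assert validate(applied, pending)
```

Tampering with migration V1 after it has run, or inserting a new script before an already-applied version, would fail validation before any SQL executes.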
That's pretty cool.
So that means, you said your fastest service,
it takes six minutes.
So that means six minutes from I run the build,
which means it builds it, it deploys it in UAT,
it does the web checks, and it then takes it
and does a blue-green deployment in production.
That's six minutes?
So, yeah, six minutes will be the build, so Java build.
We use Maven, so we could actually look at splitting up a bit,
maybe with Gradle eventually.
Then the service testing, so all the contract testing,
which is absolutely key to making sure the quality is up.
Then, sorry, and before that, the build of the Docker image.
Then the deployment to UAT,
and then the deployment to production.
And so, yeah, that entire thing is six minutes.
And then the first web check will run within the next five minutes of that.
So it could run immediately afterwards
or it could be up to four minutes,
four or five minutes.
I think we're going to have to tell Bernd and Anita that, Andy.
Yeah, I know.
New challenge.
Yeah, yeah. Five, six minutes to production. Because we always, you know, we talk about one hour to production. We need it in six minutes. I'll do it too.
And I think, so just one of the things is that I've often been asked, like, do you really need six minutes? And for features, it's not like someone's going to say, oh, I want it six minutes earlier, or I want it 30 minutes earlier. If they've waited a week for it to even be prioritized or get onto the backlog, then they can wait a further 50 minutes, right? But for fixing things, that's actually quite useful. So if there's an emergency thing that happens, and it happens everywhere, then actually our time to fix has now been reduced to pretty much the fix time plus five minutes to get it through.
So from a support point of view, that's actually quite a big asset.
Yeah, I think that was the same motivation that we had, right?
I mean, we release features every other week with our sprint releases, but the idea is
having a fast lane into production for exactly these emergency fixes.
So, Chris, one last question on this.
If you are – so you do your blue-green deployment.
You said it's rather fast.
So you really just basically shift the load and that's it.
What happens then if a problem happens in production after a while, say after an hour,
is that do you have any automation mechanisms to potentially roll back to a previous version?
Or do you just roll forward basically because you know you can fix things so fast and then push it through within minutes?
Yeah, so at the moment, it's the latter. So we will get notified quite quickly,
either because users are experiencing something
or we've got recognition dashboards
or support portals that kind of do automated checks
on the platform, as well as our Dynatrace solution.
At the moment, the view is roll forward.
We can roll back because all it is,
is we keep the versions of every image
that we've ever deployed
and then maybe archive them after six months.
So if someone were to log on to the ECS console,
which is what we use in AWS,
it's trivial.
It's a minute to, well, it's a few seconds
to change the revision down one
and then the service will be back to where it was.
If you're doing that, you need to be really careful, because we can guarantee backwards compatibility when we're deploying something new, but people often forget that you can't always guarantee forward compatibility.
So if you've moved your data model on, then rolling back can actually be trickier than
rolling forward. So at the moment, we mostly rely on rolling forward. And really, if you
think about it, the only things that are absolutely major, that you kind of need to fix really quickly, are user-visible issues. And even then you can restrict that. So for us it would be kind of financial issues, as well as logging into the platform. Those would be absolutely vital, and we need to get those right every time. But there's a few other things where you can think, well, actually, can it last another 10 minutes, 20 minutes? It's probably fine.
Do you do any performance testing at all, or is that...?
So, not at the moment. We have done for specific things, so specific use cases, and we'll kind of test that particular use case. So that's maybe where we'd use this manual approval flag, where we'd get a bunch of stuff into a representative environment, and then we'll test it. Some performance tests you can do locally, but that's not as good in terms of latencies and stuff like that.
So for example, we've got a first-of-month run for interest. We expect lots of interest to be paid on our peer-to-peer platform, and therefore we've rewritten that bit a few times in different ways to optimize it. So there we've done quite a bit of performance testing, but it's more ad hoc than in our pipeline at the moment.
Cool. And you mentioned ECS, so that means you're running containers on ECS.
Are you using, you're not using Fargate yet, right?
No, we're not.
So Fargate got announced just after we bought all of our reservations for EC2 instances.
So it's kind of unfortunate, really.
We would consider using it.
And I think the idea of having a kind of serverless environment is quite appealing, both because hackers can't get in, from a security point of view.
And also Fargate in particular, obviously AWS will be supporting it and patching it.
So it's less work for us to do.
So we'll kind of look at it.
But at the moment, our current infrastructure is paid for with reservations, but we're launching on a fairly large new project.
So that is one thing we're looking at for this.
Fargate's kind of their Cloud Foundry-esque one, right? Is that right, Andy?
No, with Fargate you basically don't have to worry about the underlying infrastructure that runs your containers. They run the containers for you, and they manage the underlying infrastructure, like the EC2 instances that you would normally provision and put into the cluster. So that's why it's, you know, containers as a service, or, I'm not sure what the official term is.
Yes, it's a little bit bigger than a function and a little bit smaller than a container running on infrastructure you manage yourself.
So you pay by CPU and memory rather than by time of the instance.
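The pricing model described here can be sketched as a quick back-of-the-envelope calculation. The per-vCPU-hour and per-GB-hour rates below are placeholder assumptions for illustration, not actual AWS prices:

```python
# Fargate bills per vCPU-hour and per GB-hour of memory, not per EC2
# instance-hour. The rates here are invented placeholders; check the
# AWS pricing page for real numbers.

VCPU_RATE_PER_HOUR = 0.04   # assumed rate, USD per vCPU-hour
GB_RATE_PER_HOUR = 0.004    # assumed rate, USD per GB-hour

def fargate_cost(vcpus: float, memory_gb: float, hours: float) -> float:
    """Cost of running one task of the given size for `hours` hours."""
    return hours * (vcpus * VCPU_RATE_PER_HOUR + memory_gb * GB_RATE_PER_HOUR)

# e.g. a 0.5 vCPU / 1 GB task running for a 720-hour month
monthly = fargate_cost(0.5, 1.0, 720)
```

The trade-off against EC2 reservations Chris mentions is exactly this: reservations prepay for instance time, while Fargate charges only for the task's declared size and duration.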
Yeah, cool.
Hey, Chris, I mean, this was very insightful
and I know we could probably go off
and talk more about other things that you'd like.
I'm really interested in the whole contracts and how that fits in.
But for the topic that we have today, how you redefined, as you said, testing in a microservice world, and how you got build times, or really deployment times, down to six minutes.
Is there anything else you want to let the audience know
before we wrap it up?
I think the only thing really is you want to be checking
that where you're spending your money is the most appropriate thing.
So as architectures evolve, I think it's worth re-looking at testing, for example, as well as architecture and development and all of that. So I think testing just needs to be kept in step with new architectures.
Great.
So, Andy, do you want to summon the Summaryator then?
Let's do it.
Summon the Summaryator.
Sure.
So, I mean, I learned a lot today.
And really, what you just said, Chris: you have to reevaluate your testing strategy as your architecture and your products mature. What I learned today is, if you are, like in your case, a startup, and you start with a new architecture with microservices, then make sure these microservices can be deployed independently, giving everybody the flexibility that they want.
Ensuring, though, that every service is bound to its contracts, with automatic contract validation, so that nothing breaks on a contractual basis. I really like the fact that, based on the contracts,
you also generate your tests.
So you validate with basic tests in your build pipeline
that the service works as expected.
You mentioned using Swagger UI and also Swagger Diff.
That's great to actually figure out
what has changed between versions
and then do the dependency check.
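The Swagger Diff idea, comparing two versions of an API spec and flagging breaking changes before deployment, can be sketched in a few lines. This is a toy stand-in for what the real tool does, and the spec fragments are invented:

```python
# Toy breaking-change detector between two OpenAPI-style specs:
# a path that existed in the old spec but not in the new one would
# break any consumer still calling it.

def breaking_changes(old_spec: dict, new_spec: dict) -> list:
    new_paths = new_spec.get("paths", {})
    return [
        f"removed path: {path}"
        for path in old_spec.get("paths", {})
        if path not in new_paths
    ]

old = {"paths": {"/loans": {}, "/investors": {}}}
new = {"paths": {"/loans": {}}}

assert breaking_changes(old, new) == ["removed path: /investors"]
assert breaking_changes(new, new) == []  # no changes, nothing breaks
```

A real check would also compare parameters, required fields, and response schemas, which is where a dedicated diff tool earns its keep.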
I also like the fact that you said, well, you know, let's get it out as fast as possible
and learn from the production environment.
And in case something goes wrong, we roll forward, because we can,
because we can react to a problem within minutes,
and we don't have a process where it takes us hours, days, or
even weeks to react to the problem.
This is why we no longer need to think about rolling back;
it's always rolling forward, rolling forward.
And to reiterate, one of the first things that you actually said is that when you were looking at your initial pipeline, you saw the biggest time was actually spent in browser testing, and this is the first thing you optimized away. So I think there are a lot of great lessons learned, and we definitely want to make sure that we put up the links to the presentations that you gave. And we would definitely love to have you back for more talks, because I
think there are a lot of interesting aspects and a lot of interesting things that you guys are doing that a lot of people would like to do, but maybe cannot do for whatever reason, maybe because they are still constrained by old processes or old thinking. But listening to people like you shows us that new architectures also allow new processes and new ways of doing things, different from how we used to do them in the past.
For instance, testing.
So that was great.
Thank you, Andy. I'll just, again, coming from the more novice side of things, point out how Chris mentioned that he came in originally from functional testing and is now doing all of this.
So there's always a chance to level up and really start making strides and changes.
Kind of makes me a little bit glad that I'm not on that side anymore, because I get to dabble in these things at my own leisure. But it can definitely be challenging. I think that, especially with this idea of changing the testing approach, changing the testing methodology, not doing end-to-end testing,
it makes me think of, let's say, a compass on a ship. Way back before
compasses, people would rely on, okay, what's the position of the sun?
If you were sailing at night, you had no idea where you were going.
You kind of would look at the stars, but then they started making compasses and sextants.
And you have now new technology to help you do things more efficiently.
And you would have to actually trust that that compass is pointing you in the right direction.
And guess what? It worked.
So I think, in a lot of these things, if you're a newer person coming into this, you don't have that whole history of how we used to do the functional testing, how we used to write the requirements and go through the checklists, or even do the performance and load testing in the way that we used to have to. You don't have that baggage of thought with you, or, maybe you can even call it technical testing debt. I don't know, I just came up with that term. But for anybody who's been in it for a while, I think there is a point where you have to start letting go, trusting the new methodologies and becoming part of them, and start trusting the new tools.
There's always new tools.
There's always that uncomfortable feeling with the new tool.
But with what Chris is doing with some of these things, we're proving out that they work.
So thanks for sharing that with us, Chris.
I really, really appreciate that.
And it's always fun to get challenged in your comfy chair.
We look forward to having you back on another episode soon.
Any final words from anybody?
No, I think I'm good.
Actually, one more thing, though, because I took some notes.
Highlighting another thing that you said, which I had never thought about.
You said in recruiting, when you recruit people, the first question you ask is,
do you know our company, what we do?
Do you understand?
Because in the end, we all work on one goal, which is making our company successful.
Therefore, we need to understand the business and therefore, we have to understand what
we do.
And I think that's very interesting.
I've never heard that before.
Right.
So if anyone's looking to get hired, make sure you at least look the company up on Wikipedia
or something. Do you know what we do? Have you actually had people come in and say, no,
not really?
Yeah, no, absolutely. So our process is phone screen, first interview, second interview, and then job offer. And sometimes we do the face-to-face interviews on the same day. But on the phone screen, it's amazing how many people, when you say, okay, so you're applying to a peer-to-peer company, what's peer-to-peer? They're like, well, I don't know. So yeah, it happens a lot. And I was reading something on share options in the States, and they work quite well over there. People don't really understand them here, apparently, according to some articles I read recently.
And actually, I think that's the idea,
is that they're not as connected to the business as perhaps in other places.
Well, that's Interviewing 101: know what the company you're going to interview for does.
Absolutely.
Kids these days, I swear.
Anyway, if anybody has any questions or comments, please feel free to tweet them at us at @pure_DT. You can also email us at pureperformance@dynatrace.com. Check us out on Spreaker. Chris, do you have a Twitter? Do you tweet?
I don't tweet much, but I do have an account. It's @ChrisBurrell7.
Okay.
So yeah, that's me.
Maybe if he gets more followers, he'll have to start tweeting more, and he could tweet pictures of puppies or cats.
People always love those.
All right.
Everybody.
Thank you all.
And we'll talk to you soon.
Thanks a lot.
Thanks.