PurePerformance - 022 Latest trends in Software Feature Development: A/B Tests, Canary Releases, Feedback Loops
Episode Date: November 21, 2016

In Part II with Finn Lorbeer (@finnlorbeer) from ThoughtWorks we discuss some of the new approaches to implementing new software features. How can we build the right thing the right way for our end users? Feature development should start with UX wireframes to get feedback from end users before writing a single line of code. Feature teams then need to define and implement feedback loops to understand how features operate and are used in production. We also discuss the power of A/B testing and canary releases, as it allows teams to "experiment" with new ideas and, thanks to tight feedback loops, quickly learn how end users are accepting them.

Related Links:
Process Automation and Continuous Delivery at OTTO.de
https://dev.otto.de/2015/11/24/process-automation-and-continuous-delivery-at-otto-de/
Are we only Test Manager?
http://www.lor.beer/are-we-only-test-manager/
Sind wir wirklich nur Testmanagerinnen? (Are we really only test managers?)
https://dev.otto.de/2016/06/08/sind-wir-wirklich-nur-testmanagerinnen/
Transcript
It's time for Pure Performance.
Get your stopwatches ready.
It's time for Pure Performance.
Sorry to everyone else on the podcast today if I said those terribly.
But since we have our guest, we're back with Finn Lorbeer, who's out of Germany.
And Andy, you're obviously originally from Austria,
and I guess you'd probably consider yourself still from Austria, although you're living over here for now.
I figured I'd give it a shot.
I'm sorry if that was terrible.
No, it was good.
I think it's more like a real comment, right?
How would I pronounce those more properly?
I think it was really cool.
It was just unexpected.
Okay.
Well, we try to keep things going anyway. We're back with Finn. Andy, in our previous episode we talked with Finn, and we're going to continue talking to him today about the lifecycle of a feature and the feedback loop. But Andy, do you want to give a brief summary of the previous episode?
I'll try, I'll try. So basically, what we talked about previously is kind of the transformation journey of Otto in general over the last couple of years of going –
well, first of all, exploring obviously digital as the new channel towards the end user.
But really more importantly, breaking up the website that used to be a huge monolith into 14 separate business units.
And then within these business units, Finn, you were part of one of them, which was responsible
for the personalization feature on the Otto website. And then you basically said you came in
and you, within the last one and a half years, went from two to three deployments per week to about 100 deployments per week per team,
which was your most high-performing team that did 100 deployments per week.
Obviously, in order to get there, a lot of things had to happen.
You were basically breaking up that, quote-unquote, monolithic piece of the feature
you were working on into microservices.
I think one key piece that
you said is you brought together the development, the QA, and the ops team when decisions were made
on the new architecture. That was very critical that everybody's on board, that everybody could
basically say what they want, what they need, and then a conclusion was made. And I think you also stressed the point that people always think only unicorns or startups can do rapid deployments, but ThoughtWorks in general, and also the companies that you are then enabling, show how companies like Otto can transform as well. Check out the previous podcast, but also go to dev.otto.de. So that's dev.otto.de. And we also have linked on the podcast page, I'm sure, some great stories.
Now, Finn, one of the reasons I assume why people want to move to more rapid deployments is the flexibility in testing out new features and new changes.
Is that a fair assumption?
Is that one of the driving factors?
Yes, I would think so.
To get a fast cycle time. We are talking about microservices these days.
They are not just hot shit because they're something new.
They are also small units that you can easily handle.
And one obvious benefit is to throw out new things as fast as possible to learn as much as you can about what you develop and what software you build in the real life.
And now within Otto or within other organizations that you guys are working with,
I mean, the feature obviously starts with an idea, right?
You have the idea of this new thing that you want to change. Either it's something totally new or maybe just going to be a change in, I don't know,
the color setting of a button.
There's different, obviously, changes vary from small to rather big.
Now, can you walk me through a little bit about the best practices that you have seen
over the years, maybe especially with Otto or other companies, where you walk us through
kind of the lifecycle of a feature from end to end.
And especially, and I think this is the most important piece, the feedback loops.
Because if you look at the literature and if you talk with other companies,
the most important thing to know for a company is, did we actually build the right thing?
Did we build something that has a positive impact for end users or was it a negative impact?
And I think these feedback loops from the end user back to engineering,
but also I believe throughout the development phase and throughout the lifecycle is critical.
So if you could just enlighten me a little bit on what your best practices are for bringing a feature into production, which stages you go through, which best practices you have, and also how you implement the feedback loops, that would be awesome.
Yes. I'll switch gears from the previous podcast and specifically not talk about Otto. At Otto we were part of teams, and there was a big Otto process around getting the requirements and features out, so I would not talk about this, but about any other projects and how we like to work, and how it actually is in reality in the end. Right? Because in the strawberry lands where everything is wonderful, we would even have the first feedback loop and learn about a feature before we start building it. In general, clients come to us and have an idea, otherwise they wouldn't approach us and ask us to help build a specific product. And in most cases it's well thought through and the designs are ready and all the requirements, and sometimes you even have stories written already for the single tasks that need to be done over the course of the next year. And then they tell us, hey, here's the set of stories for the next year, can you do this? And then we would step back and say, okay, we would rather like to revisit those things and talk to you again and start with user testing even before we start building the software. When we've got feedback from users on prototypes, then we have a better chance to actually build the right product in the long run and to go in the right direction when we start working on the software.
And then we have a lot of discussions with the client
what we will actually put in the software.
And as soon as we got this settled as well,
then we start with the actual development.
And if I have a wish for how my team performs, wherever I am and whatever the client, I would say we get features from writing the first line of code to having it in production within five days. So within a week, to really be out there, having some mechanics to be safe, but to be out as quickly as possible to gain fast feedback and first insights on whether we are doing things right.
Well, five days. And is this now... I assume you do things like A/B testing, or do you just roll it out to a particular set of users to get feedback, or do you typically say we roll it out to everybody? Or how does this typically work?
That totally depends on the setup we are in. If you have a big e-commerce website or something with a lot of traffic, and you have the typical conversion thing going on on your page, then I always opt for A/B testing, because this gives you the real data. When we work for clients on internal tools, you will sometimes develop and have first feedback from management users, and sometimes you even enable very eager people in the company that already want to start with a new technology, where you would enable it for single people.
On the other hand, sometimes in companies
where you build a product for a smaller user base,
like, I don't know, 500 or 1,000 people,
it's also manageable to say,
we have a new feature here.
Check it out.
You can get to the feature in this and this way.
We would leave it there or you give us feedback on how we can improve.
This is also a way we can go.
Now, if you do large-scale A/B testing, I assume you use feature flags that you then turn on, based on, I don't know, whatever algorithm you use, whether you do it geographically by users or because you have a percentage of users you want to target. Is this how you do it? So you have feature flags that then get enabled depending on... in the very beginning, when a user opens up the app, you make the decision: is this user going to see the new feature, yes or no, when they log in? Is this something they typically do?
So in general, we, or I, like to distinguish between features and experiments. Because, as I said, I want to have stuff out in five days; ideally that is of course some part of a big epic or feature, and you get the first stories out, and you may need different feature flags to get this feature out in public, or to make things work, or to make data migrations work. And then you can use the different frameworks for A/B testing, and choose one of them, because you don't just need a mechanism to distribute the traffic, 50-50 maybe, or 10-90. Maybe you want to have a framework that does that for you, and you need really good monitoring.
You really need to think about what you want to measure before because otherwise afterwards you may end up with a lot of data that doesn't help you to evaluate if your experiment was successful or not.
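To make the traffic-splitting part concrete, here is a minimal Python sketch of deterministic, hash-based user bucketing; the experiment name, percentages, and function names are made up for illustration, and in practice you would likely lean on an existing A/B testing framework, as Finn suggests.

import hashlib

def bucket(user_id: str, experiment: str, treatment_percent: int = 50) -> str:
    """Deterministically assign a user to 'treatment' or 'control'.

    Hashing user_id together with the experiment name keeps the split
    stable across sessions and independent between experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the hash onto 0..99 and compare against the rollout percentage.
    slot = int(digest, 16) % 100
    return "treatment" if slot < treatment_percent else "control"

# Example: a hypothetical 10/90 split for a "new_checkout_button" experiment.
variant = bucket("user-4711", "new_checkout_button", treatment_percent=10)
show_new_button = (variant == "treatment")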
Yeah, exactly.
I mean, this is something where we – I'm not sure how familiar you are with all the features that we as Dynatrace provide from a monitoring perspective.
But we do end-user monitoring, and we also have capabilities where you can actually then tag.
We call it tagging visits with, you know, are they using version A versus version B?
And then also how they convert.
And we also analyze, we call it the whole user journey.
That means how are people actually navigating through your product.
And with this, I think this is interesting, analyzing user behavior.
Because obviously the idea with a new feature is you want to change the behavior of users, either converting more or using that feature more, or whatever they do. And so we try to come up with new ideas and new ways to actually analyze user behavior and then split the users into, hey, this is a user that was actually exposed to the new feature versus a user that was not exposed to the feature.
And we have some ways that we play around with it.
But you're right.
In the end, you need to figure out what is the success criteria, right?
That's the big thing.
Exactly.
So in fact, while the experiment runs and you collect all your metrics, if you look at the wrong metrics, it may tell you nothing and the entire experiment can be a failure, just because you didn't think through what to measure beforehand. This is a really important point.
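As a rough illustration of the "decide what to measure first" point, here is a small Python sketch that rolls tagged visits up into per-variant conversion rates; the record format is hypothetical, not the format of any particular monitoring product.

from collections import defaultdict

# Hypothetical visit records, e.g. parsed from tagged monitoring data or logs.
visits = [
    {"user": "u1", "variant": "A", "converted": True},
    {"user": "u2", "variant": "B", "converted": False},
    {"user": "u3", "variant": "B", "converted": True},
]

def conversion_by_variant(visit_records):
    totals, conversions = defaultdict(int), defaultdict(int)
    for v in visit_records:
        totals[v["variant"]] += 1
        if v["converted"]:
            conversions[v["variant"]] += 1
    # Conversion rate per variant: the success criterion that has to be
    # defined before the experiment starts, not after.
    return {variant: conversions[variant] / totals[variant] for variant in totals}

print(conversion_by_variant(visits))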
Now, you in your role, and correct me if I'm wrong, but you are, especially what you explained to us in the previous podcast, you're more on the quality side,
meaning you make sure that everything is correctly tested,
that you are making sure everything is pushed to the pipeline.
Who is responsible for providing these feedback loops now
in the teams that you're working with?
Is it the developers that need to put in the metrics
and then they need to talk with the ops team to actually capture the metrics?
Do they do it through logging?
Do they do it through monitoring tools?
Or is it the ops team that is tasked with, hey, we need to figure out how many people are actually using that feature and how the conversion rate looks like?
Can you give us some insights in who is responsible for that?
It's the team responsibility.
We have people that care about the business and analyze the business in the team.
We have ops people in the team.
We have QA people like myself in the team.
We have developers in the team.
So we don't distribute into these different departments when we have a ThoughtWorks team somewhere.
We have all the capabilities inside the team that we may need in any context.
And then it comes more to having the right conversation at the right point in time to plan it. And maybe the person we call the business analyst, who is planning what stories come next and analyzing the business requirements that we have.
He would drive the discussion and say, we need to talk
about this. There's something coming up. And then we would sit together and maybe with three or four
people, and then we get the right capabilities, not the right roles into this conversation.
And this may be me at some point in time, because I did it before. This may be someone else, because it's a very specific problem and someone else is better fitted. And in this way, yeah, we just have the conversations as early as possible to know that we get things right there.
Perfect. Yeah, I mean, that was the perfect answer I was hoping for. It's the team, obviously, and within the team you have the different roles of people that...
That was a trick question, huh?
Yeah, that's what we call a softball.
Awesome.
So, I mean, what I really like about what you said is,
you know, you don't talk about features anymore.
You talk about experiments in the beginning
if you try something new.
And then are these experiments then,
I assume you basically built the,
what's it called,
the minimum viable feature, right?
Product.
MVP, minimum viable product.
Yes, but we can even get one feedback earlier.
So what we want to have, what we try to have in the team as well as a capability is someone taking care of UX.
It can also be the role that is called UX, user experience designer. Those are not people making your web page colorful, but people actually thinking about how to guide people through the process on your website to make the overall experience nice. And they would ideally write mocks with some wireframes and go out and talk to the real users that we aim for in the end.
This sounds easy and it's a really huge challenge to, before you start coding, tell the client that you need to talk to your user to show him something or her.
This is really difficult.
But this is what we ideally do, to already get an idea there about how it could look like: what is the look and feel, what do people understand if the button is up there or somewhere down on the page. You can simulate this with Photoshop if you like, but you already get some feedback even before you start implementing the button and the functionality. So this is where we would start getting the first feedback and doing the first experiments.
I mean, you talk about user journey mapping and all that stuff, right?
I just read an article from Amtrak, which is the train company here in the U.S.
And I found it fascinating what they did.
They actually sat down and tried to figure out what types of customers they have that use the train. Because you have business people that use it for business traveling,
then you have people that want to go on a weekend trip, and then I'm sure you have other people
again. So basically what they said, they tried to figure out what are the, whatever, two, three,
four, five different types of users that will end up on the Amtrak website or on the mobile.
And depending on what type of user they are,
we want to make sure that they, as fast as possible,
navigate through their journey where in the end they buy a ticket with us.
But they basically figured out that for a business person that just wants to go,
let's say, from Boston to New York on the Acela train,
it's a different thing than exploring as a fun traveler the different options because there I want to find probably cheaper deals and all that stuff, right?
So I thought that was also very interesting, and I'm sure they did some UX experimentation at the beginning, trying to sit down with the different types of customers that they have and figuring out what's the best way to navigate them from entering the service until they actually make the purchase.
Yes, and you get a basic idea about what you do.
So I'm currently on a project where we replace paperwork with an app.
And so we have no idea how to improve any behavior with A-B testing
or something like this because there's nothing yet.
People work with pen and paper and we try to help them there.
However, they have been used to pen and paper for 20 years, and those are people who have really been doing this job for I don't know how many years.
So it's a real challenge to provide them something that actually helps them because nothing is worse than saying, okay, we got a nice solution.
You can throw away your old-fashioned paper.
We have a digital solution that just consumes the double amount of time while you're working.
And therefore, it's a really good start to get some idea about what you actually have to do. It's interesting, too, the fact that you're talking to the real users
and showing the wireframes and all,
because if you listen to at least the impression,
I don't know if it's in practice,
but at least the impression that the Facebooks and the Googles of the world give,
it's more of they do it all digitally.
They throw features into a subset and see how they're used and modify from
there. And maybe they'll move a button to the left, to the right, to the top, to the bottom,
and see how the feedback loop is. But in the real world, I guess there's still a need, and there are still a lot of use cases for actually presenting the ideas, as you were saying, to the real users to get that feedback ahead of time. Because, you know, not everybody has the gigantic footprint that some of these other companies have that they can experiment in that way. So the concept of still going out and actually interfacing with human beings and getting real feedback from them, it's... I shouldn't say it's an interesting concept, because that's how things have been done for a long time, and it's, I guess, reassuring to hear that that's still alive. Because in this case that you're talking about just now, if you're someone used to pen and paper, you're going to lose them right away if you're just experimenting with them digitally. So I don't know, I love that idea there.
I think it's also the realization that not every website out there is an e-commerce site or a Facebook.
And a lot of other software out there that we need to make sure we do it right so that
the end user is happy and productive.
Yes, and to be honest, looking forward, the bigger part of software won't be e-commerce somewhere on a web page. It will be in the industry, and it will be driven by the Internet of Things and Industry 4.0 and whatever buzzwords we haven't had in the podcast yet that we could still mention. But this is a huge portion of software that will touch places that haven't seen software before, and we can't just improve there. We have to reinvent and to start over, and therefore we need to get feedback even before we start with implementation. It's not enough to say we've got continuous integration here and we go live after 12 days and then we test for another two weeks. That actually doesn't help.
Now to the features again, so that you guys, so we talked about,
you hopefully start early with some UX design, you do acceptance testing,
then the team itself is building some type of monitoring
that they can later use.
Do you have, because obviously we as Dynatrace,
we are a monitoring company,
do you have any best practices or any trends that you see here?
Are people more using logging as the way of monitoring?
Are they using monitoring tools?
Are they doing something custom built to get the feedback loops?
I would say if I had to guess for a trend, I would say we go more and more into reading from logging because we need more and more logging these days anyways.
We discussed this in the previous podcast somehow.
So we have all the information there in the best case already.
If we did a good job, we have everything in the monitoring that we need.
And then we just need to retrieve the information from it.
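A minimal sketch of the kind of structured event logging Finn describes, assuming a JSON-lines format that a log pipeline can aggregate later; the field names are invented for illustration.

import json
import logging
import sys

logger = logging.getLogger("feature_events")
logger.addHandler(logging.StreamHandler(sys.stdout))
logger.setLevel(logging.INFO)

def log_feature_event(user_id: str, feature: str, variant: str, action: str) -> None:
    # One JSON object per line, so a log pipeline (Elasticsearch, Splunk, ...)
    # can later aggregate usage and conversion per feature and per variant.
    logger.info(json.dumps({
        "event": "feature_usage",
        "user": user_id,
        "feature": feature,
        "variant": variant,
        "action": action,
    }))

log_feature_event("user-4711", "new_checkout_button", "treatment", "clicked")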
In former times, I've seen a lot of frameworks where you have to tap into
and get some specific tracking data.
And tracking, especially if you have to add yet another tracking pixel for your experiments, I've seen this go horribly wrong a lot of times, and it was very painful. So if I could choose on a project, I would go for the standard logging.
Yeah, and it's also, obviously, a proven approach, right?
People have been doing logging for a while,
and maybe some of the monitoring tools, as you said,
make it not that easy.
Maybe developers are not that familiar
with the monitoring capabilities
of some of the modern tools out there.
So I understand.
And we see, I mean, we see the same trend,
and that's also why we pick up log messages
or we build a new log analyzer feature where we can actually analyze logs as well.
We see a lot of customers obviously send their logs to Elasticsearch or Splunk, and then spice it up with some of the data that we have to really figure out how the features are working. Now, one concept of feature monitoring that I hardly ever see discussed is the following. So let's assume you build an awesome feature and you prove that it
works perfectly. Remember, we have a background. Our background is performance monitoring and also
resource monitoring. Now, do people actually care about how, quote, unquote,
costly a feature is from a resource perspective?
And with that, what I mean, if you build a new microservice,
a new feature, and it's becoming very popular,
but if you never have an eye on, you know,
how much data do I send over the wire, how big our pages are, how many CPU cycles we consume, and you run everything in a cloud or in a virtual environment where somebody has to pay for that environment, it could mean that you're building something that is not efficient enough, and you don't make as much money as you could because you only charge so much for the feature to your end users. So, to really come to the point of my question: when you talk about features, do you actually not only monitor acceptance and success of a feature from an end-user perspective, but also from a cost perspective, what it costs to run?
Yes, we definitely look into it. I'm often in the trap of not doing it, to be very honest.
And sometimes, you know, when you're in the middle of the feature, you think, okay, servers are just hard drive and CPU. That's, first of all, cheap these days. But in sum, of course, with everything that you have to do to make it run, it's obvious that it's not just a cheap thing that you get for free. And so, maybe, I know that I have to improve there; I'm the one who's more often reminded by the team that we have to look into this as well. So overall, as a team, we definitely look into this and think about it. For everything that we send over the wire, we have to think about it. We are in general working in microservice environments now, so we send a lot over the wire, and we always have to think about fallbacks if it goes wrong. So we try to make our packages small and efficient between the services, for the performance of the single services. I think the awareness is even higher these days because it's easier to
track which one of your pieces of software,
because it is a small service, goes wrong.
If you measure performance or if you measure database query counts
or database performance, I think in the end,
going to the monitoring of your systems,
you should always know how performant your system is.
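As a sketch of the "always think about fallbacks" point for calls between microservices, something along these lines; the URL, timeout, and function name are placeholders, not taken from any project discussed here.

import json
import urllib.error
import urllib.request

def fetch_recommendations(user_id: str) -> list:
    """Call a downstream service, but degrade gracefully if it is slow or down."""
    url = f"https://recommendations.internal/api/v1/users/{user_id}"  # placeholder URL
    try:
        # A tight timeout keeps one slow dependency from dragging the caller down.
        with urllib.request.urlopen(url, timeout=0.2) as response:
            return json.load(response)
    except (urllib.error.URLError, TimeoutError, json.JSONDecodeError):
        # Fallback: an empty (or cached) result instead of failing the whole page.
        return []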
It's funny that you bring up the database example
because I remember the presentation you gave in Frankfurt.
You had, I think, one example where you deployed a change.
I'm not sure if it was a new feature or just a change.
And because you had monitoring in place,
you could see the number of database queries going through the roof.
Oh, yes, yes. That was one afternoon. We had just deployed five commits or something, and our database query count tripled or something. We had a really hard deadline to meet and we couldn't find out what it was, so we just rolled back the entire thing and spent some time investigating, found nothing, then rolled forward the first commit, the second commit, the third commit, and found it was a small caching fix that was actually part of a big test that someone wrote
just to add something. And yeah, if you change something in caching, it may have a big effect on the database. We immediately caught it with monitoring, and the monitoring in fact helped us to figure out what it was, because just from going through the commit messages and the code lines, we weren't able to quickly resolve this issue before the deadly deadline coming up.
And that's one of our concepts that we've been promoting over the years is actually trying to find exactly these architectural changes, like bypassing a cache layer or doing
something like that earlier in the pipeline. So what we as Dynatrace do, when you execute your
unit test, your functional integration tests, we look at the dynamic code execution. And as you
said, if you go line by line, you have no clue what's going on. But if you dynamically trace
your code execution and then see, hey, this code commit is now
changing the database access behavior.
And instead of creating one query, it creates 10 queries on a sample database set.
That means this is going to be much higher even later on.
So we can already stop the build there.
So that's one of our things, like shifting performance left.
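This is not Dynatrace's actual mechanism, but the same shift-left idea can be sketched by hand: count the queries a code path issues during a test and fail the build when the count exceeds a known-good budget. Everything below (table names, the budget, the wrapper) is hypothetical.

import sqlite3

QUERY_BUDGET = 3  # known-good baseline for this code path

class CountingConnection:
    """Thin wrapper that counts every SQL statement sent through it."""
    def __init__(self, conn):
        self._conn = conn
        self.query_count = 0

    def execute(self, sql, params=()):
        self.query_count += 1
        return self._conn.execute(sql, params)

def load_order_with_items(db, order_id):
    # Hypothetical code under test; an accidental per-item query here
    # would blow the budget (the classic N+1 pattern).
    order = db.execute("SELECT id FROM orders WHERE id = ?", (order_id,)).fetchone()
    items = db.execute("SELECT name FROM items WHERE order_id = ?", (order_id,)).fetchall()
    return order, items

def test_query_budget():
    raw = sqlite3.connect(":memory:")
    raw.execute("CREATE TABLE orders (id INTEGER)")
    raw.execute("CREATE TABLE items (order_id INTEGER, name TEXT)")
    db = CountingConnection(raw)
    load_order_with_items(db, 1)
    # Fail the build if a commit changed the database access behaviour.
    assert db.query_count <= QUERY_BUDGET, (
        f"expected at most {QUERY_BUDGET} queries, got {db.query_count}"
    )

test_query_budget()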
We talked about this concept in one of the previous podcasts. Brian, I think you mentioned
Adam Auerbach from Capital One. He was also talking about shifting performance left,
finding a lot of these performance problems. Or in this case, actually, I would not necessarily
say it's a performance problem, even though it will become a performance problem, but it's more
an architectural issue.
You have introduced an architectural regression through a code or configuration change, and you want to find that early and not having to deal with the situation like you had to
deal with, where you had to find it out in production and then roll back changes and
then one by one do a roll forward.
That's a lot of wasted time, and it's painful.
And therefore, you can find this earlier if you bake monitoring further left into the pipeline. That's kind of the idea that we have.
Cool. Yeah, that is one point we certainly want to look more into. The other thing, and I don't know if you have any experience with this, would be... I mean, you will never find all the things. So what I always dream of for some future project would be monitoring, or a pipeline that reacts to the monitoring, let's put it like this. I don't want to be responsible myself for detecting this change, because I'm always slower than software. So I would like my monitoring to tell me, hey, this is wrong.
This is horribly wrong.
I'd better roll back, and then we can see and figure out what it is.
Yeah, that's actually what we – I mean, I know there's so many monitoring tools out there,
and I don't want to obviously force one of our tools on you, but this is one thing we try to solve.
We basically identify change patterns. We basically identify, hey, you just introduced the N+1 query problem in your top five transactions after this change.
Or you have this particular pattern when these two microservices call with each other,
if people are using that feature. And then we basically give you feedback on that. So we put
some artificial intelligence on top of the data that we see
and basically allow you to then take this
to make decisions.
Was it a good deployment or a bad deployment?
And based on that,
you can also do automatic rollbacks.
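A minimal sketch of what such a monitoring-reactive pipeline step could look like; get_error_rate and rollback are placeholders for whatever monitoring API and deployment tooling a team actually uses, and the thresholds are arbitrary.

import time

ERROR_RATE_THRESHOLD = 0.05   # treat more than 5% failed requests as "horribly wrong"
CHECK_INTERVAL_SECONDS = 30
CHECKS = 10                   # watch the canary for roughly five minutes

def get_error_rate(version: str) -> float:
    """Placeholder: query your monitoring backend for the canary's error rate."""
    raise NotImplementedError

def rollback(version: str) -> None:
    """Placeholder: tell the deployment tooling to route traffic back."""
    raise NotImplementedError

def watch_canary(version: str) -> bool:
    """Return True if the deployment looks healthy, roll back automatically otherwise."""
    for _ in range(CHECKS):
        if get_error_rate(version) > ERROR_RATE_THRESHOLD:
            rollback(version)
            return False
        time.sleep(CHECK_INTERVAL_SECONDS)
    return True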
We are, I mentioned Capital One earlier.
I just had a call yesterday with their team
that is doing the Hygieia project.
It's an open source project
where they basically visualize the flow of code through the pipeline from dev all the way into production.
And they have integration points into monitoring tools.
So we are pulling in now Dynatrace data in these dashboards.
It's a phenomenal project that these guys are doing.
And, yeah, basically you're totally right.
This is what you want to do, right?
You want to leverage the monitoring data to tell you, is the stuff that you pushed through a good change or a bad change?
And the best case would be if the monitoring tool not only tells you in production but early on because obviously the earlier you can find the problem, the better it is.
Exactly.
The monitoring is a powerful tool to help you assess.
I mean, it's your headlights, basically, if you are in your software car driving around the internet. You will just see where you are and what is going on. It's not some log file that you put in some archive for some future debugging.
Not anymore, by far.
Exactly. And maybe we need to redefine the term monitoring, because I believe
when we talk about monitoring it's typically associated with production monitoring.
But what we talk about here is more kind of being a guardian along the way, from the inception through CI, CD, and performance testing,
it's a guardian that tells you,
is the code changer you're pushing through
really a smart thing?
Because whatever you just changed
is going to cause 50% more bytes
being sent over the wire.
And because you deploy everything on Amazon,
that means there's a lot of stuff
they need to pay later on.
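A hand-rolled sketch of that kind of guardian check, comparing the measured payload size of a key endpoint against the previous build's baseline; the numbers and the failure policy are made up for illustration.

ALLOWED_GROWTH = 0.5  # fail the pipeline if a change adds more than 50% to the payload

def check_payload_size(baseline_bytes: int, current_bytes: int) -> None:
    """Compare the measured response size against the baseline recorded for the
    previous build, and fail the pipeline step on a big jump."""
    growth = (current_bytes - baseline_bytes) / baseline_bytes
    if growth > ALLOWED_GROWTH:
        # Raising SystemExit makes the pipeline step exit non-zero, i.e. the build fails.
        raise SystemExit(
            f"payload grew {growth:.0%} ({baseline_bytes} -> {current_bytes} bytes); "
            "that is extra bandwidth someone pays for on every request"
        )

# Example with made-up numbers: 120 kB baseline, 200 kB after the change, so the build fails.
check_payload_size(120_000, 200_000)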
So things like this, this is stuff that we are doing. Well, I think, again, this feature-driven development, this experimentation, as you said, I believe it's just the way forward. And as you also correctly said, we're not only talking about e-commerce websites anymore, but software is going
to touch a lot of new fields, especially in the industry,
where it's not going to be as easy
as Facebook is doing it now
in exploring new features
with the millions and millions of users.
We need to look at user experience
and acceptance testing
and talking with real users.
Hey, Finn, thank you so much for your time again
for the second recording.
I believe what you are doing
with sharing these stories,
I know you've shared it
at the German testing days.
I also know by the time
this recording is out,
you will also have been talking
at Quest, which is a conference
in Slovenia.
You'll also be talking about some of these things there.
Any final words from your side on feature delivery,
feature development, feedback loops
that you want to give the audience on their way?
I think, to summarize it in one sentence, we should always remember that we want to build a high-quality product instead of just high-quality software.
So software is just the means to get there.
And just testing software and having some metrics about your software is not enough.
We need way more insights into how to assess the quality of our product and, yeah, to view and monitor everything that is going on
through the process and through the development.
That's a good way of putting it.
Interesting, yeah.
Building a high-quality product and not necessarily high-quality software.
You're right, because you can build high-quality software and nobody needs it, because it's the wrong thing we built.
Exactly.
It's one thing you need in the end, but it's not the only thing.
And only this doesn't help you either.
Yeah.
Right.
Brian, any final words from you?
Yeah, just borrowing on what you were saying at the end,
talking about shifting left.
Before we started recording,
Finn and I were talking about how, you know,
we both come from that quality area background,
as you do, Andy, and you learn things on the dev side.
You learn things about operational.
But in shifting left, that's the perfect opportunity
to learn a lot more, to be aware of metrics
that are important not only on the dev level,
or not only in your performance or load or QA testing, but also learning what's important to
be looking at in your phase of your work. What's important in the dev side, what's important in
the operations side, so that you can be monitoring them and making sure that they all match up.
You know, what you see in dev monitoring translates to what you see in performance
and load testing.
And those also correlate to the same expected results once you're out in production.
And that right there will kind of automatically level you up from just, you know, running tests and seeing whether they pass or fail.
Understanding the impact of those tests, you know, the concept of increasing database queries.
And, all right, we see in the CI test that developers reported an increase of three database queries.
How does that correlate once we put it under load?
So that's all, you know, an important part of this whole cycle,
just one small part of it.
But I think for the people in that more the quality realm,
that's where we are.
All right, and with that, yes,
and the last thing I'd want to say is thank you, Finn, for sharing.
As Andy said too, without sharing all this information,
we all can't get better and improve the way we do things.
And if we have quality products written on top of or created by quality software, everyone will be happy and the world will be a bright, new, shiny place that everyone loves.
But seriously, on a serious note, it is always important to share these things and really appreciate you coming on the show today.
And people can find you on Twitter at finnlorbeer, that's F-I-N-N-L-O-R-B-E-E-R.
If anybody has any feedback requests or special requests, if you want us to play a song. We don't do that, though. But if you want to be a guest, if you have any topics you'd like to
talk about, please send us an email
to pureperformance
at dynatrace.com
or you can send a tweet to
hashtag pureperformance at
dynatrace or any one of us individually.
We'd be glad to hear from you.
With that, I've got nothing else to say.
So I'll say goodbye. Andy and Finn.
Brian and Andy, thanks so much for having me.
I enjoyed it, and I'm sure that we'll meet somewhere, maybe in real life, again.
And I'm really looking forward to that already.
Yeah, pretty sure too.
All right.
Be safe out there and build good software and well, good products.
Sorry.
We're trying our best.
Thank you. Bye-bye. Bye.