PurePerformance - 077 How SambaSafety Successfully Migrated to the Cloud
Episode Date: January 7, 2019
Migrating from your data center to the cloud is no easy task. In this episode, Patrick Kemble (@PatrickKemble), CTO at SambaSafety, shares their journey from lift & shift to AWS to re-architecting their applications using Cloud Foundry. Along the way, of course, they discovered many important aspects of monitoring.
https://twitter.com/patrickkemble?lang=en
https://www.sambasafety.com/
Transcript
It's time for Pure Performance.
Get your stopwatches ready.
It's time for Pure Performance with Andy Grabner and Brian Wilson.
Hello, everybody, and welcome to Pure Performance. My name is Brian Wilson.
And as always, oh wait, Andy's not here with us again today.
This time he has not been accosted by their commissar.
But Andy is stuck. His planes got messed up. So he got some delays, and he is in the air right now.
And although there are some cool connections going on on airplanes, I don't think it's going to be good enough to do audio.
So I will be flying without my co-host Andy today.
Hats off to you, Andy.
And I would like to jump into introducing our guest then.
Our guest today is Pat Kemble.
Pat, how are you doing? Happy
New Year to you, and welcome to the show. Why don't you tell us a little bit about yourself?
Thanks, Brian. Glad to be here. I'm Pat Kemble. I'm the CTO of SambaSafety. I've been in
the industry, in the technology and software industry, for a little over 20 years, and
happy to join and tell you about some of our stories.
Right. So I met you at the Cloud Foundry meetup here in Denver, and you told
a really cool story about, basically, the transformation that you all had to
go through. So why don't we jump into that? You had quite a big legacy codebase, and it kind of
relates to a lot of what SambaSafety is doing with, I almost wanted to call it predictive insurance, but it's not quite that, I don't think.
But that's a great little term you guys are going to have to figure out how to fit in.
What was your old setup, and what was the challenge that you all faced that you had to tackle?
Absolutely.
So, at a high level,
we're in risk management and safety for transportation industries.
And when I started with the company, you know, the company's been around for over 20 years.
Very successful, has grown significantly, you know, used to be regional footprints and now has grown nationally.
The one thing that that was happening when I started
was a move to AWS.
And we were getting out of our own data centers,
putting all of our equipment and services
into the AWS infrastructure.
And with that, began to now understand
the kind of spread that happens
as you venture into the cloud and servers popping up everywhere
and gave us the ability to be more flexible and more elastic, but at the same time,
became more challenging to understand what was happening throughout the infrastructure.
Things were no longer, you know, in this one rack, in this one slot. Now, because it's ephemeral, it's running
four different ways over here, or behind the load balancer on that side.
Now let me interrupt you there for a moment. So this doesn't sound like it was a straight
lift and shift. It kind of sounds like you tried to make some modifications. You're saying it might
be running four different ways, so were there some changes to what you'd done when you put it into AWS, or was this a lift and shift?
It was more of a lift and shift.
There was some attempt to leverage some of the capabilities AWS was then providing, but for the most part, it was a lift and shift.
EC2 instances really weren't venturing into any of the service or function-based stuff at that point in time.
Okay. Sorry. Yeah. Go on with the story then.
No problem. But with that, because we could then expand and it wasn't necessary to
procure new hardware, it was easy to spin up another instance.
We had this kind of expansion of hosts and services running, but quickly realized that we had little to no
instrumentation in order to understand what was running, how it was running, was it even running?
We had a couple of instances where things should have been running, and we found a latent report
that didn't get out in time. We had to go rerun that, but found out that the service that
was supposed to be providing it had been down for, you know, several days. So that was the point where
we realized we just had to have better visibility and better tooling in order to understand
when things were going sideways, and be able to act more proactively, in order to be that grade-A
service for our customers.
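As a rough illustration of the kind of "is it even running" check Pat is describing, here is a minimal availability poller. This is a sketch only; the service names, URLs, and timeout are hypothetical placeholders, not anything from SambaSafety's setup.

```python
# Minimal availability check: poll a list of service health endpoints and
# report anything that is down, instead of finding out days later.
# The URLs and timeout are hypothetical placeholders.
import sys
import urllib.request
import urllib.error

SERVICES = {
    "report-generator": "http://reports.internal.example.com/health",
    "record-importer": "http://importer.internal.example.com/health",
}

def check(url: str) -> bool:
    """Return True if the endpoint answers with a 2xx status."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    down = [name for name, url in SERVICES.items() if not check(url)]
    for name in down:
        print(f"ALERT: {name} is not responding")
    sys.exit(1 if down else 0)
```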
So what kind of monitoring, I mean, you talk about just some real coarse-level
stuff, right?
Like services not up and running, maybe, things not going.
What did you see at that point of what your needs were versus what you were
missing? Obviously, I think AWS comes with some kind of, you know, some CloudWatch type of metrics,
but what was it that you really saw at that time that was missing?
And that specifically was the blind spot we had, right? We had the out-of-the-box AWS
kind of cloud watch,
hey, this server's not running.
And we understood those pieces,
but there was no detail into those hosts to say,
oh, you know, these three services
should be running on that machine,
or this application container
should be running on that machine.
We had no detail or visibility into that.
And that was the challenge: we knew
enough that, oh, the hosts are running, everything's green, but there's nothing in terms of
understanding, oh, the traffic stopped between this and this, or there should be a JVM process
running there and it's not.
And was it easy to keep track of what was running on which host?
Was part of it not knowing if the service was up or down, but also not knowing what service was actually on the host?
Or were you able to at least mitigate knowing what was running where or what was supposed to be running where?
That's a great call out.
That was a huge gap in terms of we had several wiki pages.
We put this service on this host and this one on that
host. But as soon as things started to move around or as we got into some of the expansion activities
of adding more hardware, more compute to the ecosystem, that started to get out of date. So
we had no real visibility, and we were lacking the detail to feed back into that documentation. So
even from that standpoint, we were losing ground on what we understood out of the gate versus what
was the reality in the production environment.
Right. And just out of curiosity, before you moved to AWS, when you were all, I guess, hosting it on your own, how did you handle all that at that point?
At that point, it was more like domain knowledge was the mechanism.
We understood the servers we had.
The increase in compute or additional hardware was known and more controlled,
so they could keep an inventory of what
was running on what machines. The other thing, too, was, you know, 20 years of operation,
and as they moved those pieces in there, the lift and shift also caused a problem, because there were
things that people didn't even know about or understand, but they got moved into it. So we didn't
even have visibility into certain pieces because of those, you know, those deep, dark skeletons in the closet.
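For readers following along, a rudimentary inventory script is one common stopgap for the stale-wiki problem described here. A minimal sketch, assuming boto3 credentials are already configured; the region and the Name/Service tag keys are assumptions.

```python
# Rudimentary inventory: list running EC2 instances and their Name/Service tags,
# as a stopgap for wiki pages that go stale.
import boto3

def list_instances(region: str = "us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
                yield {
                    "id": inst["InstanceId"],
                    "type": inst["InstanceType"],
                    "name": tags.get("Name", "<untagged>"),
                    "service": tags.get("Service", "<unknown>"),
                }

if __name__ == "__main__":
    for row in list_instances():
        print(f'{row["id"]}  {row["type"]:<12} {row["name"]:<30} {row["service"]}')
```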
Right. We don't know what this thingy does here, but let's get that thingy up there and connect it back together and hope.
Okay.
Well, it's really interesting.
It's kind of a problem I never thought about. But obviously, yeah, you make a great point: when you are running it on your own, you built it, you have intimate knowledge about that
environment, every bit of it, right?
It's your own hardware.
Someone within your company probably put the screws in the rack, right?
So all that's known when you move that to the cloud, you're suddenly abstracted a layer
away.
And that's only the first step, obviously, right?
Because you're just doing
that lift and shift, so it's mostly just an infrastructure abstraction. But that's enough
to make it difficult. I can't imagine what it's like when people then go to,
let's break it down into microservices, and try to keep tabs of what's going on.
Yes. And the only additional bit of information we had around the environment was logging.
And that logging was sporadic throughout the environment as well.
Depending upon the service, it logged differently.
There was no aggregation strategy in place.
Really nothing there to be able to say easily, let me just pipe through or alert on a
stream of logs, or something to that effect.
Right, right. So you guys had to start thinking of a full-on monitoring strategy. So how did you go about thinking of, you know,
what kind of monitoring strategy you wanted to go into? And what even inspired you to say, all right, this is what we have now.
Did you know what you could have or was it we don't even know where we need to be?
We need to start exploring.
What level of maturity would you say you had there in terms of knowing where you wanted to go?
So the team had a lightweight understanding of kind of the APM market or what we could use from a monitoring standpoint.
We already had CloudWatch instrumented from the host standpoint.
So that was the first kind of foray into, let's make sure we understand a deeper level of performance and what's available.
So we started down that road to try to instrument from a CloudWatch standpoint while I started some research in the APM market and be able to understand what the rollout really looks like from some
of the products in the marketplace.
You know, the typical ones: New Relic, AppDynamics, Dynatrace, things like that.
In addition to also considering kind of the application aggregation or the logging aggregation
and what we wanted to look at from there.
So, you know, vendors like a Loggly or PaperTrail or things like that, in addition to or as a supplement for the CloudWatch mechanisms and capabilities we currently had.
So there was quite a bit of evaluation and lots of moving parts going on at the time.
It was a little challenging to kind of keep the team focused on the immediate need.
The immediate need being what is running, right?
What might be falling over?
What might not be running at all? And what are we unaware of?
So that was kind of this urgency thing.
So we did an abbreviated kind of bake-off between all these capabilities.
And the primary factor for the decision was how quickly we could get something rolled out, in order to begin to understand and get a good picture of the ecosystem.
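Pat mentions they already had CloudWatch instrumented at the host level. A minimal sketch of that kind of host up/down alarm might look like the following; the instance ID, SNS topic ARN, region, and thresholds are hypothetical, and this is exactly the host-only view he describes, with no visibility into the processes on the box.

```python
# A host-level "is the instance even up" alarm: CloudWatch can tell you the
# EC2 status check failed, but nothing about the JVMs or services on the box.
# Instance ID and SNS topic ARN below are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="host-status-check-i-0123456789abcdef0",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=300,
    EvaluationPeriods=2,
    Threshold=1.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```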
Yeah, it's a fun challenge because as you start looking into capabilities, you then start getting, you know, those wide eyes of all the potential things you can do.
But obviously you still have those pressing needs and it's always a challenge to just get something in and running and make it so that your business stays up and your users or your reports or whatever needs to get done can actually get done.
Absolutely. And into technologies that were also, in some cases, you know, significantly aged and not
as easy to discover as some of the more modern technologies and techniques.
So another challenge we were saddled with was the gamut of technologies, you know, Java, C++, C#, .NET, and some other capabilities.
But then also there was an age factor to some of those as well.
Yeah, I remember you saying during the presentation that you had quite a challenge in terms of the code that had to get migrated. Some
of it was a little bit easier, and some was, you know, not quite mainframe, but mainframe-esque, where it's been running for a long time, and if we touch it, what's going to happen to it? That kind of
situation. Now, obviously you had to have some level of buy-in
from the people senior to you.
But how much support did you have to go through this exercise?
Because obviously, it's very difficult.
One of the lessons we've heard over and over again from people who've successfully made transformation,
not just to getting into the cloud, but then eventually getting into containers or Kubernetes or whatever else they go to,
the telltale sign of that success is getting buy-in from the top down.
What level of support did you have?
Did you have any challenges there?
Did you have any challenges with the team members, not just moving to the AWS part,
but in implementing and trusting the monitoring?
Yeah, so there was quite a bit of change.
You can imagine, you know, any of the APM tools, you bring something like that in, it's
completely different.
We had a lot of staff that had been doing the traditional, I go check the log every
day on this one particular service or host. So being able to bring a tool in and say, you don't have to do that,
and kind of redefine those processes and how we
respond to those things. There was buy-in needed both on the
staff side, in terms of messaging, and then also, you look
at those tools, and obviously there's
a cost that comes with those,
in addition to what we're already paying for AWS.
So financially getting a green light from the CFO
and the CEO around, okay, this is why it's important.
This is why we're going to have to spend some money
in order to understand these things
and stop impacting customers,
be more proactive in how we handle our problems so
that it's a better experience for our customers.
So it was a little bit of a roadshow piece there.
And again, all condensed into how quickly can we get this thing up so we can stop some
of these near-term fires.
If we talk about it from the operations and the development standpoint and their
curiosities around the different tools, keeping them focused on what we're trying to solve and
trying to avoid the shiny object with different tool sets and different capabilities in those
tool sets, the piece there was really to ensure they were focused on,
we need to understand, you know, low level services that are running and be able to have
a tool that we could hone in on and drill down into code level performance and bottlenecks.
We were seeing, you know, deadlocking or high consumption of memory and things like that, and we needed to understand that
specifically in order to feed that back into the SDLC and be able to fix these problems
sooner rather than later. That was another piece that just wasn't present: a
significant amount of understanding from a profiling aspect of a running service.
There was no data within the company for stuff like that as well.
So you all started some of that shift-left practice, where you started capturing that data and looking at it earlier
in the cycle to prevent it from getting out to production? It sounds like that's what you're saying.
Yeah, absolutely, absolutely. And so, as you can imagine, that's a different mindset for a company that was very traditional in its software changes. And that's probably the biggest challenge. There are
tons of tools to solve these types of problems. The changes from the human standpoint and the
processes around that always take a little bit longer than even some of the technologies do to
roll out.
Right, right. Now, obviously, you know, we were talking, and one of the tools you're using is Dynatrace.
But are you using that alone, or do you have, well, there are suites of tools out there.
Are you using different tools for different levels, different aspects?
What is the tool set that you are using for the monitoring, and maybe even your pipeline,
if you're using Jenkins or anything like that?
Pipeline tools versus different monitors. I'm curious what your
full suite is there.
Yep. So Dynatrace was kind of the low-hanging fruit.
We put that in there to just be able to get understanding of interdependencies
and execution in the production environment.
And then we've been evolving that since then.
And we've also been bringing in
Cloud Foundry to run our production environment. So we've been able to leverage some of the tools
in PCF metrics and some of the aggregation, logging aggregation capabilities there that
you get out of the box, in order to help synthesize that down and be able to, from a single pane of glass, go and understand what might be happening, you know, given an event or things like that.
As we try to push that even further left, we just started leveraging tools like Sentry.io to instrument the applications.
And as that gets promoted from environments, we're now able to find exceptions or problems in the UI
in these environments and through our automated tests
so that I can identify in which commit this problem actually started to show up, and who was working on that commit.
I can have those things automatically assigned to that developer in order to then resolve and repush back through before that even makes it out to the wild and the production environment.
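As a sketch of the commit-level traceability Pat describes with Sentry.io: tagging events with a release (typically the git SHA injected by the build) is what lets an exception be tied back to a commit. The DSN, environment variable names, and sample rate below are assumptions, not details from their setup.

```python
# Tag errors with a release so an exception can be traced back to the commit
# that introduced it. DSN and environment names are placeholders; the release
# is typically the git SHA set by the CI job.
import os
import sentry_sdk

sentry_sdk.init(
    dsn=os.environ["SENTRY_DSN"],
    release=os.environ.get("GIT_COMMIT", "unknown"),   # e.g. injected by the build
    environment=os.environ.get("DEPLOY_ENV", "staging"),
    traces_sample_rate=0.1,
)
```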
Right, right.
So tools like that. And right now, we've been kicking the tires on several CI/CD solutions.
Probably, you know, down in that CircleCI, Travis, Concourse realm is where we're at right now.
We've got a few different teams piloting each of those. How do we bake in some of those quality checks and some of these performance and monitoring checks
into that pipeline then? And that was one thing I appreciated in the conversation. You had a talk
after mine. And how do we start to bake in those pieces and get that feedback into the pipeline and continue to bring that information closer
to the source of the creation itself in the engineers.
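A bare-bones version of the kind of quality check being discussed for the pipeline might look like this: the CI job compares a measured metric against a known-good baseline and fails the build on a regression. The file names, metric, and 20% threshold are illustrative assumptions; a real gate would typically pull both numbers from the monitoring tool's API.

```python
# Toy quality gate: compare the current p95 response time against a baseline
# and break the build if it regressed past a threshold.
import json
import sys

THRESHOLD = 1.20  # fail if more than 20% slower than the baseline

def load_p95(path: str) -> float:
    with open(path) as f:
        return float(json.load(f)["response_time_p95_ms"])

if __name__ == "__main__":
    baseline = load_p95("baseline_metrics.json")
    current = load_p95("current_metrics.json")
    if current > baseline * THRESHOLD:
        print(f"Quality gate FAILED: p95 {current:.0f}ms vs baseline {baseline:.0f}ms")
        sys.exit(1)
    print(f"Quality gate passed: p95 {current:.0f}ms (baseline {baseline:.0f}ms)")
```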
Right.
And I just want to touch on that briefly for anybody listening.
If you haven't already seen or heard Andy's Unbreakable Pipeline, he's done a few of them.
He's done one with AWS Code Pipeline.
He did one with Jenkins.
One of my colleagues took and ran one with VSTS.
And then my other colleague, Mike,
he did one for Concourse, as you mentioned.
So the general idea of where we're going with there,
and we have webinars up online, we have GitHub repos
if you just search for Dynatrace,
one of those things that should pop up.
But what I wanted to address there,
and it sounds like where you might eventually kind of go,
and we were talking about that at the meeting,
was the idea of using, you know, all these tools have these APIs
and, you know, great tools have great APIs. It's just kind of a fact of
the tools these days, right? And you can take the data. So the idea with the pipeline is,
well, you have your monitoring tool, like a Dynatrace or something in your pipeline,
collecting all this data as you're pushing your code through. So that, and you're also
pushing the metadata about that build. So like you were talking about who pushed it, what team
was responsible, that can get pushed to the monitoring tool as well. So that if something
does go wrong, not only does the monitoring tool have the metadata, but it also has the performance
data, which you can then even automate the analysis of, so you no longer have to manually
do that comparison in order to promote it to the next phase of the pipeline.
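A sketch of pushing that build metadata to the monitoring tool, as described above. The endpoint and payload follow the general shape of a deployment-event API (modeled loosely on Dynatrace's v1 events API), but the URL, field names, and token handling here are assumptions to verify against the documentation of whichever tool you use.

```python
# Push build metadata (version, commit, who triggered it) to the monitoring
# tool alongside the deployment, so performance data and "who changed what"
# live in the same place. Endpoint shape and payload fields are assumptions.
import os
import requests

def report_deployment(version: str, commit: str, triggered_by: str) -> None:
    url = f'{os.environ["MONITORING_URL"]}/api/v1/events'
    payload = {
        "eventType": "CUSTOM_DEPLOYMENT",
        "deploymentName": "customer-portal",
        "deploymentVersion": version,
        "source": "jenkins",
        "customProperties": {"commit": commit, "triggeredBy": triggered_by},
        "attachRules": {"tagRule": [{
            "meTypes": ["SERVICE"],
            "tags": [{"context": "CONTEXTLESS", "key": "app", "value": "customer-portal"}],
        }]},
    }
    resp = requests.post(
        url,
        json=payload,
        headers={"Authorization": f'Api-Token {os.environ["MONITORING_TOKEN"]}'},
    )
    resp.raise_for_status()
```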
And then obviously, some of our listeners might have seen, and Pat, you probably haven't
seen this, but we have this UFO, or as they say in Austria, they call it UFO. But it's a 3D printable UFO with lights on your pipeline.
And they put it by the coffee machine or by the beer fridge, wherever, you know, so that when it turns red, the developers will see, uh-oh, it turned red.
Is that me?
So it stops.
That's when it breaks the pipeline.
But there's all these things that you can bake in, and it sounds like that's where you guys are,
not necessarily saying that exact model,
but you're well on your way to building a smart pipeline, basically.
Absolutely.
That sounds really fun.
Now, what did you end up doing?
So obviously, if you're moving to Cloud Foundry then,
you're probably doing some re-architecting in there, I would imagine, right?
You're not just going to lift and shift to Cloud Foundry.
Significant amount of re-architecting the solutions and, you know, taking that 20-year-old
ecosystem and trying to break that down into kind of the subcomponents and subsystems as we re-architect that in a typical strangler pattern,
as we kind of take chunks out of that system and sliver those pieces out.
And when we're re-implementing these pieces,
taking that opportunity to also make sure that they're properly instrumented
and we understand how things are moving. And to that smart pipeline,
you know, we've used some of the tools. The tools we have right now have been very helpful in
bringing to attention where we may have used a bad pattern or something like that.
And the team is being very proactive in catching that.
But instead of just, you know, throwing it over the wall so it continues and we'll fix it later, we're coming up with documentation, better standards, better practices. That's raised the quality level just by giving that feedback sooner in the cycle, and giving the engineers the ability to slow down, back up, rethink it at a broader level, then re-implement those things and share that with the rest of the organization.
Right, right.
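As a toy illustration of the strangler pattern mentioned here: a thin routing layer sends the slices that have already been rewritten to the new service, and everything else still goes to the legacy monolith. The path prefixes and upstream URLs below are made up.

```python
# Strangler-pattern routing sketch: rewritten slices go to the new service,
# everything else falls through to the legacy monolith.
NEW_SERVICE_PREFIXES = ("/drivers", "/reports")

LEGACY_UPSTREAM = "http://legacy-monolith.internal:8080"
NEW_UPSTREAM = "http://driver-service.apps.internal:8080"

def route(path: str) -> str:
    """Return the upstream that should handle this request path."""
    if path.startswith(NEW_SERVICE_PREFIXES):
        return NEW_UPSTREAM
    return LEGACY_UPSTREAM

# Quick self-checks of the routing rule.
assert route("/drivers/123") == NEW_UPSTREAM
assert route("/billing/invoices") == LEGACY_UPSTREAM
```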
So it sounds like you have this monolithic type code that you were trying to break into some multiple services.
Is that correct?
That's correct.
Yeah, we've got two significant monoliths that have been kind of smashed together to kind of integrate.
And we've decided to take the user presentation layer of those two monoliths,
and we spent the summer rewriting that and putting that back out.
And that's what's been moved to Cloud Foundry. And we're now in the middle of migrating customers into that environment.
The next big tackle for 2019 starts to look at the second half of that monolith stack,
which is actually this integration layer. We connect to all the DMVs across the country.
And you can only imagine the complexity of some of those things.
There's actual green screen integrations that we have to do in some states.
So it gets a little complicated, and we're trying to figure out how we start to slice some of those pieces up. How do we modernize that, leveraging some of the tools that we have available from a Cloud Foundry and an AWS standpoint?
And again, obviously those things are very asynchronous in nature. How do we make sure
that when something goes bump in the night, it's recoverable and it's intelligent enough
to kind of self-heal?
Now I'm curious, and I apologize for making this
a little Dynatrace-specific, but it's not particularly, or it shouldn't be at least: when you were looking to break those pieces out,
and when you were looking ahead at how you were going to break them out,
were you doing any modeling of that within Dynatrace
by taking the code, running through it,
and virtually creating custom services to see how they were running?
Or were your developers actually breaking apart the code
and then putting it through monitoring to see how it performs? I'm just curious if you've leveraged that capability.
So we haven't done that specifically, the way you described it.
What Dynatrace did allow us to do was at least understand the deep, dark corners of some of these hosts and what was running.
So from the inventory standpoint, I mean, that was, bar none,
the most valuable piece we got out of it: understanding the interconnectedness,
the dependencies from a traffic standpoint, and utilization, as well as, you know,
getting that detail back on use cases and use rates, et cetera.
So that as we design something new, that was going to be something we had in mind, knowing
what the use was previously.
Okay.
Were there any moments when you were looking at that and you said, what the heck is going
on with that piece there?
Several, several. You know, things where you find some service running
and it's named something that's not really logical.
Somebody, you know, named it a cute name.
Yeah, like Bilbo.
Right.
Yeah.
And all of a sudden that shows up and like, what's that?
And then you have somebody pipe up like, oh, I remember that, and it does this, this, and this. And so you kind of uncover these deep, dark things. There were a few pieces that we found were just not even relevant anymore. You know, they were just there capturing some data, but that data wasn't even used. The product had been retired; it's just that these
few hosts or few services didn't get retired as well.
Yeah. My favorite was always discovering the
Quake Arena folder on a production server.
Right.
Busted.
Yeah.
So you all moved over.
What was it like in terms of the difficulty or the challenges?
I wonder if you can compare, contrast maybe the challenges of moving the lift and shift
to AWS in the first place.
And then taking some of those components and, you know, putting them into Cloud Foundry and getting to that next step.
Was it easier to do that second step after having gone through the first step?
How would you describe that whole bit there?
I would say the second step was probably easier.
And that was easier because we kind of were able to take a more greenfield
approach to it. And as we did that, we knew upfront, you know,
we wanted it to be automated.
We wanted the testing to be automated, and
we knew it was going to be highly instrumented, in terms of both our build
processes,
as well as the user experience to understand what features and functions were being utilized.
So I would say that gave us a great opportunity to inject the tools that make it helpful, from a data standpoint, to make decisions or to correct problems before they happen. The bigger challenge we had on the refactor and re-architecture, from that standpoint,
was making sure that as a company, we knew exactly what that customer really wanted.
This is an opportunity to kind of rethink and re-engage with the customer and either validate
or dismiss decisions that had been made, things that were assumptions over the past 20-some years.
So a good opportunity to kind of clean shop, but then also re-engage with the customers themselves
and help them direct where we were going to take the platform, you know, in the next 20 years.
Okay.
And I wanted to find out one last lesson from what you might have learned. And I'm not fishing for anything specific to us, but in the changeover and everything that you went through in setting up a new approach to monitoring, what do you feel you're currently doing that you might not have been doing in the past? Like, if you were going to say to somebody, if you're not doing this in terms of monitoring, you should have your
head examined or something, what are some of the biggest lessons you've learned from
that side of things?
That's a good question. There's a ton of stuff we've gained. You know, just to understand whether we were even right-sized: when we did the lift and shift, we just said, oh, this is this type of machine.
So we'll make it this type of AWS instance.
Financially, that was a huge insight into the utilization and right-sizing some of that equipment.
We were able to actually save ourselves some money as a result of that,
just by being able to understand and see those performance metrics.
Without something like that, without a tool, an APM of some sort,
you're not going to get that insight into the real
runtimes of those machines and what those services are consuming.
So I feel like that's probably the one thing, like if you don't get anything out of it,
at least understand what you need versus what you have from that standpoint.
You know, obviously the architecture for me personally, by the time we had put in Dynatrace,
it was about six months after I started and I was still trying to get my head around all the systems running.
So for me, from an architecture standpoint, it was gold because I now had visibility and
understanding from system to system.
And when somebody would talk about something, I understood how that fit into the puzzle of the as-is, which helped inform the to-be we were talking through.
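Pat's right-sizing point, sketched out: pull a couple of weeks of CPU utilization for an instance and flag it if the peak never comes close to what was provisioned. The instance ID, lookback window, and 40% threshold are arbitrary illustrations.

```python
# Right-sizing sketch: look at two weeks of peak CPU for an instance and flag
# it as over-provisioned if the peak stays well below the threshold.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=start,
    EndTime=end,
    Period=3600,
    Statistics=["Maximum"],
)

peak = max((p["Maximum"] for p in stats["Datapoints"]), default=0.0)
if peak < 40.0:
    print(f"Peak CPU over two weeks was {peak:.1f}%, a candidate for a smaller instance type")
else:
    print(f"Peak CPU over two weeks was {peak:.1f}%")
```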
Right, right. That's a really great point.
So the funny thing is, normally Andy's nickname is the Summaryator, because not only is he Austrian, which is the home of Arnold Schwarzenegger, but he's phenomenal at providing a summary of everything that was just discussed.
And I always sit back in awe.
And Andy is not here today.
So I'm going to have to play the role of the Summaryator and see if I can pull this off.
So bear with me, everybody.
Come on.
Do it.
Do it now.
So first of all, I want to thank you, Pat, for being on.
I loved hearing your story at the Cloud Foundry meetup.
And thanks for bringing part of that over here and
discussing that with, I was going to say us, but it's really just me now, with our listeners,
I should say. You all went through what I think a lot of people probably are in the process of, or
hopefully on the other side of, but I'm sure there are a lot of people out there that still have
systems that have not been touched for these reasons. And I think part of what
we can learn from a lot of these stories that we hear is number one, don't tackle a full
transformation at once. You know, what you did was really helpful. You did a lift and shift,
which gave you a lot of insight into what was running because as soon as you shifted that into
the cloud, you realized you had no idea where anything was running anymore, which forced you all into
rediscovering your entire application architecture. Obviously, you then used some monitoring tools, which
is the next big important piece, in order to understand what that is, because you're not just
going to be able to know. And if you try to update it in a wiki or something else,
things are going to be changing.
Someone's going to say, hey, let's change the size.
As you did, you smart-sized your application.
So now instead of it being on a 4 CPU, 16 gig AWS box,
you drop it down to 2 and 8 or something, whatever it might be,
keeping track of that stuff manually is going to be very, very difficult.
So that's where your monitoring comes into play.
And then once you all got set up and running with that,
you tackled the,
OK, now let's start trying to break these into services, and it was a bit easier
from there. The other thing I would say is, you know,
from a monitoring point of view, you brought up two really,
really great points. You know,
number one is trying to right size things, especially when
you're moving into the cloud. One of the biggest challenges with the cloud is explosive costs.
Developers oftentimes will be like, well, it's not my money. It's just the cloud. And so they
spin things up, they write code, however they want to write. And Andy and I talk about this a lot on
the show. You can write anything and put it out there and it'll run. But how much resources
are you paying suddenly in the cloud? Or not even you, you don't even have to, you know, developers,
no one even has to ask for hardware anymore. They just order something and someone else is writing
a check, you know, the CFO sitting there going, what's happening to our operational expenses?
So number, you know, great first thing to do, as you mentioned, right sizing your application,
which means again, knowing, you know, having great monitoring information about that.
But number two is when, especially when you're moving these monolithic applications into
a cloud or trying to change something, understanding the interconnectivity of all of it, right?
So until you understand how these pieces all fit together, you're not going to be able
to figure out how to break them all apart and make them more efficient.
And that all comes from, you know, some good monitoring as well. So,
you know, I guess kind of summary is know your environment, know your environment and know your
environment. And when you move it, relearn it, because it's going to change. I don't know, that's
my attempt at a summary.
That's amazing. I think that was a pretty good summary. As you were talking through
it, I would say the one last piece to add into that is what those tools give from a transparency
standpoint to the broader organization as well. Right. It's not just about the monitoring team,
it's the entire team. And that's where that whole DevOps play comes in and including the
business teams, right?
Well, I think people nowadays are a lot more aware that, you know,
DevOps kind of encompasses more than just dev and ops.
There's business and all that.
But that obviously helps the business then guide what they're going to do in the future
if they know what they are doing now.
Again, I really appreciate you coming on.
To all of our listeners,
I just bumped into Pat at a meetup.
So for all of our listeners,
if you have a story you want to get on the show,
you want to share some of the stuff you've been through,
reach out to us.
You can contact us on Twitter at @pure_DT,
or you can send an email to
pureperformance@dynatrace.com.
Pat, are you on Twitter, if you want to get followers, or what's your website?
I don't know if you're hiring or anything.
Definitely hiring. We are sambasafety.com.
Okay.
And Twitter is @PatrickKemble.
All right. Excellent. Well, Patrick,
thank you. Or Pat, Patrick, all of the above. Thank you so much for being on.
Since we're recording this before the new year, happy new year, happy holidays, all that to you.
And best of luck with your 2019 challenges you'll be dealing with.
Likewise.
Enjoy the holidays and the new year.
All right.
Thank you.
Thanks, everybody.
We'll see you.