PurePerformance - Demystifying DevTestSecOps: Automating Security into your Culture with Adam Auerbach
Episode Date: March 30, 2020

Three years ago, Adam Auerbach explained how he helped Capital One automate performance into the DevOps delivery pipeline. In 2020, where IT security is a hot trending topic, Adam works for EPAM and is back advocating for the same shift-left he has advocated for when it comes to functional or performance testing. But now it's about baking security into your practices and culture. And he has a cool word for it: DevTestSecOps!

Listen in and learn which types of security checks can be fully automated in the different stages of the delivery pipeline. Also learn how to prioritize your vulnerabilities, as you will most likely end up with a lot of noise in the beginning. Adam also highlights the following open source tools that will help in that transformation: getcarrier.io and reportportal.io.

https://www.linkedin.com/in/adamauerbach/
https://getcarrier.io/#about
https://reportportal.io/
Transcript
Discussion (0)
It's time for Pure Performance!
Get your stopwatches ready, it's time for Pure Performance with Andy Grabner and Brian Wilson.
Hello everybody and welcome to another episode of Pure Performance.
My name is Brian Wilson and with me as always is my wonderful co-host Andy Grabner.
Good morning Andy or good afternoon.
Yeah, good afternoon here at the time of the recording.
It's already dark outside because it's still winter, and the sun is just, I don't know, escaping at least this part of the world too soon in the day.
Yeah, well, that happens. Every day the sun goes down, and then every day the sun comes up. The next day the sun will come out...
All right, anyway, I'm sure everybody wants to hear me sing. How have you been, Andy?
Not too bad.
I just spent, at the time of the recording,
I just spent a week in Greece,
thanks to our friends from Neotys, who organized the PAC event,
the Performance Advisory Council.
I want to give a big shout-out to Henrik and Stefan,
who organized the event.
It was great.
Great event and great location.
Yes, and I think we're going to be having
a guest from there soon,
so that'll be wonderful as well.
Always a lot of great things that come out
of those performance advisory councils
that they set up.
So I am really excited to hear from you
at some point,
some new things we learned from that.
And speaking of new,
we're going to go to the old, but good old. We're going to reach way back today, and thank you for reminding me and looking this up. I think it was three years ago.
What episode was it, Andy?
It was episode number 12. Episode number 12, we had a guest,
and a lot of time has passed and a lot of new things have been worked on by our guest,
and now he's returning.
Would you like to introduce our return guest, Andy?
Yeah, of course.
So, back three years ago,
episode 12 was called Automating Performance
into the Capital One Delivery Pipeline,
and it was Adam, and the way I pronounce his last name
with my German background is Auerbach.
But I guess we want to hear from Adam himself how he's pronouncing his last name.
And also, Adam, I want to know, a lot of things have changed over the last three years.
You moved on from what you did at Capital One back in the days.
And I think you also changed a little bit of scope, from CI/CD pipelines and DevOps into DevSecOps and DevTestSecOps,
which is the topic we're going to talk about today. So Adam, welcome back to the show.
And tell us more about what people should know about you and then we dive into the topic.
Yeah, thanks, Andy. Thanks, Brian, for having me back. I can't believe that was three years ago.
I remember sitting and having conversations with you. So, yeah, so Auerbach.
Every time I go into Germany, into Frankfurt Airport, the person at customs says, Sprechen Sie Deutsch?
And I say no, and they say, your name is Auerbach.
How could you not speak German?
Yeah, so, you know, my experience at Capital One was fantastic.
It seems like for me that was like higher-level education, you know, where they were with DevOps and moving to the cloud and getting to continuous delivery. And I was able to parlay that into consulting. So I work for a company called EPAM Systems. It's a US-based
company, but most of our resources are offshore in Eastern Europe. So Ukraine, Belarus, Russia.
And then we have, of course, resources across the EU and India, North America. It's about 36,000 employees, over $2 billion in revenue.
And so as part of EPAM, I am the co-head of what we call our Cloud and DevTestSecOps group.
So when I joined two years ago, testing was a separate practice. DevOps was a separate practice. There wasn't really security at all.
And cloud was still forming.
And so myself and the gentleman who was leading the DevOps practice, we created this big conglomerate.
It's 9,000 people at EPAM.
And basically what we said is that, you know, the market has changed.
People are now trying to do what Capital One has done.
They're trying to move to the cloud.
They're embracing DevOps and Agile, and they're struggling.
Their testers need to change the way that they're working.
Developers need to architect and code differently.
As you want to move to the cloud, you need to have all of these pieces in place. So we built this practice as a way to help transform the resources at EPAM, but also build out a suite of open source accelerators and have a go-to-market strategy that best suits the evolving needs of our customers.
So I've been here for two years now,
and we've had many engagements,
like Capital One,
where we're coming in
and helping build out the business case
and baseline the KPIs
around why someone is transforming.
And then we have a whole engagement model
to help drive transformation,
whether it's just CICD or testing or if it's everything because, you know, you're moving to the cloud or you want to modernize.
So it's been really exciting.
Last year, I traveled 42 times to nine different countries.
Wow.
Yeah, I mean, I've been meeting a lot of great people and just talking to a lot of diverse companies and a lot of the same struggles are there for everybody.
But it's been a really great experience so far.
Hey, before we dive into the details of DevTestSecOps and what people can learn from you and what you guys are doing and telling your clients,
Let me go back to what you just said earlier.
You said you typically go in
and then you try to find out
what's the business case.
And I would be interested in
what are the typical business cases
or the KPIs that you try to optimize
and make better with your initiatives?
Because I also get, I talk to a lot of people,
and they always say, well, we know that automation is important.
We know we have to invest here, but what's the real benefit in the end?
And kind of, I know this is a tough question maybe,
but what are the things where you sell to the business
that this is why we need to invest a lot of time in
automation, where we need to invest in education, where we need to invest into the cloud? What are
the typical business drivers and business motivations that allow a company to say, okay,
we're really putting down money because now we understand the business benefit?
Yeah, so, you know, what surprises me is that most people don't understand how well, or not well, IT is performing.
And so that's what we do. We spend maybe six to eight weeks looking at applications.
And what we do is we document the entire value stream.
So we do value stream analysis and we look at everything from how the application is architected,
the branching patterns, looking at what's automated and manual from a CI process,
and then documenting all of the times. And so from a metric perspective, the first one is lead times.
So how long does it take? And a lot of the metrics that I'm going to quote from are referenced from
DORA, DevOps Research and Assessment, and the State of DevOps Reports. But lead time is usually
the first one. How long does it take for a commit to work its way through all of your non-production environments and get to production?
So that's the first metric that we look at.
And that gives us a baseline.
And then from that, we can pinpoint specific inefficiencies within the process.
Let's say they have monthly releases.
How much of that monthly release is now spent on testing?
How much of it is spent on manual activities? How much is spent on rework? And so we can start to build the case for getting new features out the door faster,
maybe twice as fast, maybe going from quarterly to monthly or monthly to biweekly or to daily.
That's one metric.
The other metric is productivity on the teams.
So we can look at the backlog of the team and say,
how much time and effort are they spending on fixing defects,
production issues, just the churn amongst themselves, be able to look at some of the history within Git to see what they're doing.
And we can come up with some metrics around developer productivity so that we can then say, hey, not only can we get you to market faster, we can put more into your releases because of the opportunities with waste.
And then production quality is the other one.
This represents a real cost that the teams are facing.
So if they have more than 5% of defects leaking to production, that's a real cost. And so that's the third one,
where there are true cost savings from being able to implement real quality gates,
and maybe change some of their processes and how they work. To us, this is how we help
not only educate them on what's bad and why it's bad, but then the opportunity for the business
that if they make this investment,
here's what they should expect on the return.
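The lead-time baseline Adam describes, commit to production deploy, can be sketched in a few lines. This is a minimal illustration with invented timestamps; in practice you would pull commit times from Git and deploy times from your CD tooling:

```python
from datetime import datetime
from statistics import median

def lead_times_hours(changes):
    """Per-change lead time (commit -> production deploy) in hours.

    `changes` is a list of (commit_ts, deploy_ts) ISO-8601 pairs, a
    stand-in for data pulled from Git and the CD tool.
    """
    deltas = []
    for commit_ts, deploy_ts in changes:
        commit = datetime.fromisoformat(commit_ts)
        deploy = datetime.fromisoformat(deploy_ts)
        deltas.append((deploy - commit).total_seconds() / 3600)
    return deltas

# Invented sample data: three changes and when they reached production.
changes = [
    ("2020-03-01T09:00:00", "2020-03-08T17:00:00"),
    ("2020-03-02T10:00:00", "2020-03-20T12:00:00"),
    ("2020-03-05T14:00:00", "2020-03-09T09:00:00"),
]
print(f"median lead time: {median(lead_times_hours(changes)):.1f} hours")
# -> median lead time: 176.0 hours
```

Tracking the median rather than the mean keeps one long-running change from hiding overall improvement in the baseline.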
So what's always interesting for me is,
I mean, it's great that we have companies like yours
that you're working for right now,
EPAM and others that are coming in
and helping organizations figure out what's going wrong right now
and how to improve things.
What still always puzzles me is
how can something actually end up
in a state like this
where people don't even know what's going on
and where they have to ask for outside help
to sit down and do something,
quote unquote,
basic as a value stream analysis.
I know it's not that easy, but still, it's not a concept that is completely new.
But I guess it just sometimes takes external folks to come in and bring an outside perspective
and actually then get people on the table and say, let's sit down, let's look at the
things and let's figure out what we can improve. But again, it still puzzles me sometimes that organizations get to the stage
where one side does not at all know
what happens with the artifact once they push it over
and where things are manual and where things are broken.
Yeah, you know, they're just...
It's a great point.
It's not how people are rewarded or recognized.
It's not something that is asked to be tracked.
And sometimes companies just, you know, they've been through a lot in their history.
And so they lose the perspective of like taking a step back and looking at what's going on.
They're just kind of happy with the progress that they've made.
And then also like they just get kind of stuck in like this is how it's always been.
Or, you know, these problems we've always faced.
And so, like, it is what it is and we can't do anything about it.
And, you know, we do sell the assessment, like you can buy it. But the goal is not to do an assessment for the sake of an assessment. The goal is to drive improvements. But I could probably go on a whole other podcast around motivations, around why people don't see
it for themselves.
I think it's as easy as looking at, like, if you have kids, you know that when you tell your kids something is cool, or something they should do, you know, try this vegetable or check this out, all they do is resist. It just takes someone from the outside to do it. I was trying to teach my daughter how to ride her bike a few years ago, and she was just fighting me the whole time. And then her sister's caretaker was like, oh, I'll take you to the park and show you. And she looked up to her as an outsider, and at the end of the day, she was riding her bike. I think a lot of it comes down to human nature: those close to you, you don't quite listen to, or take to heart the value of what they're suggesting, as opposed to when it's somebody from the outside, especially if you're paying somebody from the outside as a paid consultant to come in and look at this. I think there's just a lot of human nature involved in that aspect.
Yeah. And I think a lot of these companies,
they go to conferences, they read books, there's wonderful podcasts like this.
And so they try to take some of that information and apply it, but they don't, you know, they don't have all of the context and experience.
And so, you know, you see that with Agile right now where many people would say, yes, we're Agile.
We have teams, we have backlogs.
We don't have chairs in our meeting rooms.
We do stand-ups.
But you know, as an Agile expert,
if you actually go in and look at what they're doing,
many of them are still waterfall, right?
They're not delivering working code to production every sprint.
They still require some type of release readiness.
There are activities that are happening outside the team.
And they just don't have that experience to understand that, yes, some of the things you're doing are good, but you've missed the boat in other areas.
And you're not getting the value for why you did this in the first place. And the last thing I'll say about it is like,
as a consultant, you know, I have to justify the cost. I have to justify the experience,
you know, why you're going with us. And so for me, I have to have this point of view
to do a discovery, to be able to baseline, and be able to educate and then deliver. But when I was at Capital One,
to some extent, I could just go. If I was able to at least have a conversation and get someone
bought into what I wanted to do, I could just do it. I never really had to make too much of
a business case for something. I never had to go back to metrics and argue my case.
And I think that's the other piece here: company cultures often don't require business cases and justification and metrics around value. And so, you know, there's no reason to look at it differently.
It's a great point.
So, coming to the... well, thanks for that. I really wanted people to understand what the business drivers are, or how somebody that wants to change something can actually approach the change,
and sometimes it just means
you know reach out to somebody external
like you
in your profession
and then bring them in
and then figure out
through a value stream analysis
where things are not perfectly working
and then optimize from there.
Now, coming back to the main topic,
DevTestSecOps.
First of all, I'm sure you hear this a lot,
but why do we artificially create
a lot of new words around DevOps
and then put more stuff into the middle?
Obviously, we do the same thing, right?
We come up with a lot of words that kind of are sexy to read
and then hopefully get the attention of people.
DevTestSecOps.
Can you give me a little background on why this is different to DevOps?
Why is it different to DevTestOps or TestPerfOps?
What's the special thing about DevTestSecOps?
Yeah, so what I find is that people still don't understand DevOps.
You still get people who ask... like, actually, before we did this recording, I had a salesperson reach out to me and say, hey, I have a client.
They're interested in DevOps. And I asked him, what does the client mean by DevOps? And he said, I don't know. And then he went back in his emails, and the client's definition of DevOps was operational support for the infrastructure and applications.
There are people who think about DevOps as just, you know, automation for CI.
And then there are those that really understand about DevOps being a philosophy around flow and quality.
And so that's the first challenge,
is while DevOps is a wonderful movement
and a wonderful word,
people, it's just so vague right now.
There are DevOps engineers and DevOps organizations
and DevOps quote-unquote tools.
And so because DevOps itself has become so overused,
it's lost some of its meaning and sense.
So we came up with DevTestSecOps a little bit tongue-in-cheek.
Number one, every time we say the name,
everyone has the same reaction where they smile and laugh.
And their eyes roll, right?
Yeah, it's like, yeah, we could have come up with a better name.
But at the end of the day, to really do DevOps, you have to take all of these things into account.
Now, granted, CI and automation, testing, performance, security, that's probably not all of it, right? It also includes infrastructure and architecture and branching. There are other aspects that are included, but these are the areas where we see the biggest opportunity.
People have started to automate their CI. They are understanding that they need to shift responsibilities to the agile teams and the developers. The team has to have holistic ownership of being able to ship working code into production early and often. But we still see that performance and security are left out.
We still see that while they're focused on CI and the cloud,
they haven't really addressed testing yet,
or they're minimizing the impact of testing.
And so, right now, DevTestSecOps is there because these are the glaring opportunities that they have not addressed yet.
Well, I would say especially security, right?
I mean, we had too many incidents, let's call it that way,
too many incidents over the last years that were shining a bigger spotlight
on security.
And then the question is, how can we address security
not only when it's kind of too late
and then just mitigate the impact
by trying to fix the holes fast in production
and making sure that
not too much information has leaked
to how can we prevent this?
And how can we bake this
into your delivery process?
How can we bake this
into your mindset of your organization?
Security has to be, at least I would assume,
a non-functional requirement just as performance should be
and scalability should be.
Yeah, I mean, it's interesting.
There's, you know, back in the day when computers were getting faster
and the internet came around,
performance was, was the big thing, right? That, you know,
websites were crashing, they couldn't handle demand.
And so now we had to start having performance tests.
And then the reaction of the corporations was let's go create a performance testing organization. They're going to bring in the tools.
They're going to do all this work.
And what ends up happening is you get this big bottleneck,
this bureaucracy that people don't want to deal with.
And so there's, you know,
performance testing happens really late at the game.
And if they find something really big,
most likely they're not going to address it
until a future release.
And it still has to be prioritized. That has since changed as tools have changed. And, you know, now we have many more devices and interactions and performance really has become a first-class
citizen. Because if my app or my phone takes anything more than a second to load, I'm going to freak out and move on to the
next application. So, you know, I see like the same thing has happened in security that happened
in performance where we created the security organizations. We then said everything has to
go through security testing, or at least at some point during the release, we have to certify it. Then there's this process for sending your
binaries to do dynamic scans or pen tests, et cetera. And it becomes like a burden on the team.
And ultimately they're just trying to do the bare minimum to check the box. And then ultimately
they'll have an issue, if they do it at all. And so this is why DevTestSecOps, or DevSecOps, is so important,
because it really has to evolve like functional testing has evolved,
like performance testing has evolved.
It has to be in the pipeline.
It has to be a gate.
It has to be part of many gates.
And we have to get more integrated into the way developers are working so that it's not as painful.
It's easy to understand why something is throwing an error and be able to resolve it quickly. So the short answer is: as people are setting up
their pipelines, security has to be part of that equation. And so we need to be able to look at,
okay, we're doing a pull request. Well, like what security tests can we do right now? Can we do the
static security scans? Can we do code smells? How do we bake this into a gate so that if there is an issue,
the code isn't merged and it's not deployed to the next environment?
How do we do dynamic security testing as part of our checkout in QA?
How do we ensure our threat modeling is up to date?
All of these things have to be worked into our CI process,
into our agile methodology,
and we need to get out of these fiefdoms
of security organizations
and really get more integrated into engineering.
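The merge gate Adam describes, where a scan's findings decide whether code moves to the next environment, might look like this in its simplest form. The finding format and thresholds are invented for illustration; a real gate would consume the output of your actual SAST tool:

```python
def security_gate(findings, max_high=0, max_medium=5):
    """Decide whether a build may be promoted to the next stage.

    `findings` mimics what a SAST tool might emit; the thresholds are
    illustrative, not a recommendation.
    """
    high = sum(1 for f in findings if f["severity"] == "HIGH")
    medium = sum(1 for f in findings if f["severity"] == "MEDIUM")
    return high <= max_high and medium <= max_medium

# Hypothetical scan output for a pull request.
findings = [
    {"rule": "sql-injection", "severity": "HIGH"},
    {"rule": "weak-hash", "severity": "MEDIUM"},
]

if security_gate(findings):
    print("gate passed: merge and deploy to the next environment")
else:
    print("gate failed: block the merge until findings are resolved")
```

Wired into CI, a failing gate would fail the pipeline step, so the code is never merged or deployed, which is exactly the behavior described above.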
And in terms of that, I was curious with, you know,
when you're running security scans and you're looking at all the different
threat assessments in there, is there also a role,
and maybe this is in the mindset of breaking down those barriers between the
silos of understanding and prioritizing the threat assessments.
So if you have maybe a service that's deep in the woods, has no egress or ingress, but
it has a security vulnerability, but really no way for anyone on the outside to access
it, knowing where that, you know, how does a team know where that sits and to say,
yes, this is a security threat, but it's not exposed and it's much lower priority
than something that's maybe exposed to the outside world. Is that part of that cycle at this point?
Or is that something even that's wise to be considering if you understand what I'm getting at there?
Yeah, I mean, I look at that as two aspects.
Number one, it's the same thing as within functional testing, right?
The testers have to understand the application architecture.
They have to be able to understand
what the APIs are doing,
what different methods are doing in order to be able to do more white box testing and more risk-based testing.
And the same thing goes here.
Security tests throw a lot of errors.
There is going to be a lot of noise.
And so the team needs to be empowered to be able to look through that. And yes, you have to understand your application's architecture so that you can prioritize those findings better.
So that way you know what you have to react to and what you don't have to react to.
This goes back to the early days, when you would have somebody doing your dynamic security testing, and then they're going to come up with a report.
And then you have to go through the report, justify what areas of your application are being most used from your end user perspective, and then be
able to prioritize your results based on that.
I mean, there's a couple of different methods there to make sure that you're able to react
accordingly to the security threats that are most at risk.
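The prioritization Adam and Brian are discussing, discounting findings on services with no outside exposure and weighting by real user traffic, can be sketched as a simple scoring function. The multipliers and findings here are invented, purely to show the shape of the idea:

```python
def risk_score(finding):
    """Weight a raw CVSS severity score by network exposure and real usage.

    The exposure multipliers are invented for illustration: an isolated
    service with no ingress/egress is heavily discounted relative to an
    internet-facing one, and `usage` reflects how much end-user traffic
    actually hits the affected area.
    """
    exposure_weight = {"internet": 1.0, "internal": 0.5, "isolated": 0.1}
    return finding["cvss"] * exposure_weight[finding["exposure"]] * finding["usage"]

findings = [
    {"id": "CVE-A", "cvss": 9.8, "exposure": "isolated", "usage": 0.2},
    {"id": "CVE-B", "cvss": 6.5, "exposure": "internet", "usage": 0.9},
]
ranked = sorted(findings, key=risk_score, reverse=True)
print([f["id"] for f in ranked])
# -> ['CVE-B', 'CVE-A']: the internet-facing medium outranks the isolated critical
```

This is exactly the deep-in-the-woods case Brian raises: a nominally critical vulnerability on a service nobody outside can reach scores below a medium one on the public edge.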
Yeah, I see a lot of parallels between security testing and performance testing,
you know, coming down to understanding what you're testing for, understanding
what your system is, understanding what it's designed to do. Like a classic,
you know, classic example in performance testing is, you know, testing a search function
using unique searches 100% of the time.
So nothing is cached, but it's not real world scenario of testing because people are going to be searching common terms.
That's a very basic example.
But I imagine in the security realm it's very similar: knowing what you're looking for and understanding the architecture of your systems, to know, when you have a flag raised
up, where does that sit in the architecture and what that level of security is or what that level
of threat is in the real world? Yeah, I mean, you definitely, absolutely. I mean, I think
with everything that we're talking about, you know, communication is key,
you know, being able to talk through with architecture and engineering and security to
understand exactly that model, that usage,
like it's so important that it's not just, you know,
some group blindly going and trying to find vulnerabilities without really
having context.
I mean, I think it's an underlying theme in many of the things that we're doing nowadays,
that we have to be talking, we have to be aligned, we have to understand why are we
doing something so that way we can put the best plan forward and then we all can be accountable
for its success.
And when you talk about understanding DevOps, I think that's part of the core of DevOps
is all the communication and collaboration
between all the teams so that there's a unified effort.
At least for me, that is, you know.
Yeah, no, I totally agree.
And, you know, it goes into how the more educated
the team is around things that they're not used to,
how the network topology is architected,
how the cloud VPCs are set up,
why we're doing things from a security perspective or the data transports,
the more educated they are on the systems that they weren't exposed to before,
the better decisions that they can make as they're architecting new features for the platform
versus them doing something
and then later on someone says,
no, you can't do that,
or worst case, it goes all the way through production
and it creates some type of risk and issue.
Hey, Adam, I know you said,
you started earlier explaining a little bit
what type of security checks can be done
in which, let's say, stages of your pipeline. But it would be interesting for me, and I also assume for the listeners trying to figure this out: if they don't have any security tests and checks in the pipeline right now, what are the must-do's in each individual stage? So you mentioned earlier: what can the developer, for example, do on their local workstation
before even committing code?
Or what checks can be done and what tools can be used when they do a pull request?
What can be done in, I don't know, in a dev environment, in an integration environment?
Are there any guidelines, like, it makes sense to do these types of security checks in this particular stage in order for you to capture enough, yet not waste time trying to find things that don't make sense in this environment?
Yeah. I mean, the answer is yes. You know, obviously you should have a perspective of what the destination state for your pipeline is, so that way you can have a repeatable pattern across teams. But definitely, in the very beginning, there should be some type of threat modeling that you're looking at across your entire architecture that's really going to drive your whole approach. Ideally, before commit, you're doing some type of vulnerability analysis, so SAST, or maybe an open source dependency scan. When you have your build, again, static security tests and the open source scans. Dynamic security testing I would do a little later in the process, after your pull request.
Maybe after I'm doing some of my basic functional tests.
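Pulling this answer together, the stage-by-stage checks might be captured as a simple pipeline definition. The stage names and check lists below are one possible reading of the conversation, not a prescribed standard:

```python
# A sketch of the stages discussed, encoded as a pipeline definition.
# Check names are illustrative examples, not tool endorsements.
PIPELINE_SECURITY_CHECKS = {
    "pre-commit":   ["secrets scan", "static analysis (SAST)"],
    "pull-request": ["SAST gate", "open source dependency scan"],
    "build":        ["SAST", "dependency scan", "license check"],
    "test-env":     ["dynamic scan (DAST)", "basic pen test"],
    "production":   ["recurring vulnerability scans", "intrusion detection"],
}

def checks_for(stage):
    """Return the security checks configured for a pipeline stage."""
    return PIPELINE_SECURITY_CHECKS.get(stage, [])

for stage, checks in PIPELINE_SECURITY_CHECKS.items():
    print(f"{stage}: {', '.join(checks)}")
```

The point of writing it down as data is the repeatable pattern mentioned above: every team's pipeline can consume the same definition rather than reinventing which check runs where.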
And just for clarification,
dynamic security testing means you are doing pen testing.
You're trying to figure out if you have any SQL injection issues.
For those people that are not that familiar
with security testing,
what's the difference between static security testing
and dynamic security testing?
I know this is a basic question, but it's still valuable for our listeners.
Yeah, and I'm not a security engineer,
so I hope I don't butcher the answer.
But the way that I always look at it is like static security testing,
the code's at rest, right?
So I'm going through the way the code is written, how my methods are interacting, and I'm looking for bad patterns, you know, anti-patterns in how the application is written. With dynamic security testing, the application is compiled and running, and I'm using the application itself to inject,
like you said, SQL injection,
or trying to get the application to spit out codes
or do something it shouldn't do to expose a vulnerability.
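As a concrete illustration of the two approaches: the string-concatenated query below is the kind of anti-pattern a static scanner flags in code at rest, while a dynamic test would find the same hole by sending an injection payload to the running application. This is a deliberately minimal, self-contained sketch using SQLite:

```python
import sqlite3

# Vulnerable: the query is built by string concatenation -- the anti-pattern
# a static (code-at-rest) scanner flags.
def find_user_unsafe(conn, name):
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + name + "'"
    ).fetchall()

# Fixed: parameterized query; user input never becomes SQL syntax.
def find_user_safe(conn, name):
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"  # the classic injection input a dynamic test sends
print(len(find_user_unsafe(conn, payload)))  # 2: every row leaks
print(len(find_user_safe(conn, payload)))    # 0: payload treated as a literal name
```

The same defect is visible both ways: statically, as a tainted string reaching a query; dynamically, as the running app returning rows it should never have returned.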
So Adam, any other testing that can then be done
before an artifact hits production?
Anything else we can do besides the dynamic testing that we just talked about?
Yeah, I mean, obviously the static security scanning at build time, being able to do code smells, being able to do the open source scans, all of that stuff should be done, maybe even some risk profiling. And then the other thing here
is secrets management, so making sure that IDs and whatnot aren't hard-coded
or being leveraged improperly.
Yeah, so there's probably a whole suite of other things that I'm missing,
but I think those are the basic stuff.
Yeah.
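The secrets-management check Adam mentions is often implemented as a pattern scan over the source tree before commit. The two patterns below are illustrative only; real scanners such as gitleaks or truffleHog ship far larger rule sets:

```python
import re

# Illustrative patterns only; real scanners ship far larger rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"""(?i)(password|secret|api_key)\s*=\s*['"][^'"]+['"]"""),
]

def scan_for_secrets(text):
    """Return (line_number, matched_text) pairs for suspected hard-coded secrets."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern in SECRET_PATTERNS:
            match = pattern.search(line)
            if match:
                hits.append((lineno, match.group(0)))
    return hits

source = 'db_host = "localhost"\npassword = "hunter2"\n'
print(scan_for_secrets(source))
# -> [(2, 'password = "hunter2"')]
```

Run as a pre-commit hook, a non-empty result would block the commit, which keeps credentials out of Git history in the first place, where they are much harder to remove than to prevent.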
There's an interesting discussion we had.
We had Kelsey Hightower on the previous podcast,
and, well, I'm sure you all know Kelsey, the Mr. Kubernetes.
And he's also, you know,
talking more and more about security.
And I thought part of the discussion that we had,
because I asked him,
so how does he see the future of Kubernetes?
Will everybody just have a big Kubernetes cluster
in production?
And then you separate your stages in that cluster,
like dev, staging, and production.
He said, no, no, that doesn't really make sense
because you obviously need to test
all your different version updates beforehand.
And not only that,
you also need to test for security vulnerabilities
you bring in not only by, let's say,
upgrading your Kubernetes cluster to the latest version,
but also by upgrading your operators
because a lot of people, obviously,
in the Kubernetes world rely on operators
that automate a lot of the tasks
that have to do with lifecycle management of artifacts.
And I was wondering,
this was an interesting discussion.
Have you seen,
especially in the Kubernetes world,
your clients dealing with security
and if so, how?
Are there any things
that might be different
than what we just talked about
or is this just the same thing?
Well, I mean, I think it's a similar approach.
The key would be how you're managing your clusters,
you should have a similar pipeline to be able to manage your upgrades
and patches and be able to run different tests before integrating
those changes into the other CI process.
So that's the only thing is, in essence,
you end up getting like almost everything
has its own set of pipelines, right?
And the similar processes for running tests
and doing validation before it's being used by other people.
And so that would be the key is creating a process where those things are happening.
And then the engineering or the application team's pipelines are pulling from a repo that
has already been certified.
So that way, those changes have gone through testing prior to the teams leveraging them.
But Kubernetes, I mean, it definitely is, you know...
I won't talk to anybody about going to the cloud,
even if they're still on-prem, without the first step being containerization, and also, you know, perhaps re-architecting their application
to better take advantage of the containers
before actually moving to the cloud.
But Kubernetes right now is pretty much the standard.
Yeah, yeah.
And now moving to the final stage, production,
how do you deal with security there?
Is there anything in particular that links it also to the DevOps pipeline?
Is there anything where any best practices, anything you've seen out there,
how you can also, let's say, first of all, obviously,
automate the detection of any security issues in prod as part of a delivery pipeline,
but also feeding it back to pre-prod for the next cycle
so that you can continuously innovate and continuously improve your DevTestSecOps
practice?
Well, I mean, you definitely need to be doing monitoring and logging.
There should probably be like recurring vulnerability scans on the apps and on the infrastructure itself,
looking for compliance issues, intrusion detection, right?
I mean, there's definitely things that you can be doing in a production
from a monitoring and operation perspective.
And yeah, I mean, at the end of the day, you do need to take these things back. You
do need to be able to integrate them back into your pipeline as new gates or new tests in order
to close the loop. Being able to take those logs and be able to drive different new tests off of those is important.
But yeah, I think in today's world, there's so much more happening in production, you
know, through telemetry and other types of tasks.
I mean, that's how people are moving as fast as they can
is being able to move things into production
and then be able to monitor and react very quickly,
whether it's canary builds or blue-green or other items.
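That monitor-and-react-quickly loop for a canary can be illustrated with a minimal sketch. The threshold and metric names are assumptions for the example, not a production policy.

```python
# Illustrative sketch: compare a canary's observed error rate against the
# stable version's and decide whether to promote or roll back -- the kind of
# fast production reaction that canary or blue-green deployments enable.

def canary_decision(stable_error_rate, canary_error_rate, tolerance=0.01):
    """Promote the canary only if its error rate stays within tolerance of stable."""
    if canary_error_rate <= stable_error_rate + tolerance:
        return "promote"
    return "rollback"

decision = canary_decision(stable_error_rate=0.02, canary_error_rate=0.025)
```

In practice the inputs would come from your monitoring/telemetry backend rather than hardcoded numbers, and the decision would trigger the deployment tooling.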
And also, and the audience doesn't know,
but we had a side chat earlier
and I brought up our open source project,
Keptn, which is an event-based system
for continuous delivery, quality gates,
and also auto-remediation in production.
One of the first adopters,
and one of the teams that actually was very interested
in Keptn internally within Dynatrace,
because we also use Keptn internally, was our security team, trying to figure out how to
integrate, you know, these different types of security checks, both in pre-prod, meaning as
quality gates where Keptn will be able to pull data out of the security testing tools and then enforce quality gates,
not only based on your functional metrics
and your performance metrics,
but also based on security assessments.
But then the other use case was also,
what if we detect a security vulnerability in production
and how can we then automatically remediate
that security vulnerability?
And depending on the vulnerability, obviously, there's different actions to take.
But I would be curious to have, probably, a follow-up discussion with you.
And I know you said you have some other folks that are a little deeper in the weeds on the
different toolings and the best practices, and how we can potentially also integrate
Keptn better with other security approaches.
I think that would be awesome. Yeah, it definitely sounds like a really valuable tool. We have something, it doesn't do the security remediation,
but we have an open source tool called Report Portal. It's reportportal.io.
And it's not branded EPAM,
but it is one of our largest open source tools in the market.
And it uses machine learning to help triage test results. So it was born out of the need to triage failed UI automation.
So as we all know, UI automation is very flaky.
And so we wanted to dig more into natural language processing.
And so we used Elasticsearch, and we were able to build out this engine where you could plug in your functional automation tests, and it can learn how
to recognize a framework issue versus an application issue versus a network issue,
and really speed up the defect triage process. So in the last year, we've extended this open
source tool to be able to do the same thing for security tests.
So we have clients now that will plug in their security test results into Report Portal.
And then Report Portal's natural language processing engine can start to determine what's a real failure versus what's a false fail,
and then integrate back into your CI tool, as well as your agile
management tool, to create tickets and stop the process
when there's an issue that it recognizes, you know, has the need for inspection.
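As a toy illustration of that triage idea only: Report Portal itself uses an Elasticsearch-backed analyzer, not simple keyword matching, and the labels and patterns below are invented for the example.

```python
# Toy sketch of failure triage: classify a failure log line as a framework,
# application, or network issue by matching it against labeled patterns.
# Anything unmatched is flagged for manual inspection rather than guessed.

def triage(log_line):
    """Return a coarse failure category for one log line."""
    patterns = {
        "network issue": ("connection refused", "timeout", "dns"),
        "framework issue": ("stale element", "webdriver", "no such element"),
        "application issue": ("assertion", "http 500", "unexpected value"),
    }
    text = log_line.lower()
    for label, keywords in patterns.items():
        if any(keyword in text for keyword in keywords):
            return label
    return "needs manual inspection"
```

The real value described in the episode comes from learning these distinctions from history instead of hand-writing them, but the input/output shape of the triage step is the same.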
And the whole purpose is being able to use Report Portal as a quality gate.
So yes, maybe there's an opportunity to look at that,
but I definitely would like to see what you're doing with Keptn as well.
Yeah, that's awesome.
I mean, our approach in Keptn is that you are defining your SLIs,
your service level indicators,
and then your SLOs, your objectives.
And the SLIs are tool independent,
so you can pull in data from any type of tool,
and then we automatically calculate the score for the deployment
and then decide whether it's good or not.
It seems like we should definitely talk. That's for sure.
That's pretty cool.
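That SLI/SLO evaluation can be sketched as follows. This is a deliberately simplified illustration, not the tool's actual scoring algorithm; the metric names and thresholds are made up, and the security SLI is included to show how the same gate covers more than performance.

```python
# Simplified SLI/SLO quality gate: each tool-independent SLI value is checked
# against its objective, the score is the fraction of objectives met, and the
# deployment passes only if the score clears a threshold.

def evaluate_gate(slis, slos, pass_threshold=0.9):
    """Return (score, passed) for measured SLIs against their SLO limits."""
    met = sum(1 for name, limit in slos.items()
              if slis.get(name, float("inf")) <= limit)
    score = met / len(slos)
    return score, score >= pass_threshold

slis = {"p95_response_ms": 420, "error_rate_pct": 0.4, "critical_vulns": 0}
slos = {"p95_response_ms": 500, "error_rate_pct": 1.0, "critical_vulns": 0}

score, passed = evaluate_gate(slis, slos)            # all objectives met
score2, passed2 = evaluate_gate({**slis, "critical_vulns": 3}, slos)  # vuln SLO missed
```

The design point is that the SLIs can come from any tool, whether a performance test backend or a security scanner, and the gate logic never changes.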
Well, reportportal.io.
Yeah, and I don't want to
diminish Keptn at all. Report Portal
looks really cool, but I love Keptn. I've been playing with
it a lot myself. But I do want to make a suggestion to the
listeners that every time Andy brings
up Keptn, it's turned into a drinking
game.
Keptn is
wonderful. I'm not trying to put it down at all.
As you know, Andy, I spent a good
weekend messing around with it.
But yeah, every time
we're on a new episode, I'm like, when's it coming?
When's it coming?
That's great. That's how you get the word out, right?
So yeah, this report portal looks really, really cool.
I just popped it up while you were talking about that, Adam.
Hey, and Adam, I know also before we started the podcast recording,
besides Report Portal.io,
you also mentioned another open source project of yours.
Carrier or something?
Yeah, so it's called getcarrier.io. And getcarrier.io is a
platform to help get you started on this journey. So it takes advantage of your cloud infrastructure,
being able to spin things up. You can use it for performance tests and security tests.
So basically you hook it into your CI. It allows you to spin up your security tests
or your performance tests, run those tests,
and then collect all the data in an InfluxDB database.
And then you have Grafana dashboards.
It also comes out of the box with Report Portal.
So you have that capability as well.
And this is how we're helping get clients
up and running with security and performance tests into their pipeline by being able to
install this platform and then start building out tests and starting the execution. So
it's been like, we call it an accelerator because it really helps get you over that initial hump of getting things installed and figuring out how to integrate them.
So here's all the most popular open source tools already integrated into a platform that you can just start to execute.
You just need to have some tests run.
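As a rough sketch of the collection step Carrier automates here: a CI stage can encode each test result in InfluxDB line protocol and POST it to the database's write endpoint, from which Grafana charts the trend per build. The measurement, tag, and field names below are invented, and this is simplified; real InfluxDB line protocol also distinguishes integer fields with an `i` suffix.

```python
# Illustrative sketch: encode one security-scan result in InfluxDB line
# protocol ("measurement,tags fields timestamp"). Only the string
# construction is shown; Carrier wires the actual write into the pipeline.

import time

def to_line_protocol(measurement, tags, fields, ts=None):
    """Build one line-protocol record; timestamp is in nanoseconds."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    ts = ts if ts is not None else int(time.time() * 1e9)
    return f"{measurement},{tag_str} {field_str} {ts}"

line = to_line_protocol(
    "security_scan",
    {"app": "checkout", "stage": "ci"},
    {"high_findings": 2, "medium_findings": 7},
    ts=1600000000000000000,  # fixed timestamp for a reproducible example
)
```

A CI step would then POST lines like this to InfluxDB, and the Grafana dashboards mentioned above query them back out.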
That's awesome. I mean, it sounds like this covers a similar workflow and process
to what we are doing with Keptn on executing tests
and then capturing the results and then deciding whether it's a good
or not good build, basically dynamic build validation.
That's awesome, yeah?
Yeah, I mean, that's the key.
When people originally started
putting security tests into the pipeline,
you know, they would kick off the tests,
but usually it's a separate pipeline,
separate process,
because someone would have to go through
and figure out, like,
okay, what are all these things?
And that made it slower.
It wasn't in the critical path.
And then, you know, it starts to become a little bit more optional. And so there's been a lot of time
spent on, like, okay, how do we make this process more automated so that it can be, you know,
executed in that same process.
And it sounds like a lot of what you're doing is similar. Again, drawing
another parallel to the performance side, where we had the old-fashioned waterfall big bang
performance test at the end of the cycle, which became a huge liability. So companies started
foregoing that a lot of times, or trying to sneak builds past the performance team, you know,
sneak them out there, and then pay the price when performance took a hit. And over time, how we've seen the performance practice change is going into much
more smaller targeted testing, how to integrate that into the whole pipeline process so that it
doesn't dramatically increase lead time. I feel like there's a strong parallel there with security testing where, you know,
normally, as you even say, a lot of people put it in the non-critical path, or it's something
that really slows things down.
And what it sounds like you all are doing is helping organizations find a way, you
know, to implement those best practices, to work it into their pipeline and their workflow so that it does
not become a huge end-of-the-road test with a lot of, you know, unknowns, or a gigantic
report where you don't know which findings to address, putting it in at the right spots, making it
actionable. So that's going to help, I think, adoption of security. But I'm curious, from your experience, how many people are still coming in with,
let's reevaluate our security testing, after they have an incident, versus how many organizations
are being proactive and saying, we want to take this seriously before anything happens,
let's start doing it right. Is there a trend changing there at this point in time? It definitely is starting to change.
Unfortunately, I think it's only now starting to change.
I am seeing more RFPs come down the line that have security called out from the start.
And those are companies that haven't yet, God forbid, had, you know, an incident.
But there's still many companies that are waiting. And, you know,
if you're a financial institution, I think you've gotten the message at this point.
But I think the other industries are still, like, hedging their bets.
And we keep hearing about them. There was just that
other one, what was that, that company with all the facial recognition, or the photos? They
were tied to Facebook, they're doing a database of facial recognition. It was just in the
news like two weeks ago, but they got hacked, and their huge database of people and all their
pictures were stolen. Yeah, wonderful stuff.
Yeah, I mean, listen, at the end of the day, and we
talked about this in the very beginning, at the end of the day, this takes an investment.
This takes time and engineering. It takes dollars from the business to put this stuff in place. And so, unfortunately,
it's still somewhat of a prioritization call for the company to make this investment, because
it's not as clear. Again, it's the same conversation that we had with performance testing. You know, it's not as clear of an ROI for security testing as it is for their other features.
And so, again, depending on who's making those calls and how they prioritize, like it could be
something that's just not in scope. It's very similar, like a car analogy, right? Where you
can build a car that goes 100 miles an hour and everyone loves it, but your performance testing is, well, what happens if it's raining outside? You know, can it
perform in the rain? And security is, you know, do you have airbags in it? Do you have tires that
aren't going to rip apart? Do you have seat belts? These are the non-sexy parts about designing
a really cool car. Just like everyone wants to create great software and put it out for everyone
to use, but part of great software is it being performant and it being secure.
And those are the pieces that get no glory and that also slows down the process.
And I think where the industry is going right now is trying to find ways,
and they're successfully finding ways, as they did with the change of performance testing
and with what you're doing with security, as to how to
bake that in with
much less of an impact than the traditional
ways took.
Awesome.
Hey Adam, is there any
final advice, any
final thoughts on if people
get started down that route of
DevTestSecOps
practices that people need to know about?
The first thing is just to start small.
Static security scans are pretty straightforward.
You can build it with SonarQube.
If you're doing unit tests,
you can start to build some of this stuff out.
Just start working pieces of it into your process.
And then you can start to move on from there.
I think the other piece is bringing security testing into the conversation.
If there is some other group that's responsible for it, bring them in.
Start setting expectations with them that, hey, like we're no longer going to be
dependent on you to do this type of testing. So we need you to help integrate this into this
process so that the teams can be more accountable. So to me, that's the two things I would start
with. Start small, start getting integrated into the pull request process. And then
start having some conversations around how to make it more automated and make
the team more accountable.
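In that spirit of starting small, a pull-request gate can be as simple as a script that parses the scanner's report and fails the build on blocking findings. The JSON shape and severity names here are invented for illustration; a real scanner's report format would differ.

```python
# Minimal "start small" sketch: read a scanner's JSON findings and decide
# whether the pull request may merge. Critical/high findings block; lower
# severities are reported but do not fail the build.

import json

def gate(report_text, blocked_severities=("critical", "high")):
    """Return (passed, blocking_findings) for a JSON list of findings."""
    findings = json.loads(report_text)
    blocking = [f for f in findings if f["severity"] in blocked_severities]
    return len(blocking) == 0, blocking

report = json.dumps([
    {"rule": "hardcoded-secret", "severity": "critical"},
    {"rule": "weak-hash", "severity": "medium"},
])
passed, blocking = gate(report)
clean_passed, clean_blocking = gate("[]")
```

In CI this would exit non-zero when `passed` is false, which is all it takes to make the scan part of the critical path instead of an optional side pipeline.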
Cool.
That's amazing.
All right, Brian.
Yes.
Are we summoning the Summaryator?
I think we're summoning the Summaryator.
It's been a while.
It's been a while.
I know.
And I keep it short though, right?
Are you ready?
Do it now.
I am ready.
Are you ready?
I am ready.
All right.
So folks, what I learned today, and Adam, this is just the way it works in the end.
I'll try to summarize what I learned.
And the DevTestSecOps practice is not just another cool marketing term that nobody understands what it really means.
Or people have a lot of different opinions, like with DevOps.
I think that's what we learned today. But joke aside, I think what I learned today is that a lot of organizations still struggle with not knowing where their inefficiencies are.
And that goes beyond security.
And I really liked what you explained in the beginning, that sometimes bringing in an external group, a company, you know, a consultant, to do a value stream analysis and then figuring
out how the processes work, how they can be automated and optimized.
And then also thinking about how security can be baked into the process, but not like
in the old days, like we did it with performance where it was always at the end of the cycle
and the big bang performance test, even though we didn't have time anyway.
And at the end, everybody just gave the go because we had kind of the milestone to reach.
So I think just as we have done with performance
over the last couple of years, where we shifted it left,
broke it down into smaller pieces,
and automated it into the pipeline, into the mindset of people,
we have to do the same thing with security.
I think we have to become more security aware.
We have to educate ourselves
to become security-aware organizations,
development organizations.
And thanks for the overview
of all the different options
we have to integrate security
into our delivery pipeline,
whether it's static code analysis,
threat modeling, vulnerability analysis,
dynamic security testing.
There's a lot of things you mentioned today.
And also very happy that you mentioned your open source projects,
reportportal.io and getcarrier.io.
Would love to talk more with the folks behind these projects on your end,
also how we can integrate them with our open source project Keptn.
Because I think together
we can come up with the tooling
that will make it as easy as possible
to automate the delivery pipeline
and integrate the necessary testing
and all different aspects,
security, functional,
and performance into the pipelines
so that bad code,
whether it's badly performing
or poorly secured,
will never make it into production.
I think that's what I learned today.
It's pretty awesome.
Well, thank you, Summaryator.
I want to add one thing I learned today besides all that,
just adding to a growing list of interests,
coming from a performance testing background.
We recently were speaking with Adrian Hornsby about resilient systems and
chaos engineering. So that sparked my interest in terms of, if you're in the testing realm,
when you then look at testing for resiliency, that's still a framework of testing. There's
that sort of scientific approach to, let's set up things and test for it and then collect the results.
And I think this security side of the testing adds to that.
So if you're in any of these areas, there are quite a lot of new things that
people are finally starting to adopt seriously that all seem very exciting to me. So hopefully
they sound exciting to other people. Plenty to learn and plenty of ways to grow and further
your career by looking into some of these other things and becoming part of that team.
Not only will it help your organization as a whole,
the more you're aware of all these other things,
but you might find yourself more attracted to one of these other areas that
you may want to grow into.
So always worth looking into and learning something new.
Adam, thanks a lot for being a repeat guest three years apart,
but it was great that you came back.
So thank you very much for being here again today and letting us,
educating us on some of the security stuff.
Yeah, thank you for having me. I appreciate the opportunity.
Great. Hopefully we'll talk to you soon and maybe not three years.
Well, maybe not three years, but yeah.
Anytime you've got something new, or, you know,
one thing we love, and we're going to try to get some more of,
is any good war stories.
We have a guest coming up in a few episodes
who's going to tell us more.
Obviously, you can't talk about any specific organizations.
But if you have any interesting stories from the field, we'd love to have you back on to find out what happened, what the lessons learned were, and, you know, how people can maybe avoid or learn from an example that
might have been encountered in the real world.
So if before three years something cool comes up, definitely reach out to us.
We'd love to have you back on.
Awesome.
Thanks, guys.
Thank you.