PurePerformance - The State of Cloud Native Security with Anais Urlichs
Episode Date: July 11, 2022

Security is everyone's business. And as everyone seems to be moving to cloud native, it's important to understand what the security landscape in k8s, containerized apps, serverless, … looks like. To learn more about this we invited Anais Urlichs (@urlichsanais), Developer Advocate at Aqua Security and CNCF Ambassador of the year 2021. Over the past years Anais has educated thousands of people on cloud native, DevOps and security on her YouTube channel. Tune in and learn more about the different approaches to security in cloud native, which open source projects are out there, and hear her advice on embedding security in your day-to-day work.

Some additional links we discussed can be found here:
Anais on LinkedIn: https://www.linkedin.com/in/urlichsanais/
Anais on Twitter: https://twitter.com/urlichsanais
Trivy: https://github.com/aquasecurity/trivy
Weekly DevOps Newsletter: https://anaisurl.com/
WTFisSRE Talk: https://www.youtube.com/watch?v=0zL61AiOaK0
Anais's YouTube channel: https://www.youtube.com/c/AnaisUrlichs
Aqua Open Source YouTube Channel: https://www.youtube.com/channel/UCb4mfRT5UWpjoUQRcIE2qOQ
Transcript
It's time for Pure Performance!
Get your stopwatches ready, it's time for Pure Performance with Andy Grabner and Brian Wilson.
Hello everybody and welcome to another episode of Pure Performance.
My name is Brian Wilson and as always we have with me my wonderful co-host Andy Grabner.
How are you Andy?
I'm good. I always try to make you laugh, but you never laugh. I just don't look at your picture. I look elsewhere.
This way I don't have to see it and it doesn't hurt my eyes. Yeah.
No, but I'm doing good, except I am, we have, I think,
the third heat wave now in June already.
Oh, yeah. And it's getting a little, yeah, it's like I'm sweating.
It's not even August yet.
Not even August yet.
It's just the end of June.
Yeah.
So, I don't know.
Somebody needs to do something about this.
Buying an air conditioner.
Buying an air conditioner is not as easy.
We all can do something about this.
Exactly.
It's all on us.
And you know what we can also do, all of us?
We can all focus on security problems.
Ah, you know, because global climate change, whatever you want to call it, that will become a security problem.
And we can see, yeah, it's becoming a security problem, as even
the Pentagon recognizes. So I think this is
a fantastic segue, Andy.
You are so, so
amazing. I just, I don't
know how Gabby deals
with how awesome you are.
It must be really hard.
I don't know. Ask her.
Which actually reminds me, we also need to get her on the podcast.
But today, we don't have Gabi,
we have Anais,
and I hope I correctly pronounced it somehow.
Yes, Anais.
Anais, perfect.
Anais, for me, it feels like
we have yet another star on the show
because you're very active on YouTube.
You have a great channel
where you talk about the 100 days of Kubernetes.
You do a lot of talks around GitOps, DevOps.
The reason I'm saying yet another one,
we had Nana from TechWorld with Nana on as well
a couple of months ago.
And it's really awesome what you're doing
and how you educate the communities.
So thank you so much for that.
Thank you.
Yeah, great. But I want to give you the chance to introduce yourself, like, who are you? Because some people may not know you.
Yeah, so I am the open source developer advocate at Aqua Security.
Before that, I was working as a reliability engineer for a smaller startup in the UK called Civo.
And before that, I was kind of changing around a lot as a contractor.
So I got started in the cloud native space at the end of 2020 when I was looking for something new,
for exciting new technologies that I could learn about and document.
And that's how I ended up in this space.
And I have a YouTube channel and weekly DevOps newsletter.
So if you're interested in that, check out the links, I guess.
Yeah, that sounds somehow familiar, and kind of also a little bit in parallel with Nana, actually, because
she started with Kubernetes at A1,
which is a large telco here in Austria.
And then she took a lot of notes in the early days of Kubernetes.
There was just not great documentation.
And then she converted all of that into YouTube videos. And that is great because you basically say,
I struggled with a certain technology.
I figured out a way how to do it.
Now let me educate others.
And that is pretty awesome.
Yeah, that's kind of the approach to say, hey, I'm learning about it, multiple other people are
likely learning about it right now as well. Let's just write it down the way that I'm approaching
it. And it doesn't only help people who are getting started, but I found also maintainers
of open source projects, it helps them quite a lot to improve the project if they see how newbies use it.
And then the reason why I invited you to speak,
because I saw you present at WTF is SRE
and you had a talk about cloud-native security.
And I was just fascinated by it
because I think security is everybody's job these days,
like writing
good code in general, and performant code, and actually coming up with software ideas that actually make
sense and provide value to our communities and societies. But can you tell us a little bit about
what gets you excited about the security space? What's happening right now?
What do you show and what do you educate people on when they ask you about application security?
So I predominantly talk
about open source projects. When people ask me about cloud-native projects, security-related
projects, I talk about open source projects in this space. And generally what gets me excited
is that those tools are becoming more and more accessible
to everyday engineers, everybody, basically.
A lot of people still perceive security
as being this really niche field
or that you need to have previous experience
to get into or to understand what's going on
or to use security-related tools.
But a lot of the security tools in this space,
in the cloud-native space, security scanners
such as Trivy, Aqua Security's open source security scanner, are actually becoming a lot
more accessible to any engineer, anybody who's using any form of engineering resources.
And that's something that is really interesting to me, like, not coming from a security profession.
I still want to make sure that my workloads, my applications are secured.
I want to understand what's going on in there,
what issues some of the dependencies that my application has might be having
and how I can fix those issues, right?
So yeah, that's definitely one component.
Yeah.
Do you see then, because I just, I think you just said you came off a webinar.
I also just came off a webinar focusing on application security,
where it was all about vulnerability detection.
So are vulnerable libraries being loaded?
I know there's different aspects of security scanning,
because you mentioned scanning, right?
One of them is, I guess, scanning during build time.
And then there's also, I think, eBPF is a big topic, where you can use eBPF to then figure out
does an application access certain system calls
and maybe does things it shouldn't do.
You mentioned Trivy.
What does Trivy do?
How does Trivy work?
So Trivy is our open source security scanner from Aqua.
And you can use it in multiple different ways.
And I'm saying all-in-one security scanner because it really made this move just in the past
weeks, actually, towards all-in-one security scanning.
Before it was largely focused on vulnerability scanning.
So what kind of vulnerabilities are in your container images,
in your packages, and also misconfiguration scanning. So within your infrastructure as code
configuration, within your Terraform configuration, what vulnerabilities, what misconfigurations are
in there, how could you improve those configurations to become more secure, to make
your deployments more secure. But now it's also providing other features, such as generating SBOMs, software bills of materials,
of your workloads.
And also Trivy has another component
called the Trivy operator, which is a Kubernetes operator
that you can install in your cluster.
And it does continuous automated scans of your workloads.
So it's then scanning the workloads running within your cluster
not just for vulnerabilities
and misconfiguration issues,
but also for any exposed secrets
that might be within your cluster,
as well as RBAC scans within your cluster,
within your different resources.
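As an aside for readers: the reports a scanner like Trivy produces are machine-readable, so they can be post-processed. Below is a minimal Python sketch that tallies findings by severity; it assumes a report shaped like Trivy's JSON output (a top-level `Results` list whose entries may carry a `Vulnerabilities` list), and the sample report is hand-written for illustration, not real scanner output:

```python
import json
from collections import Counter

def count_by_severity(report_json: str) -> Counter:
    """Tally vulnerability findings in a Trivy-style JSON report by severity."""
    report = json.loads(report_json)
    counts = Counter()
    for result in report.get("Results", []):
        # "Vulnerabilities" may be absent or null for clean targets.
        for vuln in result.get("Vulnerabilities") or []:
            counts[vuln.get("Severity", "UNKNOWN")] += 1
    return counts

# A tiny hand-written report in the assumed shape, for illustration only.
sample = json.dumps({
    "Results": [
        {"Target": "app/requirements.txt",
         "Vulnerabilities": [
             {"VulnerabilityID": "CVE-2021-44228", "Severity": "CRITICAL"},
             {"VulnerabilityID": "CVE-2022-0001", "Severity": "MEDIUM"},
         ]},
        {"Target": "Dockerfile", "Vulnerabilities": None},
    ]
})

print(count_by_severity(sample))  # Counter({'CRITICAL': 1, 'MEDIUM': 1})
```

The same idea can gate a CI pipeline, for example by failing the build when the `CRITICAL` count is non-zero.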
So you just mentioned eBPF, for instance.
eBPF is really like another solution.
And we also have an open source tool called Tracee
that uses eBPF under the hood for runtime security.
And it's really, like, on a lower level, I would say,
in comparison to Trivy.
It's really just running within your cluster
and it's providing you continuous scans of your workloads.
But it
doesn't provide you an analysis of the actions on those resources. So a lot of eBPF tools that will
continuously monitor what kind of changes and actions are being taken on your resources and
maybe alert you if specific actions are taken, or specific resources are being modified, versus Trivy, which performs continuous
scans on the static resources within your cluster, so whenever there are, for example,
new deployments. So it's on a much higher level, and it's, I guess, more accessible in that way
to a wider audience of cluster operators.
I wanted to ask, Andy, you're both mentioning
eBPF. We've talked about that before, but more from a performance monitoring capability, correct?
So what I'm hearing is eBPF can be used from a security point of view as well.
That's news to me.
I haven't heard that before.
I don't use eBPF in any way, shape or form, but it's come up on the show a few times with
people looking into going like really deep on what's going on under the hood.
So that's just fascinating to me.
Yeah, so with eBPF, the main application that I found is observability of the performance of your workloads.
And then networking is another big part, right, with projects such as Cilium.
And also runtime security and forensics of what are the activities that are taking place
within your cluster, also on the node level,
not just on the deployment level.
And the thing is, with eBPF, you can
do the monitoring in a more performant, efficient way,
versus if you, for example, have sidecar containers running with each deployment that
do certain monitoring tasks. It would be, I guess, less effective for certain applications.
Yeah.
I saw, again, coming back to DevOps Days Amsterdam, Liz Rice, she is from the Cilium project,
and she actually did a keynote and showed how to use eBPF.
For instance, you know, I think the way eBPF works, you basically specify a program that
should run before or after a certain system call is made.
So when an app is, let's say, making a file open request or a file write request. And she was showing how to use eBPF to figure out
which process is currently accessing the password file, and then you can decide: should this
really happen, yes or no? And you can actually then also see parameters that are passed to these
system calls, and therefore you can also do all sorts of things, like: who is accessing functions
that shouldn't be accessed? Are there any vulnerable parameters, patterns being used?
And I think she also showed one interesting thing where she was even then blocking calls, right?
Let's say, hey, this process should never access passwords.
Therefore, I'm blocking that call, or I'm just killing that process to make sure that process is no longer there.
And this is basically also what we do with our project Tracee, which attaches itself to system calls in the kernel.
And you can specify through Rego,
which kind of processes should not be executed or which ones you want to be
alerted by. Yeah.
So that means you can specify policies. Is that what you do?
You can specify in addition to that. So, basically, as part of Tracee,
we have a set of signatures, they're called,
that monitor the system calls.
So a set of signatures that are related to specific system calls and they monitor, for example,
if certain access tokens are used within your cluster
and things like that
and which kind of applications use those access tokens.
And then you can also specify additional signatures through Rego that Tracee is supposed
to monitor. But out of the box, we have a set of signatures
that we define as, like, this should definitely be monitored.
Yeah.
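To picture how such signatures work, here is a hypothetical toy in Python, not Tracee's actual engine (Tracee captures real events via eBPF and evaluates Rego policies): a list of syscall-style events is matched against a rule that flags unexpected access to a credential file. Every name below is invented for illustration:

```python
# Hypothetical sketch of signature-style matching over syscall events.
# Tracee itself captures real kernel events via eBPF and evaluates Rego
# policies; every process name and rule here is invented for illustration.
events = [
    {"process": "nginx", "syscall": "openat", "path": "/var/www/index.html"},
    {"process": "cryptominer", "syscall": "openat", "path": "/etc/shadow"},
    {"process": "backup", "syscall": "openat", "path": "/etc/shadow"},
]

# Each "signature" names the behaviour it detects, a match predicate,
# and the processes that are expected to exhibit that behaviour.
signatures = [
    {
        "name": "credential-file-access",
        "match": lambda e: e["syscall"] == "openat" and e["path"] == "/etc/shadow",
        "allow": {"backup"},
    },
]

def evaluate(events, signatures):
    """Return (signature, process) pairs for unexpected matches."""
    alerts = []
    for sig in signatures:
        for event in events:
            if sig["match"](event) and event["process"] not in sig["allow"]:
                alerts.append((sig["name"], event["process"]))
    return alerts

print(evaluate(events, signatures))  # [('credential-file-access', 'cryptominer')]
```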
In our world, right, Brian, we have kind of the good old performance patterns.
These are patterns that we have seen for years and years. We always talk about the N
plus one query problem, where an application, instead of making, you know, one call,
is in a loop calling the database multiple times. Do you see this in the security space as
well? Like, if you do a security scan with a company, do you already expect that you'll see these five
things, because even though it's a well-known security problem, people are still not taking care of it?
That's a great question.
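For readers unfamiliar with it, the N+1 pattern Andy describes can be sketched with a generic SQLite example, one query per row in a loop versus a single aggregated join; this is an illustration, not tied to any tool from the episode:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total INTEGER);
    INSERT INTO users VALUES (1, 'ada'), (2, 'grace');
    INSERT INTO orders VALUES (1, 1, 999), (2, 1, 500), (3, 2, 1250);
""")

# N+1 anti-pattern: one query for the users, then one more query per user.
def totals_n_plus_one(conn):
    totals = {}
    for user_id, name in list(conn.execute("SELECT id, name FROM users")):
        row = conn.execute(
            "SELECT COALESCE(SUM(total), 0) FROM orders WHERE user_id = ?",
            (user_id,),
        ).fetchone()
        totals[name] = row[0]
    return totals  # 1 + N round trips to the database

# Fix: a single aggregated join answers the same question in one round trip.
def totals_single_query(conn):
    rows = conn.execute(
        "SELECT u.name, COALESCE(SUM(o.total), 0)"
        " FROM users u LEFT JOIN orders o ON o.user_id = u.id"
        " GROUP BY u.id"
    )
    return dict(rows)

assert totals_n_plus_one(conn) == totals_single_query(conn)
print(totals_single_query(conn))
```

Both functions return the same totals, but the first issues one query per user, which is exactly the kind of well-known pattern a scanner or profiler can flag automatically.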
I think generally that's one way how misconfiguration scans
are being configured.
So most of the tools that you find in the open-source cloud
native space that scan for misconfiguration issues,
they will have, out of the box, a set of configuration issues
defined that they are looking for, whether
that's in your Dockerfile, in your Terraform, in your Kubernetes manifest. They will
check for something that is continuously done wrong by users, something that
a lot of people get wrong.
Yeah.
So I think that's how those security scanners are ultimately being developed, based on what
do security researchers actually find people do and do wrong.
And something that also drives our open source tooling is the security research
at Aqua,
at the enterprise level. They have their own research
team where they deploy honeypots and see, okay, what strategies do hackers use and what are some
of the common issues. They deploy vulnerable images, for example, and other
resources. And then based on their findings, that will kind of determine the way that a lot of the
security scanning tools will evolve.
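Such out-of-the-box checks can be pictured as simple rules over a parsed file. The following Python sketch shows two classic Dockerfile checks (an unpinned `latest` base image, and a missing `USER` directive); it is a hypothetical illustration, not Trivy's actual rule engine:

```python
# Hypothetical sketch of misconfiguration rules, not any scanner's real engine.
def check_dockerfile(text: str) -> list:
    """Return human-readable findings for a Dockerfile's text."""
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    findings = []
    for ln in lines:
        if ln.upper().startswith("FROM "):
            image = ln.split()[1]
            # An untagged or ":latest" base image is not reproducible.
            if ":" not in image or image.endswith(":latest"):
                findings.append(f"unpinned base image: {image}")
    # Without a USER directive, the container runs as root by default.
    if not any(ln.upper().startswith("USER ") for ln in lines):
        findings.append("no USER directive: container runs as root")
    return findings

dockerfile = """\
FROM python:latest
COPY . /app
CMD ["python", "/app/main.py"]
"""

for finding in check_dockerfile(dockerfile):
    print(finding)
# unpinned base image: python:latest
# no USER directive: container runs as root
```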
Can I ask you a question about this honeypot thing?
Because that just blew my mind.
So you're saying what they'll do is they'll put out basically a treasure chest somewhere out in the internet
and just observe how people hack into it and then use that data.
Yeah.
That's pretty cool.
That's their main, like from what I've understood in the past month since I joined Aqua, that's their main objective,
that they really put out vulnerable container images and other resources.
And they obviously keep an eye out
on specific hacking groups
and other entities
that might try to exploit those resources
and then try also to make the resources
especially attractive to them
to see how they are going to target those.
That's fascinating.
I mean, how else are you going to get that data?
I just never... It's a completely new thought, and what an amazing way
to do that.
That's pretty cool.
The only question that I would then have:
how would a hacker know about this? I mean, with a honeypot,
I guess, is there like a, you put something out
and you announce it, like, hey, please hack it?
Or is it more subtle? I don't know,
just wondering how this works.
I guess it's ideally more subtle, right? To get the real-world
data. Exactly, yeah. Because you don't want them to know, right? Yeah, you want to trick
them. Yeah, exactly. It's like every once in a while you hear about, there'll be a tweet from a police station
and it'll be, hey everyone, we found like a kilogram of cocaine.
If it's yours, please come claim it.
And like somebody, some idiot will randomly show up and then we're like, okay, you know,
similar kind of vibe I get from that.
I mean, I guess you can say, hey, we are a certain agency and we're currently dealing
with a security issue, even though you know you don't.
But then it means hackers would be saying, hey, this is cool.
They're dealing with a problematic situation right now.
So let's exploit that.
That's pretty amazing.
Yeah.
You go ahead.
It allows them to put out vulnerable resources
that are similar to maybe other resources that are commonly used in the space,
other open source resources that are commonly used in the space, right?
But you wouldn't want to proactively try to attack those existing resources that people use necessarily, right?
So it provides an alternative strategy to see, okay, how can those resources be exploited by monitoring how they're actually being exploited
In other words, chaos engineering for security in production is not a good idea,
but you do chaos... that's almost like staging. Chaos engineering for security, that's a good way of
putting it, yeah. That is actually a good segue. Question now: is this a thing? Have you heard,
I mean, do you do some chaos engineering in a, let's say, secure environment
where you can then actually test does your security detection actually work
and how do people then react to a security problem?
Is this a thing?
So I personally don't do that.
What I do is I try to replicate security issues
that I find online, so for example
within your Docker
containers or within your Kubernetes
clusters.
I try to replicate those
as part of my learning
journey, learning experience.
Yeah, but
it might be interesting
to extend that,
because in the end, right, if you think about it,
we had Log4Shell happening, or Log4j happening,
and it was like this event, but then all of a sudden,
you know, a lot of people didn't, I guess,
a lot of organizations didn't know how to react to it.
And therefore, one of the things that they did,
they put all of their resources on this particular problem.
But then, if you would practice these things on a continuous basis, right, you actually optimize your processes in detecting
vulnerabilities, figuring out who is really impacted, and then kind of getting the right people focused
on the work to fix the problem. I think this would make much more sense. But this means
you would need to constantly kind of exercise, and this is the whole idea with chaos engineering: you constantly bring chaos to your environment, and then you can
test your monitoring, your alerting, how people react. So maybe, like, really on purpose injecting some
insecure, like, vulnerable data.
Yeah, I guess. And something that I just highlighted in the previous webinar: a lot of times security is perceived as a reactive task, right?
So I'm not proactively trying to have security vulnerabilities within my cluster.
A lot of times I'm just reacting to those that maybe my scanners find or that if there's time for it, we can identify.
But if not, and if there's nothing identified,
oh, then there must be nothing there to identify, right?
That's mostly the approach a lot of projects or companies might take, versus with monitoring, monitoring your solution.
It's something that's very proactive, that you already have in place,
the tools.
And you can be notified if there's any services that's
down, or you could try to imitate a lot of the scenarios
that you might find that bring services down,
which would give you the experience,
versus with exploits.
It's something that is a lot of times very reactive.
And especially if there's nobody whose responsibility it is to do that on a
regular basis, then it's
nothing that surprises me that when the situation comes up where
something has to be fixed very, very quickly,
there's nobody with that experience who already has a process identified on how to best do it.
Yeah, that was kind of my whole thought with this as well, right?
Because the more you practice, the more people actually build up that experience,
so that you have them available in case a real problem happens.
Hey, so you're doing a lot of education on YouTube and with your talks and webinars and
I'm sure many other things.
I think security, well, Brian, right, we've covered it a little bit already over
the last couple of months.
It is definitely a newer topic; for us at Dynatrace, it's also a new topic,
because we also have a solution for it. But if you're approaching a new audience that you think should be
made more aware about app security or security in general, and then the tooling that is out
there, the practices, what would you give somebody that is, you know, curious, curious as
you were a couple of months ago? What advice do you give them? Obviously, besides watching your videos,
but what else is there that people need to know
when getting started?
I think a lot of it is really dependent on,
or I would generally suggest people
to not try to understand the theory and the details
of cloud native security or security solutions in general,
but really try to use tooling related to security and in relation to your existing, for example,
your existing application stack, because that will give you far more, I would say,
tangible outcomes.
A lot of times when you focus on the theory behind what are those exploits, what do they do?
How does our cluster work under the hood?
How do the different components within Linux work together?
And how is it ultimately exploited?
We can get lost very easily and it can get very easily very complex.
And that's something that I think is really unnecessary, especially at the beginning when you're getting started with security tools,
because ultimately, container exploits are at every level of our application stack.
It's not just at the container level.
Containers can be exploited at lower levels as well.
And focusing on all of those different components might get easily overwhelming.
So I would really suggest people to look at the security tools that are out there and try to get started using them and understanding,
OK, once they give you information, what does information mean?
What can you do about it? How can you adapt your application accordingly?
And that will give you, I guess, quicker and more tangible results to understand what's going on and how to react to it.
What do you think about responsibility in terms of,
if you look at security, there's different layers of security.
Where does one start and where does the other end?
Because I could bring up the question as a developer and say,
why should I care about security
if the network guys do their job correctly,
then we shouldn't have any problem.
How can you, I mean,
so I think it's a multifaceted question.
First of all, how can you convince people
that everybody has to care about security
everywhere along the stack,
but really then where are the boundaries?
Yeah.
So, I mean, I completely understand the developer you were just introducing,
their frustration from that perspective,
because a lot of times when we talk about shifting left,
shifting deployments and security left,
then we could end up just saying, okay,
the engineers have to do yet another task that they are not originally being paid for.
That's not originally their job. So it's just something else on top. And that's something that's especially crucial with security: if it's a highly reactive task, if it's just added
on top of somebody's existing responsibilities, it will easily be ignored, because it's just yet
another thing to do. So if there's nothing to do, then I'm not going to do it, I'm just going to
focus on the rest, right? And so that's something that I would be mindful about if I'm
implementing a security strategy within my team, and if I'm assigning ownership
of the security of our resources, and who is supposed to maintain security tooling, et
cetera. Now, the other thing you mentioned, that if another team would just do their job,
we wouldn't have any security issues.
That's not the case.
Ultimately, everybody who is dealing with any form of coding-related,
engineering-related resources will run into the issue of having to choose
one resource over another, one base image over the other base image,
one library over the other library.
And we'll have to make a decision of why
one should be chosen over the other, which version should
be chosen, and similar.
So it ultimately can only be effective
if everybody is taking ownership of some aspect of it.
Now, that doesn't mean that everybody should
take ownership of everything.
And that's also the issue when you have a specific security
team that everybody else within the team or the organization won't be empowered to look into the security aspects of their tooling, of their application stack.
And then it can easily just slip through and people can become overwhelmed when they are then tasked with fixing security issues as well.
So it's definitely, there are two components to it.
First of all, ownership should be assigned to everybody, but to different levels.
Yeah.
It is a pressing question, right?
Because, I mean, Brian, we see the same thing in other disciplines too.
Performance, same topic. Yeah, it's all about changing the culture at an organization
to get people to be invested in it.
It's tricky.
When we talk to developers
about possible automation tasks,
and they're like, well, I just have to get answers.
I know I can dive in and do this and spend a bunch of time.
And we're like, but you can automate all this other stuff
if you take a little time, which then frees you up.
And in every level of life, people get set in their ways
and change is hard until they see a direct benefit.
And I guess that's going to be the trickiest thing with security
is finding a way for
the different team members to see the benefit, right? Because how does it impact me,
how does it impact my job, or how is it going to make my job easier, right? That's always the
question. Anais, do you see in organizations an increased investment in security these days,
especially after Log4Shell?
Like with an investment, I mean, I don't know,
sending more people to security conferences,
doing more security training.
Do you see something like this?
Or is it just equally important as availability,
resiliency, performance, or what do you see?
Yeah, so I can't directly comment on the trends of what effect
issues such as Log4Shell had on the security space,
and how security-aware people are or want to become.
Something that's definitely visible is the investment in open source security projects
in the cloud
native space. So there are more and more projects that are popping up and expanding their security
scanning. So based on that, I would say there is more interest and more focus by engineers.
And the other thing is that it's something driven also by the cloud native culture itself, with GitOps, with other, I guess, ways of deploying and managing our resources.
As part of that, it's really easy to say, okay, security is going to become an integrated component in our stack as well, and something that everybody within the team will take responsibility for, to different levels,
versus before when there were clear distinctions
between who's supposed to manage what.
Now it's really like an all effort to different degrees
that an engineer might as well also create configuration files
for the deployments versus the DevOps team themselves.
So what about, do you think
penalties for security breaches should be much harsher?
I mean, one thing I see all the time, you know, every few months there's another
data breach: your data might have been accessed, here, we'll give you
a free one year subscription to Experian for credit monitoring, and then after that, you're automatically enrolled in something you have to pay for, which really is absolutely
no punishment for the company. Obviously, then you had a situation like SolarWinds,
which was a bit more severe because of the level it was at, but I don't think there was a direct
punishment to them except that their business got hurt severely. But do you think there needs to be a monetary incentive in terms of repercussions
for companies to actually start moving into this realm and taking it more serious?
Or are you seeing any kinds of trends towards people are starting to slowly adopting and
understand it? It's interesting how you express it, because when you say there should be a monetary incentive
or like in terms of, I guess, financial punishment for companies for not securing resources properly
and ultimately making user data, all your data accessible to people to exploit it.
Ultimately, companies that have a data breach, depending on the extent of the data breach
of the security incident, they will already feel monetary punishment or some companies won't even
survive maybe that security incident, right? Depending on the way they're handling it,
depending on the way they're reacting to it. And similar, some companies, depending on the
industry, might lose complete trust. For instance, if Aqua Security, as a security company, has a security incident, then people will lose major trust in us, right?
So in that way, I guess companies are automatically being financially punished. What I think, though, is that companies should just overall provide more transparency when issues do occur.
I think there's a lot of cases that we might not even hear about when there are security incidents
because the companies just not, they don't want to be transparent about it. And I think that
ultimately hurts the users and the ecosystem overall long-term much more.
It's similar to companies that disclose incidents, right?
Like if your services are down, people will realize your services are down.
So the company is already forced to provide information.
I think we don't have the same processes in place right now for security incidents; it's very easy to just cover them up,
versus actually being forced
by the user base to respond to them,
because we ultimately don't know about it, right?
So I think that is kind of where
I would say there's more work needed
in the way of the expectations
and the way that publicly companies
are being held responsible
and being forced ultimately
to be transparent about security issues.
I guess you could say the fear of financial repercussion would keep them from speaking it out, because even if they were transparent, then they would open themselves up to lawsuits.
So then maybe if you take away that possibility, they might be more transparent, which would allow the community to help.
Yeah, I don't know. It's a complicated issue. I just didn't know if you had any thoughts.
But of course, I have an answer. I don't.
Yeah, it's a great discussion about
how you would approach that, which company would get started, and
why they would want to start providing that, right? I remember, Brian, in the old days when we did a lot
of web performance optimization, we always talked about companies that were, let's say, going down
or offline during Black Friday, right? And then
everybody was talking about it. But I think
the difference between not being performant and not resilient
versus being insecure, there's a big difference.
Legality.
Legality, yeah, exactly.
I mean, that's the thing.
Very hard to compare because back then there was a time
when people were just saying, hey, and we went down.
I remember there was one company,
they were so proud of being down on Black Friday
because they used the marketing opportunity to say,
hey, we are so popular, our users even brought us down.
We're so popular, we couldn't sell our stuff.
We're so popular we can't make money.
I want to also be conscious of time,
but I do have a couple of more questions.
Because I just, you know, Google is my friend or whatever search engine,
and if I enter your name, there's a lot of things coming up.
First of all, congratulations to 10,000 subscribers to your YouTube channel.
Hey, when do you get the plaque?
Is that a million subscribers or?
A hundred thousand.
That's when you get the first plaque.
Yeah.
Okay.
We'll have to get you there.
Maybe this podcast can add five users to your following.
It's amazing. And then I also see
just one of the next results
is you're going to speak
at PlatformCon
or did this already happen?
No, June 10.
No, it actually just happened, right?
Yeah.
Incorporating security
into platform engineering,
the do's and don'ts.
Just a quick thing.
What is one do
and what is one don't?
One do? Where are my slides?
We can also link to the slides and say, if you want to know it, here's the link.
Check out the recording if you want to learn it. It's a really quick talk, 15 minutes,
but ultimately it's one of those things that
you want to make the security scanning of your resources as seamless as possible.
And as a platform engineer, you're trying to ensure that Kubernetes
management, for instance, is not this big thing that's so complex to do and so difficult
to understand and to master.
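For listeners who want to try the "as seamless as possible" idea in practice, one common pattern is running Aqua's open-source scanner Trivy as a CI step. A minimal, hypothetical sketch using the aquasecurity/trivy-action; the image name and the severity gate are illustrative assumptions, not from the episode:

```yaml
# Hypothetical GitHub Actions workflow; image name and thresholds are illustrative.
name: scan
on: [push]
jobs:
  trivy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Scan container image with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: "myorg/myapp:latest"  # hypothetical image
          severity: "CRITICAL,HIGH"        # only gate on serious findings
          exit-code: "1"                   # fail the build when findings exist
```

Gating only on critical and high findings keeps the check low-friction, which matches the "seamless" point above.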
And with a lot of
security tools, you can take a similar approach: at a high level, it doesn't have to be
difficult to master or to accomplish. And I really encourage everyone to... I will link to
all of the social media properties that you're active on, whether it's Twitter, LinkedIn, or YouTube.
What is next for you?
What's the next big thing?
Hopefully besides a summer break,
because I think we all deserve that. I wanted to say vacation.
Well, that is well.
So I think as a developer advocate,
you work towards having breaks from your content, from content creation in some way.
Right now, especially this month and May as well, I was giving lots of different talks.
And I realized that in between giving talks, you kind of need a break to catch up and create more content and demos, and also do your own research, based on which you can then
create new talks and understand as well what kind of material is needed within the space.
And so that's next for me: taking the summer to do that. And then there are some
more conferences lined up at the end of summer, and yeah, we'll see.
I think now I know where I missed you. I think it was KubeCon. You went to Valencia, did you?
Yeah, I did, yeah.
And I think that's why. Because we were trying, I think I was trying to at least say hi, and then we were so busy.
Well, there was the boat party. Dynatrace had the most amazing boat party, I know.
But the thing is, somebody had to give a conference talk at the same time. I was doing the lightning talk at the same time, and that's why I skipped the boat party.
I want to say that's a pity, but congratulations on the talk.
So yeah, but now my last question, because you said new content: you're always trying to see what is missing, like what type of content. I mean, unless you don't want to reveal a secret about what you're working on, but if you have picked up certain topics that need to be talked about, what is it?
I wasn't prepared for this question.
You can also say, you know, just follow my social media profile and you will see what the next thing is.
And Andy, stop asking these stupid questions.
Mind your own business, Andy.
Exactly.
I mean, the things that I think I need is really to show how security can be integrated into existing tooling. And it's
also something that I try to accomplish with my WTF SRE talk to really show one doesn't have to be
one or the other. It can be both combined. Actually, the goals of site reliability
engineering and security are highly overlapping, and you can accomplish both with similar tools.
So that's definitely something that I'm working towards more of,
providing more integrations and examples of how to integrate security tools
so it's not perceived as a separate thing that has to be managed
or being integrated somehow.
But yeah, ultimately, I really take things week by week a lot of times with my content: when questions pop up, or when I see that I'm not able to accomplish something that should be easy to do and that other people also try to do, then that's something that I'm working on.
Then if you're looking for a tool to integrate, now Brian, you know what's coming. There's a tool, a CNCF project called Keptn.
One of our use cases is to integrate with security tools and use it as part of a release validation.
That means Keptn is orchestrating your delivery and your auto-remediation,
and it's triggered by the new build.
We deploy it.
We then ask the security tool is everything good
or not and we ask the performance tool is everything good or not and then we are coming
up with a score, and based on that, drive the process forward. So we should look into maybe
getting a better integration between Aqua and Keptn.
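The release-validation flow described here (deploy, ask the security tool, ask the performance tool, compute a score) is what Keptn expresses in its quality-gate SLO file. A minimal, hypothetical sketch assuming Keptn's v0.x slo.yaml format; the SLI names and thresholds are illustrative, not from the episode:

```yaml
# Hypothetical Keptn slo.yaml; SLI names and thresholds are illustrative.
spec_version: "1.0"
comparison:
  compare_with: "single_result"
objectives:
  - sli: critical_vulnerabilities    # could be fed by a security scanner such as Trivy
    pass:
      - criteria:
          - "=0"                     # no critical findings allowed
  - sli: response_time_p95           # fed by a performance/observability tool
    pass:
      - criteria:
          - "<=500"                  # milliseconds
    warning:
      - criteria:
          - "<=800"
total_score:                         # the combined score that drives the gate
  pass: "90%"
  warning: "75%"
```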
That would be great.
Yeah, I actually spoke already with some contributors from Keptn about it,
but it hasn't happened yet.
Hasn't happened, okay.
Andy is a connector.
That's not the right word.
I'll try to.
I think you're right. I think Anais is in a similar role. What you do for Aqua, I try to do for Dynatrace and also Keptn, really helping developers
to get more out of their investment in observability.
And then also, however, on the other side is really connecting the different tool vendors
because there's so many use cases that we can deliver better to our end users by combining tools the right way.
Now, did we miss anything?
Do you want to say anything that we completely forgot to ask?
Or are you just ready for vacation?
More calls to go.
No, I can't think of anything
yeah
it's great
that's good
I think I have most of the links
but if you have
you know anything else
just send them over
we will add them to the
description of the
podcast once it airs
and yeah
I can only say thank you so much
it's always great to talk with people that
are active in a certain area. Because, you know, Brian and I, we always say the best thing about
this podcast is that we get to learn so many new things in so many different areas. And yeah,
having people like you that are willing to spend an hour with us.
Thanks for having me. It's great, yeah.
And I have an interesting bit of content,
idea for some content,
maybe if not you,
because you don't need the extra work, I'm sure.
But when you talked about going back and doing all the preparation and research
and discovery for your next series of things,
it dawned on me that most people,
probably including myself,
don't think of all the pre-work that goes into everything people go into for
doing speaking.
Like,
it's just like,
Oh,
I'm doing work and I'm going to talk about something I know about.
No,
you're actually going and workshopping this stuff and be interesting to see
like a little short documentary on what actually goes into all this stuff.
I don't know.
It's just the thought came to my head,
you know,
you could say that's the worst thing you ever heard, and I don't want to add
any more, and I don't think you should necessarily, but...
It's actually something that I think is really highly applicable to
developer advocates, who don't have consistent day-to-day tasks
that they're working towards, right?
So for us,
it's really consistently figuring out
what kind of integrations are out there,
what kind of integrations are missing,
what kind of use cases can be built
and developed on top of them, right?
And then providing that knowledge
or that tooling with the necessary explanations
and details, I think, yeah.
You could do something like, from zero, from idea to talk,
kind of like what's happening.
Talk about the talk.
Yeah,
a little bit meta there.
I'm going to go take and put all my financial
information now in a honeypot
and see what happens
Be on the lookout, hackers.
You can't hack mine. No, don't.
I'm kidding.
Thank you for being on.
We're going to wrap.
My name is Brian and I'm going to wrap the show.
Wow, that was amazing.
Thank you so much for being on.
We're getting slap-happy, and it's not even 11 a.m. for me yet, so I have a long day ahead of me.
Yeah, I'm over.
Yeah, more coffee.
Thank you so much for being on.
As Andy says, it's always amazing to have guests on and just learn this stuff.
I think we're all getting interested in this whole security aspect now.
So I really, really appreciate what you're doing. And just, you know, we, as people who like to help, you know, just even from having these shows, hopefully people are getting to learn
stuff. So I don't want to take a bunch of credit. I was like, Hey, we're helping educate people
because we're just having a fun show. But obviously some people are learning from it,
from us doing that to people like you and
others out there who are putting all this content out there and just sharing knowledge with people,
big thank you, because this is how we get everybody to learn, to get everybody to get better,
but also get everybody to develop the interest in these topics so that they can keep propagating and
security can keep improving and people will find, hey, I hate my job as a developer, but suddenly I
heard this thing about security and it's awesome.
And I now have a new spark to, you know, enjoy life again.
Because every developer is just sitting at a desk, miserable,
in my head, which is not true. I love you developers.
So thank you so much for being on and everything you're doing.
It's all the coffee. That's why I'm rambling now.
So yeah, thanks to everyone for listening, enjoy your vacation, and everybody stay cool this summer. Bye-bye.