Screaming in the Cloud - Trivy and Open Source Communities with Anaïs Urlichs
Episode Date: September 6, 2022
About Anaïs: Anaïs is a Developer Advocate at Aqua Security, where she contributes to Aqua's cloud native open source projects. When she is not advocating DevOps best practices, she runs her own YouTube channel centered around cloud native technologies. Before joining Aqua, Anaïs worked as an SRE at Civo, a cloud native service provider, where she helped enhance the infrastructure for hundreds of tenant clusters. As CNCF Ambassador of the Year 2021, her passion lies in making tools and platforms more accessible to developers and community members.
Links Referenced:
Aqua Security: https://www.aquasec.com/
Aqua Open Source YouTube channel: https://www.youtube.com/c/AquaSecurityOpenSource
Personal blog: https://anaisurl.com
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, Chief Cloud Economist at the
Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
This episode is sponsored in part by our friends at AWS AppConfig.
Engineers love to solve and occasionally create problems,
but not when it's an on-call fire drill at four in the morning.
Software problems should drive innovation and collaboration,
not stress and sleeplessness and threats of violence.
That's why so many developers are realizing the value of AWS AppConfig feature flags.
Feature flags let developers push code to production,
but hide that feature from customers so that the developers can release
their feature when it's ready. This practice allows for safe, fast, and convenient software
development. You can seamlessly incorporate AppConfig feature flags into your AWS or cloud
environment and ship your features with excitement, not trepidation and fear. To get started, go to snark.cloud slash appconfig.
That's snark.cloud slash appconfig. This episode is sponsored in part by Honeycomb.
When production is running slow, it's hard to know where problems originate.
Is it your application code, users, or the underlying systems? I've got five bucks on DNS, personally.
Why scroll through endless dashboards while dealing with alert floods,
going from tool to tool to tool that you employ,
guessing at which puzzle pieces matter?
Context switching and tool sprawl are slowly killing both your team and your business.
You should care more about one of those than the other.
Which one is up to you. Drop the separate pillars and enter a world of getting one
unified understanding of the one thing driving your business, production. With Honeycomb,
you guess less and know more. Try it for free at honeycomb.io slash screaming in the cloud.
Observability, it's more than just hipster monitoring.
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
Every once in a while when I start trying to find guests to chat with me
and basically suffer my various slings and arrows on this show,
I encounter something that I've never really had the opportunity to explore further.
And today's guest leads me in just such a direction.
Anaïs is an open source developer advocate at Aqua Security.
And when I was asking her whether or not she wanted to talk about various topics,
one of the first things she said was,
don't ask me much about AWS because I've never used it,
which, oh my god, Anaïs, thank you for joining me. You must be so very happy never to have dealt with
the morass of AWS. Yes, I'm trying my best to stay away from it.
Back when I got into the cloud space, for lack of a better term, AWS was sort of the only game in town unless you wanted to start really squinting hard at how you define cloud.
I mean, yes, I could have gone into Salesforce or something, but I was already sad and angry all the time.
These days, you can very much go all in on cloud.
In fact, you were a CNCF ambassador, if I'm not mistaken.
So you absolutely are in the infrastructure cloud space,
but you haven't dealt with AWS. That is just an interesting path. Have you found others who've
gone down that same road? Or are you sort of the first of a new breed? I think to find others who are in a similar position or have had a similar experience, you first have to talk about your own experience. And this is the first time, or maybe the second, that I'm openly saying it on something that will be posted to the internet. Before this, I tried to stay away from mentioning it at all, as best I can, because I'm at the point where I'm so far into my cloud-native Kubernetes journey that I feel like I should have had to deal with AWS by now, and it just didn't happen. And I'm doing my best, and I'm very successful in avoiding it.
So that's where I am. Yeah. We're sort of on opposite sides of a particular fence because
I spend entirely too much time being angry at AWS, but I've never really touched Kubernetes
in anger. I mean, I see it in a lot
of my customer accounts and I get annoyed at its data transfer bills and other things that it causes
in an economic sense. But as far as the care and feeding of a production cluster, back in my SRE
days, I had very old school architectures. It's, oh, this is an ancient system, just like grandma
used to make, where we had the entire web tier and then a job application server
tier and then a database at the end. And everyone knew where everything was. And then containers
came out of nowhere. And it seemed like, okay, this solves a bunch of problems and introduces
a whole bunch more. How do I orchestrate them? How do I ensure that they're healthy? And then,
ah, Kubernetes was the answer. And for a while, it seemed like no matter what the problem was,
Kubernetes was going to be the answer because people were evangelizing it pretty hard. And now I see it almost everywhere
that I turn. What's your journey been like? How did you get into the weeds of, you know what I
want to do when I grow up? That's right. I want to work on container orchestration systems. I have a
five-year-old. She has never once said that because I don't abuse my children by making them learn how clouds work. How did you wind up doing what you do? It's funny that you
mentioned that. So I'm actually of the generation of engineers who doesn't know anything else but
Kubernetes. So when you mentioned that you used to use something before, I don't really know what
that looks like. I know that you can still
deploy systems without Kubernetes, but I have no idea how. So my journey into the cloud-native
space started out of frustration from the previous industry that I was working at. So I was
working for several years as a developer advocate in the blockchain, open-source blockchain,
cryptocurrency space. And it's highly similar to all of the
cliches that you hear online and across the news and out of this frustration I was looking at
alternatives one of them was either going into game development into the gaming industry or
the cloud native space and infrastructure development and deployment. And yeah, that's
where I ended up. So at the end of 2020, I joined a startup in the cloud native space and started
my social media journey. One of the things that I found that Kubernetes solves for (and to be clear, Kubernetes really came into its own after I was doing a lot more advisory work and a lot more consulting-style activity rather than running my own environments) is that there's an entire universe of problems that the modern-day engineer never has to think about, due partially to cloud and also to Kubernetes, which is the idea of hardware or node failure.
I've had middle-of-the-night drives across Los Angeles in a panic, getting to the data center because the disk array on the primary database had degraded when a drive failed.
That doesn't happen anymore.
And clouds have mostly solved that.
It's, okay, drives fail,
but yeah, that's the problem
for some people who live in Virginia or Oregon.
I don't have to think about it myself.
But you do have to worry about instances failing.
What if the primary database instance dies? Well, when everything lives in a container,
and that container gets moved around in a stateless way between things, well, great.
You really only have to care instead about, okay, what if all of my instances die? Or
what if my code is really crappy? To which my question is generally, what do you mean if?
All of us write crappy code. That's the nature of the universe.
We open source only the small subset
that we are not actively humiliated by,
which is in a lot of ways what you're focusing on now.
Over at AquaSec, you are an advocate for open source. One of the most notable projects that comes out of that is Trivy, if I'm pronouncing that correctly.
Yeah, that's correct.
Yeah, so Trivy is our main open source project.
It's an all-in-one cloud-native security scanner.
And it's focused on misconfiguration issues, so it can help you to build more robust infrastructure definitions
and configurations. So ideally, a lot of the things that you just
mentioned won't happen,
but that obviously highly depends
on so many different factors in the cloud-native space,
but definitely misconfigurations are one of those areas that can easily go wrong.
And it's also not just that your data might cease to exist; the worst thing, or at least as bad, might be that it's completely exposed online. And there are databases of different exposures where you can see all kinds of data, of information, from health data to dating apps.
And I know, just based on that opening to an email, that the rest of that email is going to explain how security was not very important to you folks. And then it's the apology. Oops. Well, it's a crowded space. There are a number of different services out there.
The cloud providers themselves offer a bunch of these. A whole bunch of scareware vendors at the
security conferences do as well. Taking a quick glance at Trivy, one of the problems I see with
it from a cloud provider perspective is that I see nothing that it does that winds up costing
extra money on your cloud bill that you then have to pay to the cloud provider. So maybe they'll put a pull request in for that one of these days. But my sarcasm aside, what is it
that differentiates Trivy from a bunch of other offerings in various spaces?
So there are multiple factors. If we're looking from an enterprise perspective,
you could be using one of the in-house scanners from any of the cloud providers available,
depending which you're using. The thing is, they are not generally going to be the ones who have a dedicated research team
that provides the updates based on the vulnerabilities they find across the space.
So with an open source security scanner or from a dedicated company, you will likely have more up-to-date information in your scans.
Also, lots of different companies are using Trivy under the hood, ultimately, for their own scans. I can link a few, or you can also find them in the Trivy repository. But ultimately, a lot of companies rely on Trivy and other open source security scanners under the hood because they are from dedicated companies.
Now, the other part to Trivy, and why you might want to consider using it, is that in larger teams, you will have different people dealing with different components of your infrastructure and your deployments, and you could end up having to use multiple different security scanners for all your different components: from your container images, whether or not they are secure and whether or not they follow the best practices that you define, to your infrastructure-as-code configurations, to your running deployments inside of your cluster, for instance. So each of those different stages across your lifecycle, from development to runtime, will maybe even need a different security scanner. Or you could use one security scanner that does it all, so you could have more knowledge sharing within the team; you could have dedicated people who know how to use the tool and who can help out across the team, across the lifecycle, and so on. So that's one of the factors you might want to consider. Another thing is how mature the tool is, right? A lot of cloud providers, what they end up doing
is they provide you with a solution, but it's nicely decoupled from anything else that you're
using. And especially in the cloud-native space, you're heavily reliant on open source tools, such as for your observability stack. Coming from site reliability engineering myself, I love using metrics, Grafana, Prometheus, and anything open source, from Loki for accessing my logs to Grafana for the dashboards and all their integrations. I love that, and I want to use the same tools that I'm using for everything else for my security tools as well. I don't want to have the metrics for my security tools visualized in a different solution than my reliability metrics for my application, right?
Because that ultimately makes it more difficult
to correlate metrics.
So those are like some of the factors
that you might want to consider
when you're choosing a security scanner.
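For illustration, here is a minimal sketch of what "one scanner across the lifecycle" can look like, driving the Trivy CLI from Python. It assumes Trivy is already installed and on the PATH; the image name and the Terraform directory are placeholders, and the JSON report field names reflect recent Trivy output, so verify them against the version you run.

```python
import json
import subprocess


def run_trivy(subcommand: str, target: str) -> dict:
    """Run one Trivy subcommand with JSON output and return the parsed report."""
    completed = subprocess.run(
        ["trivy", subcommand, "--format", "json", "--quiet", target],
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(completed.stdout)


# One tool, several stages of the lifecycle (all targets below are placeholders):
image_report = run_trivy("image", "nginx:1.23")       # container image vulnerabilities
config_report = run_trivy("config", "./terraform")    # IaC misconfigurations
fs_report = run_trivy("fs", ".")                      # local project / dependency scan

for report in (image_report, config_report, fs_report):
    results = report.get("Results") or []
    print(report.get("ArtifactName"), "->", len(results), "result sets")
```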
The way that you talk about thinking about this
from the perspective of an SRE is,
I mean, this is definitely an artifact of where you come from and how you approach this space.
Because in my world, when you have 10 web servers, five application servers, and two database servers,
and you wind up with a problem in production, how do you fix this? Oh, it's easy. You log into one
of those nodes and poke around and start doing diagnostics in production. In a containerized world, you
generally can't do that. Or there's a problem on a container, and by the time you're aware of that,
that container hasn't existed for 20 minutes. So how do you wind up figuring out what happens?
And instrumenting for telemetry and metrics and observability, particularly at scale,
becomes way more important than it ever was for me.
I mean, my version of monitoring was always Nagios, which was the original call of duty that wakes you up at two in the morning when the hard drive fails.
The world has thankfully moved beyond that in a bunch of ways.
But it's not first nature for me.
It's always, oh, yeah, that's right.
We have a whole telemetry solution I can go digging into. My first attempt is always, oh, how do I get into this thing and poke it with a stick?
Sometimes that's helpful, but for modern applications, it really feels like it's not.
Totally. We're moving to an infrastructure, to an environment, where we can deploy and update our applications multiple times a day, right? And multiple times a day, we can introduce new security issues, or other things can go wrong. So just as much as I want to see all of the other failures, I want to see any security-related issues that might be deployed alongside those updates, at the same frequency, right?
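As a sketch of scanning at the same frequency you deploy, the snippet below shows a Trivy check that a CI job could run before each release, assuming the Trivy CLI is available in the build environment. The image reference is a placeholder; `--severity` and `--exit-code` are standard Trivy flags, but check their exact behavior against your installed version.

```python
import subprocess
import sys


def gate_on_vulnerabilities(image_ref: str) -> None:
    """Fail the pipeline if HIGH or CRITICAL vulnerabilities are found in the image."""
    completed = subprocess.run(
        [
            "trivy", "image",
            "--severity", "HIGH,CRITICAL",  # only report the serious findings
            "--exit-code", "1",             # non-zero exit code when findings exist
            image_ref,
        ]
    )
    if completed.returncode != 0:
        print(f"Blocking deploy: {image_ref} has HIGH/CRITICAL vulnerabilities", file=sys.stderr)
        sys.exit(completed.returncode)


if __name__ == "__main__":
    # Placeholder image reference; in CI this would be the image just built.
    gate_on_vulnerabilities("registry.example.com/my-app:latest")
```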
The problem that I see across all this stuff, though,
is there are a bunch of tools out there that people
install but then don't configure because, oh, well, I bought the tool, the end. I mean, I think
it was reported almost 10 years ago or so on the big Target breach that they wound up installing
some tool. I want to say FireEye, but please don't quote me on that. And it wound up firing off a whole bunch of alerts, and they figured it was just noise, so they turned it all off, and it turned out,
no, no, this was an actual breach in progress.
But people are so used to all the alarms screaming at them
that they don't dig into this.
I mean, one of the original security scanners was Nessus,
and I've seen a lot of Nessus reports
because for a long time,
what a lot of crappy consultancies would do
is they would white-label the output
of whatever it was that Nessus
said and deliver that in as their report. So you'd wind up with 700 pages of quote-unquote
security issues. And you'd have to flip through to figure out that, ah, this supports a somewhat
old SSL negotiation protocol. And you're focusing on that instead of the, oh, and by the way,
the primary database doesn't have a password set.
Like it winds up just obscuring it
because there is so much.
How does Trivy approach
avoiding the information overload problem?
That's a great question
because everybody's complaining
about vulnerability fatigue.
A lot of them are, for the first time, scanning their container images and workloads and seeing maybe even hundreds of vulnerabilities.
One of the things that can be done to counteract that right from the beginning
is investing your time into looking at the different flags and configurations
that you can set before actually deploying Trivy to, for example, your cluster. That's one part of it. The other part, as I mentioned earlier, is that you would use a security scanner at different parts of your deployment. So it's really about integrating scanning not just in your production environment once you've deployed everything, but using it earlier, and empowering engineers to actually use it on their machines. Now, they can decide to do it or not; it's not part of most people's job to do security scanning. But the more you scan as you move along, the more you can reduce the noise.
And then ultimately, when you deploy Trivy, for example, inside of your cluster, you can do a lot of configuration, such as scanning just for critical vulnerabilities, or only scanning for vulnerabilities that already have a fix available and ignoring everything else. Those are all factors and flags that you can pass to Trivy, for instance, that make it easier. Now, with Trivy, you won't have automated PRs and everything out of the box; you would have to set up the actions, or the ways to mitigate those vulnerabilities, manually, by yourself, with other tools, as well as integrating Trivy with your existing stack and so on. But then obviously, if you want to have something more automated, something that does more for you in the background, that's when you want to move to an enterprise solution and shift to something like the Aqua Security enterprise platform, which actually provides you with an automated way of mitigating vulnerabilities, where you don't have to know much about it: it just gives you the solution and provides you with a PR with the updates that you need in your infrastructure-as-code configurations to mitigate the vulnerability, and similar.
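To make those noise-reduction options concrete, here is a minimal sketch that applies them from a script: only critical findings, only issues with a fix available, and anything listed in a `.trivyignore` file skipped. The image name is a placeholder, and the flag spellings (`--severity`, `--ignore-unfixed`, `--ignorefile`) should be confirmed against your Trivy version.

```python
import subprocess

# Only CRITICAL severity, only issues that already have a fix available,
# and skip anything (e.g. accepted risks, one CVE ID per line) listed in .trivyignore.
subprocess.run(
    [
        "trivy", "image",
        "--severity", "CRITICAL",
        "--ignore-unfixed",
        "--ignorefile", ".trivyignore",
        "python:3.10-slim",  # placeholder image to scan
    ],
    check=False,  # report only; pair with --exit-code in CI if you want to fail builds
)
```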
I think that's probably a very fair answer,
because let's be serious,
when you're running a bank
or someone for whom security matters,
and yes, yes, I know,
security should matter for everyone,
but let's be serious.
I care a little bit less about the security impact
of, for example, I don't know,
my Twitter for pets nonsense
than I do a dating site
where people are not out about their orientation or whatnot.
Like there is a world of difference
between the security concerns there.
Oh no, you might be able to shitpost as me
if you compromise my lasttweetinaws.com Twitter client that I put out there for folks to use. Okay, great. That is not the end of the world
compared to other stuff. By the time you're talking about things that are critically important,
yeah, you want to spend money on this and you want to have an actual full-on security team.
But open source tools like this are terrific for folks who are just getting started or they're
building something for fun themselves and, as it turns out, don't have a full security budget for their weird late-night project. I think that there's a beautiful, I guess, spectrum as far as what level of investment you can make into security, and it's nice to see the innovation continuing to happen in the space. You just mentioned that dedicated security companies likely have a research team
that's deploying honeypots and seeing what happens to them, right?
Like how are attackers using different vulnerabilities and misconfigurations and what can be done
to mitigate them?
And that ultimately translates into the configurations of the open source tool as well.
So if you're using, for instance, a security scanner that doesn't have an enterprise company with a research team behind it, then you might have different input into the data of that security scanner than if you do.
So these are additional considerations that you might want to take when choosing a scanner.
And also, that obviously depends on what scanning you want to do, on the size of your company, and some other factors.
This episode is sponsored in part by our friends at EnterpriseDB.
EnterpriseDB has been powering enterprise applications with PostgreSQL for 15 years.
And now, EnterpriseDB has you covered wherever you deploy PostgreSQL,
on-premises, private cloud,
and they just announced a fully managed service on
AWS and Azure called Big Animal. All one word. Don't leave managing your database to your cloud
vendor because they're too busy launching another half dozen managed databases to focus on any one
of them that they didn't build themselves. Instead, work with the experts over at EnterpriseDB.
They can save you time and money. They can even help you migrate
legacy applications, including Oracle, to the cloud. To learn more, try Big Animal for free.
Go to biganimal.com slash snark and tell them Corey sent you. Something that I do find fairly
interesting is that you started off, as you say, doing DevRel in the open source blockchain world.
Then you went to work as an SRE and then went back to doing DevRel style the open source blockchain world. Then you went to work as an SRE
and then went back to doing DevRel style work.
What got you into SRE and what got you out of SRE
other than the obvious, having worked in SRE myself
and being unhappy all the time?
I kid, but what was it that got you into that space
and then out of it?
Yeah, yeah, that's a great question.
And it's, I guess, also what shaped my perspective
on different tools and the user experience
of different tools.
But ultimately, I first worked in the cloud native space for an enterprise tool as developer
advocate.
And I did not like the experience of working for a paid solution, doing developer advocacy
for it.
It felt wrong in a lot of ways.
A lot of times, you are required to do marketing work in those situations, and that kind of got me out of developer advocacy and into SRE work. Now, I was working partially, or mainly, as an SRE, and then on the side I was doing some presentations, some developer advocacy. However, that split didn't quite work either, and I realized that the value that I add to a project is really the way I convey information, which I can't do if I'm busy fixing the infrastructure, right? I can't convey information about how the infrastructure has been fixed as well as I can if I'm working with an engineering team and doing developer advocacy, only developer advocacy, within that engineering team. So how I ultimately got back into developer advocacy was simply by being reached out to by my manager at Aqua Security, Itay, telling me that he had a role available and asking if I wanted to join his team, and that it was open-source-focused. Given that I started my career working for several years in the open source space, working with engineers, contributing to open source tools.
It was kind of what I wanted to go back to,
what I really enjoyed doing.
And yeah, that's how that came about.
For me, I've found that I enjoy aspects
of the technology part,
but I find I enjoy talking to people way more.
I mean, for me, the gratifying moment that keeps me going, believe it or not, is not
necessarily helping giant companies spend slightly less money on another giant company.
It's watching people suddenly understand something that they didn't before.
It's watching the light go on in their eyes.
And that's been addictive to me for a long time.
I've also found that the best way for me to learn something
is to teach someone else.
I mean, the way I learned Git
was that I foolishly wound up proposing a talk, Terrible Ideas in Git, where we teach it by counterexample, four months before the talk.
And they accepted it.
Crap, I'd better learn enough Git
to give this talk effectively.
I don't recommend this because if you miss the deadline, I checked, they will not move the conference for you.
they will not move the conference for you.
But there really is something to be said for watching someone learn something
by way of teaching it to them.
That's actually a common strategy for a lot of developer advocates: making up a talk and then waiting to see whether or not it will get accepted. And once it gets accepted, that's when you start learning the tool and trying to figure it out. Now, it's not a good strategy, obviously, to do that, because people can easily tell that you rushed it for the conference. Sounds to me like you need to get better at bluffing. I kid, I kid. Don't bluff
your way through conference talks as a general rule.
It tends not to go well.
It's a bad idea.
It's a really bad idea.
And so I ultimately started learning
the technologies
or like the different tools
and projects in the cloud native space.
And there are lots, if you look at the CNCF landscape, right? But I did it by just trying to talk myself through them on my YouTube channel. So my early videos on my channel, it's just very much me, on the go, looking for the first time at somebody's documentation and not making any sense out of it.
It's surprising to me how far that gets you.
I guess I'm always reminded of that Tom Hanks movie from my childhood,
Big, where he wakes up, the kid wakes up as an
adult one day, goes to work, and bluffs his way into working at a toy company. He's in a management
meeting, and they're showing their new toy they're going to put out there, and he's,
I don't get it. Everyone looks at him like, how dare you say this? No, I don't get it. What's fun
about this? Because he's a kid. And he winds up getting promoted to vice president because,
wow, someone pointed out the obvious thing. And so often it feels like using a tool or a product,
be it open source or enterprise, it is clearly something different in my experience of it when I try to use this thing than the person who developed it. And very often it's, I don't see
the same things or think of the problem space the same way that the developers did.
But also, very often, and I don't mean to call anyone in particular out here, it's a symptom of a terrible user interface or user experience.
What you've just said, a lot of times it's just about saying the thing that nobody either dares to say or nobody has thought of before.
And that obviously gets you further, more easily, than repeating what other people have already mentioned, right? And a lot of what you see, a lot of times, in open source projects, but I think even more in closed-source enterprise organizations, is that people just repeat whatever everybody else is saying in the room, right? You don't have that as much in the open source world, because you have more input, or easier input, in public than you do otherwise. But it still happens. I mean, people are highly similar to each other: if you're contributing to the same project, you probably have a similar background, similar expertise, similar interests, and that will get you to think in a similar way. So if there's somebody like a high school student, maybe somebody who just graduated, somebody from a completely different industry, who's looking at those tools for the first time, it's like, okay, I know what I'm supposed to do, but I don't understand why I should use this tool for that. And just pointing that out gets you a response most of the time.
I use Twitter and you use YouTube. And obviously obviously I bias more for short, pithy comments
that are dripping in sarcasm, whereas in a long form video, you can talk a lot more about what
you're seeing. But the problem I have with bad user experience, particularly bad developer
experience, is that when it happens to me, and I know at a baseline level that I am reasonably competent in technical spaces,
but when I encounter a bad interface, my immediate instinctive reaction is, oh, I'm dumb,
and this thing is for smart people. And that is never, ever true, except maybe with quantum
computing. Great. Awesome. The Hello World tutorial for that stuff is a PhD from Berkeley.
Good luck if you can get into that. But here in the real world where the rest of us play, it's just a bad
developer experience. But my instinctive reaction is that there's stuff I don't know and I'm not
good enough to use this thing. And I get very upset about that. That's one of the things that you want to get right with any technical documentation: the first experience that anybody has with your tool, no matter their background, should be a success experience, right?
Like people should look at it, use maybe one command,
do one thing, one simple thing, and be like,
yeah, this makes sense, or like, this is fun to do, right?
Like this first positive interaction,
and it doesn't have to be complex,
and that's what many people, I think, get wrong,
that they try to show off how powerful a tool is.
Like, oh my God, you can do all those things.
It's so exciting, right?
But ultimately, if nobody can use it,
or if most of the people, 99% of the people
who try it for the first time have a bad experience,
it makes them feel uncomfortable or any negative emotion,
then you're really approaching it
from the wrong perspective, right?
That's very apt. I think that so much of whether people stick with something long enough to learn
it and find the sharp edges has to do with what their experience looks like. I mean, back when I
was more or less useless when it comes to anything that looked like programming, because I was a
sysadmin type, I started contributing to SaltStack. And what was amazing about that was Tom Hatch, the creator of the project,
had this pattern that he kept up for way too long, where whenever anyone submitted an issue,
he said, great, well, how about you fix it and send in a patch? Like, well, I'm not good
at programming. He's like, that's okay. No one is. Try it and we'll see. And he accepted every patch.
And then immediately you'd see another patch come in 10 minutes later that fixed the problems
in your patch.
But it was the most welcoming and encouraging experience.
And I'm not saying that's a good workflow for an open source maintainer, but he still
remains one of the best humans I know, just from that perspective alone.
That's amazing.
I think it's really about pointing out that there are different ways of doing open source
and there's no one way to go about it.
So it's really about,
I mean, it's about the community ultimately.
That's what it boils down to.
If you are dependent as an open source project
on the community,
then what is the best experience that you can give them?
If that's something that you want to and can invest in,
then yeah, definitely.
It's probably the best outcome for everybody.
I do have one more question, specifically around things that are
more timely. Taking a quick look at Trivy and recent features, it seems like you just now, now-ish, started supporting cloud scanning as well. Previously, it was effectively, oh, this scans configuration and containers. Okay, great. Now
you're targeting actually scanning cloud providers themselves. What does this change and what brought
you to this place as someone who very happily does not deal with AWS? Yeah, totally. So I just
started using AWS specifically to showcase this feature.
So if you look at the Aqua OpenSource YouTube channel, you will find several tutorials that
show you how to use that feature among others. Now, what I mentioned earlier in the podcast
already is that Trivy is really versatile. It allows you to scan different aspects of your
stack at different stages of your development lifecycle. And that's made possible because
Trivy is ultimately using different open source projects under the hood. For example, if you want to scan your infrastructure-as-code misconfigurations, it's using a tool called tfsec, specifically for Terraform, and then other tools for other kinds of security scanning. Now, we have, or had (it's probably going to be deprecated), a tool called CloudSploit in the Aqua open source project suite. The functionality that CloudSploit was providing is going to get converted to become part of Trivy.
So everything scanning-related is going to become part of Trivy. And really, once you understand how Trivy works, all of the CLI commands in Trivy have exactly the same structure, so it's really easy to go from scanning container images, to infrastructure as code, to generating SBOMs, to now scanning your cloud infrastructure as well. Trivy can scan any of your AWS services for misconfigurations, and it's basically using the AWS client under the hood to connect with the services and everything you have set up there, and then give you the list of misconfigurations. And once it has done the scan, you can then drill down further into the different aspects of your misconfigurations without performing the entire scan again, since you likely have lots and lots of resources and you wouldn't want to scan them all every time, right? Once something has been scanned, Trivy will know whether the resource changed or not; if it hasn't, it won't scan it again. That's the same way that in-cluster scanning works right now: once a container image has been scanned for vulnerabilities, it won't scan the same container image again, because that would just waste time.
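For a rough idea of that workflow, here is a minimal sketch of the `trivy aws` usage described above: a full scan of one region, then a drill-down into a single service that reuses the cached results. It assumes AWS credentials are available in the environment the same way the AWS CLI finds them; the region and service names are placeholders, and the flags shown should be checked against the Trivy release you run.

```python
import subprocess

# Full account scan for one region; results are cached so later invocations are fast.
subprocess.run(["trivy", "aws", "--region", "us-east-1"], check=False)

# Drill back down into a single service without re-scanning everything;
# this reuses the cached results from the scan above.
subprocess.run(["trivy", "aws", "--region", "us-east-1", "--service", "s3"], check=False)
```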
So yeah, do check it out. It's our most recent feature, and it's going to come out for the other cloud providers out there as well. But we're starting with AWS, and this kind of forced me to finally look at it for the sake of it. But I'm not going to be happy.
No, I don't think anyone is. Every time I see on a resume
that someone says, oh, I'm an expert in AWS, it's, no, you're not. They have 400 some odd services
now. We have crossed the point long ago where I can very convincingly talk about AWS services that
do not exist to Amazonians and not get called out for it. Because who in the world knows what they run?
And half of their services sound like something I made up to be funny,
but they're very real.
It's wild to me that it is as sprawling as it is,
and apparently continues to work as a viable business.
But no one knows all of it,
and everyone feels confused, lost, and overwhelmed
every time they look at the AWS console.
This has been my entire
career in life for the last six years, and I still feel that way, so I'm sure everyone else does too.
And this is how misconfigurations happen, right? You're confused by what you're actually supposed
to do and how you're supposed to do it. And that's the case, for example, with all the access rights in Google Cloud, something that I'm very familiar with: it completely overwhelms you, you get super frustrated by it, and you don't even know what you're giving access to. It's like if you've ever had to configure Discord user roles; it's a similar disaster. You will not know which user has access to what. They've kind of changed it and tried to improve it over the past year, but it's a similar issue that you face in cloud providers, just on a much larger scale, not just on one chat channel.
I think that is probably a fair place to leave it.
I really want to thank you
for spending as much time with me as you have
talking about the trials and travails of,
well, this industry, for lack of a better term.
If people want to learn more, where's the best place to find you?
So I have a weekly DevOps newsletter on my blog, which is AnaisURL, like how you spell URL,
and then .com: AnaisURL.com. That's where I have all the links to my different channels,
to all of the resources that I publish,
where you can find out more as well.
So that's probably the best place.
Yeah.
And we will, of course, put a link to that into the show notes.
I really want to thank you for being as generous with your time as you have been.
Thank you.
Thank you for having me.
It was great.
Anaïs, open source developer advocate at Aqua Security.
I'm cloud economist Corey Quinn, and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice.
Whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice,
along with an angry, insulting comment that I will never see
because it's buried under a whole bunch of minor or false positive vulnerability reports.
If your AWS bill keeps rising and your blood pressure is doing the same,
then you need the Duckbill Group.
We help companies fix their AWS bill by making it smaller and less horrifying.
The Duckbill Group works for you, not AWS.
We tailor recommendations to your business, and we get to the point.
Visit duckbillgroup.com to get started.
This has been a HumblePod production.
Stay humble.