Screaming in the Cloud - Snyk and the Complex World of Vulnerability Intelligence with Clinton Herget
Episode Date: November 17, 2022

About Clinton

Clinton Herget is Field CTO at Snyk, the leader in Developer Security. He focuses on helping Snyk's strategic customers on their journey to DevSecOps maturity. A seasoned technologist, Clinton spent his 20-year career prior to Snyk as a web software developer, DevOps consultant, cloud solutions architect, and engineering director. Clinton is passionate about empowering software engineers to do their best work in the chaotic cloud-native world, and is a frequent conference speaker, developer advocate, and technical thought leader.

Links Referenced:
Snyk: https://snyk.io/
duckbillgroup.com: https://duckbillgroup.com
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, Chief Cloud Economist at the
Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
This episode is brought to us in part by our friends at Pinecone.
They believe that all anyone really wants is to be understood,
and that includes your users.
AI models combined with the Pinecone Vector Database
let your applications understand
and act on what your users want without making them spell it out. Make your search application
find results by meaning instead of just keywords. Your personalization system makes picks based on
relevance instead of just tags. And your security applications match threats by resemblance
instead of just regular expressions.
Pinecone provides the cloud infrastructure that makes this easy, fast, and scalable.
Thanks to my friends at Pinecone for sponsoring this episode.
Visit pinecone.io to understand more.
This episode is brought to you in part by our friends at Veeam.
Do you care about backups?
Of course you don't.
Nobody cares about backups.
Stop lying to yourselves.
You care about restores,
usually right after you didn't care enough about backups.
If you're tired of the vulnerabilities,
costs, and slow recoveries
when using snapshots to restore your data,
assuming that you even have them at all,
living in AWS land,
there's an alternative for you.
Check out Veeam. That's V-E-E-A-M
for secure, zero-fuss AWS backup that won't leave you high and dry when it's time to restore.
Stop taking chances with your data. Talk to Veeam. My thanks to them for sponsoring this
ridiculous podcast. Welcome to Screaming in the Cloud. I'm Corey Quinn. One of the fun
things about establishing traditions is that the first time you do it, you don't really know that
that's what's happening. Almost exactly a year ago, I sat down for a previous promoted guest
episode, much like this one, with Clinton Herget at Snyk, or Cynic, however you want to pronounce that.
He is apparently a scarecrow of some sort, because when last we spoke, he was a principal
solutions engineer, but like any good scarecrow, he was outstanding in his field, and now, as a
result, is a field CTO. Clinton, thanks for coming back, and let me start by congratulating you on
the promotion, or consoling you, depending upon how good or bad it is.
You know, Corey, a little bit of column A, a little bit of column B, but very glad to
be here again, and frankly, I think it's because you insist on mispronouncing Snyk as Cynic,
and so you get me again.
Yeah, you could add a couple of new letters to it and just call the company Syn-Ack.
Now it's a hard pivot to a networking company, so there's always options.
I acknowledge what you did there, Corey.
I like that quite a bit. I wasn't sure you'd get it.
I'm a nerd going way, way back. So we'll have to go pretty deep in the stack for you to stun me on
some of this stuff.
As we did with you, I wasn't sure you'd get it. See, that one sailed right past you and I win.
Chalk another one up for me in the networking pun wars. Great. We'll loop back for that later.
I don't even know where I am right now. So let's go back to a question that one would think that I'd already established
a year ago, but I have the attention span of basically a goldfish. Let's not kid ourselves.
So as I'm visiting the Snyk website, I find that it says different words than it did a year ago,
which is generally a positive sign. When nothing's been updated, including the copyright date, things are going either really well
or really badly, one wonders. But no, now you're talking about Snyk Cloud. You're talking about
several other offerings as well. And my understanding of what it is you folks do
no longer appears to be completely accurate. So let me be direct. What the hell do you folks do
over there?
It's a really great question. Glad you asked me on a year later to answer it. I would say at a very high level, what we do hasn't changed. However, I think the industry has certainly come
a long way in the past couple of years, and our job is to adapt to that. Snyk, again,
pronounced like a pair of sneakers or sneaking around. It's a developer security platform. So
we focus on
enabling the people who build applications, which as of today means modern applications built in the
cloud, to have better visibility and ultimately a better chance of mitigating the risk that goes
into those applications when it matters most, which is actually in their workflow. Now, you're
exactly right. Things have certainly expanded in that remit because the job of a software engineer is very different, I think, this year than it even was last year. And that's continually evolving over time. I, as a developer now, am doing a lot more than I was doing a few years ago. And one of the things I'm doing is building infrastructure in the cloud. I'm writing YAML files. I'm writing CloudFormation templates to deploy things out to AWS. And what happens in the cloud has a lot to do with the risk to my organization associated with those
applications that I'm building. So I'd love to talk a little bit more about why we decided to
make that move, but I don't think that represents a watering down of what we're trying to do at
Snyk. I think it recognizes that that developer security vision fundamentally can't exist without
some understanding of what's
happening in the cloud. One of the things that always scares me is, and sets the spidey sense
tingling, is when I see a company who has a product and I'm familiar-ish with what they do,
and then they take their product name and slap the word cloud at the end, which is almost always
coded to, okay, so we took the thing that we sold in boxes and data centers, and now we're making a
shitty hosted version available
because it turns out you rubes will absolutely
pay a subscription for it.
Yeah, I don't get the sense that that at all is
what you're doing. In fact, I don't believe that
you're offering a hosted
managed service at the moment, are you?
No, the cloud part, that fundamentally
refers to a
new product, an offering that looks
at the security or potentially the
risks being introduced into cloud infrastructure by now the engineers who are doing it, who are
writing infrastructure as code. We had previously had an infrastructure as code security product,
and that served alongside our static analysis tool, which is Snyk Code, our open source tool,
our container scanner, recognizing that the kinds of vulnerabilities
you can potentially introduce in writing cloud infrastructure are not only bad to the organization
on their own. I mean, nobody wants to create an S3 bucket that's wide open to the world,
but also those misconfigurations can increase the blast radius of other kinds of vulnerabilities in
the stack. So I think what it does is it
recognizes that, as you and I think your listeners well know, Corey, there's no such thing as the
cloud, right? The cloud is just a bunch of fancy software designed to abstract away from the fact
that you're running stuff on somebody else's computer, right? Unfortunately, in this case,
the fact that you're calling it Snyk Cloud does not mean that you're doing what so many other
companies in that same space do and would have led to a really short interview because I have
no faith that it's the right path forward, especially for
you folks, where it's, oh, you want to be secure. You've got to host your stuff on our stuff
instead. That's why we called it cloud. That's the direction that I've seen a lot of folks try
and pivot in. And I always find it disastrous. It's, yeah, well, at Snyk, if we run your code,
your shitty applications here in our environment, it's going to be safer than if you run it yourself on something untested like AWS. Yeah, those stories hold absolutely no water.
And may I just say I'm gratified that's not what you're doing.
Absolutely not. No, I would say we have no interest in running anyone's applications.
We do want to scan them though, right? We do want to give the developers insight into
the potential misconfigurations, the risks, the vulnerabilities that you're introducing. What sets Snyk apart, I think, from others in that application and
security testing space is we focus on the experience of the developer rather than just
being another tool that runs and generates a bunch of PDFs and then throws them back to say,
here's everything you did wrong. We want to say to developers, here's what you could do better.
Here's how that default in a CloudFormation template that leads to your bucket being wide open on the internet
could be changed, right? Here's the remediation that you could introduce. And if we do that at
the right moment, which is inside that developer workflow, inside the IDE on their local machine
before that gets deployed, there's a much greater chance that remediation is going to be implemented
and it's going to happen much more cheaply, right? Because you no longer have to do the round trip
all the way out to the cloud and back. So the cloud part of it fundamentally means completing
that story, recognizing that once things do get deployed, there's a lot of valuable context that's
happening out there that a developer can really take advantage of. They can say, wait a minute, not only do I have a Log4Shell vulnerability,
right, in one of my open source dependencies, but that artifact,
that application is actually getting deployed to a VPC that has ingress from the internet, right?
So not only do I have remote code execution in my application,
but it's being put in
an enclave that actually allows it to be exploited.
You can only know that if you're actually
looking at what's really happening in the Cloud.
So not only does Snyk Cloud allow us to provide
an additional layer of security by looking at what's
misconfigured in that Cloud environment and help
your developers make remediations by saying,
here's the actual IaC file that caused that infrastructure to come into existence.
But we can also say, here's how that affects the risk of other kinds of vulnerabilities
at different layers in the stack, right?
Because it's all software.
It's all connected.
Very rarely does a vulnerability translate one-to-one into risk, right?
They're compound because modern software
is compound. And I think what developers lack is the tooling that fits into their workflow,
that understands what it means to be a software engineer and actually helps them make better
choices rather than punishing them after the fact for guessing and making bad ones.
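As a concrete illustration of the IaC check described here, the sketch below flags an S3 bucket whose CloudFormation definition leaves public access unblocked and suggests the fix inline. The CloudFormation property names are real; the scanner logic is a toy, not Snyk's implementation.

```python
# Illustrative only: a toy IaC scanner, not Snyk's implementation.
SECURE_PUBLIC_ACCESS_BLOCK = {
    "BlockPublicAcls": True,
    "BlockPublicPolicy": True,
    "IgnorePublicAcls": True,
    "RestrictPublicBuckets": True,
}

def scan_template(template: dict) -> list[str]:
    """Flag S3 buckets whose template leaves public access unblocked."""
    findings = []
    for name, resource in template.get("Resources", {}).items():
        if resource.get("Type") != "AWS::S3::Bucket":
            continue
        block = resource.get("Properties", {}).get("PublicAccessBlockConfiguration", {})
        missing = [k for k in SECURE_PUBLIC_ACCESS_BLOCK if block.get(k) is not True]
        if missing:
            findings.append(
                f"{name}: potentially public bucket; set PublicAccessBlockConfiguration "
                f"keys to true: {', '.join(missing)}"
            )
    return findings

# The insecure default in question: a bucket with no public-access block at all.
template = {"Resources": {"LogsBucket": {"Type": "AWS::S3::Bucket", "Properties": {}}}}
print("\n".join(scan_template(template)))
```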
That sounds awesome at a very high level. It is very aligned with how executives and
decision makers think about a lot of these things. Let's get down to brass tacks for a second.
Assume that I am the type of developer that I am in real life, by which I mean shitty.
What am I going to wind up attempting to do that Snyk will flag and, in other words,
protect me from myself and warn me that I'm about to commit something dumb?
First of all, I would say, look, there's no such thing as a non-shitty developer, right? And I
built software for 20 years, and I decided that's really hard. What's a lot easier is talking about
building software for a living. So that's what I do now. But fundamentally, the reason I'm at
Snyk is I want to help people who are in the kinds of jobs that I had for a very long time,
which is to say you have a tremendous amount of anxiety because you recognize that the success of the organization rests on your shoulders.
And you're making hundreds, if not thousands of decisions every day without the right context to understand fully how the results of that decision is going to affect the organization that you work for.
So I think every developer in the world has to deal with this constant cognitive dissonance of saying, I don't know that this is right, but I have to do it anyway because I need to clear that
ticket because that release needs to get into production. And it becomes really easy to
short-sightedly do things like pull an open source dependency without checking whether it has any CVEs associated with it,
because that's the version that's easiest to implement
with your code that already exists.
So that's one piece.
Snyk Open Source is designed to traverse that entire tree
of dependencies in open source all the way down,
all the hundreds and thousands of packages that you're pulling in
to say, not only here's a vulnerability that you should really
know is going to end up in your application when it's built,
but also here's what you can do about it.
Here's the upgrade you can make,
here's the minimum viable change
that actually gets you out of this problem,
and to do so when it's in the right context,
which is in as you're making that decision for the first time,
inside your developer environment.
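As a picture of that traversal, here is a toy sketch: walk the transitive tree, match each package against an advisory list, and report the smallest upgrade that clears the finding. Every package, version, and advisory below is made up, and a real resolver matches version ranges, not exact pins.

```python
# Hypothetical dependency graph: package -> direct dependencies.
DEPENDENCIES = {
    ("my-app", "1.0.0"): [("web-framework", "2.1.0")],
    ("web-framework", "2.1.0"): [("template-engine", "0.9.2")],
    ("template-engine", "0.9.2"): [],
}

# Hypothetical advisories: package -> (vulnerable version, minimum fixed version).
ADVISORIES = {"template-engine": ("0.9.2", "0.9.3")}

def audit(root):
    """Walk the whole tree, all the way down, and report minimum viable upgrades."""
    stack, seen = [root], set()
    while stack:
        pkg = stack.pop()
        if pkg in seen:
            continue
        seen.add(pkg)
        name, version = pkg
        if name in ADVISORIES and ADVISORIES[name][0] == version:
            print(f"{name}@{version} is vulnerable; minimum viable upgrade: {ADVISORIES[name][1]}")
        stack.extend(DEPENDENCIES.get(pkg, []))

audit(("my-app", "1.0.0"))
```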
That also applies to things like container vulnerabilities, right?
I have even less visibility
into what's happening inside a container
than I do inside my application.
Because I know, say, I'm using an Ubuntu
or a Red Hat base image.
I have no idea what are all the Linux packages
that are on it, let alone
what are the vulnerabilities associated with them, right?
So being able to detect,
I've got a version of OpenSSL 3
that has a potentially serious vulnerability
associated with it
before I've actually deployed that container
out into the cloud
very much helps me as a developer
because I'm limiting the rework
or the refactoring I would have to do
by otherwise assuming I'm making a safe choice
or guessing at it
and then only finding out
after I've written a bunch more code that relies on that decision that I have to go back and change
it and then rewrite all of the things that I wrote on top of it, right? So it's the identifying the
layer in the stack where that risk could be introduced and then also seeing how it's affected
by all of those other layers because modern software is inherently complex. And that
complexity is what drives both the risk associated with it and also things like efficiency, which I
know your audience is, for good reason, very concerned about.
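The container-layer piece of that reduces to comparing the packages baked into a base image against known-bad version ranges before the image ever ships. A rough sketch, with a hypothetical package list and the recent OpenSSL 3 advisory (CVE-2022-3602 and CVE-2022-3786, fixed in 3.0.7) as the example:

```python
# Hypothetical image contents; a real scanner reads the image's package database.
IMAGE_PACKAGES = {"openssl": "3.0.6", "curl": "7.81.0"}

def openssl3_vulnerable(version: str) -> bool:
    """CVE-2022-3602/3786 affect OpenSSL 3.0.0 through 3.0.6; 3.0.7 is fixed."""
    parts = version.split(".")
    return parts[:2] == ["3", "0"] and int(parts[2]) < 7

for name, version in IMAGE_PACKAGES.items():
    if name == "openssl" and openssl3_vulnerable(version):
        print(f"{name} {version} in base image: upgrade to 3.0.7 before this container ships")
```

I'm going to challenge you on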
an aspect of this because on the tin, the way you describe it, it sounds like, oh, I already have
something that does that. It's the GitHub Dependabot story where it winds up sending me a litany of complaints every week. And we are talking,
if I did nothing other than read this email in that day, that would be a tremendously efficient
processing of that entire thing. Because so much of it is stuff that is ancient and archived and the specific aspects
of the vulnerabilities are just not relevant. And you talk about the OpenSSL 3 issues that
just recently came out. I have no doubt that somewhere in the most recent email I've gotten
from that thing, it's buried two-thirds of the way down. Like, all the complaints, like,
the dishwasher isn't loaded, you forgot to take the trash out, that baby needs a change,
the kitchen is on fire, and the vacuuming in the... Wait, wait, what was that thing about the kitchen?
Seems like one of those things is not like the others, and it just gets lost in the noise.
Now, I will admit to putting my thumb a little bit on the scale here, because I've used Snyk
before myself, and I know that you don't do that. How do you avoid that trap? Great question. And I think
really the key to the story here is developers need to be able to prioritize. And in order to
prioritize effectively, you need to understand the context of what happens to that application
after it gets deployed. And so this is a key part of why getting the data out of the cloud and
bringing it back into the code is so important.
For example, take an OpenSSL vulnerability.
Do you have it on a container image you're using?
That's question number one.
Question two is, is there actually a way
that that code can be accessed from the outside?
Is it included or is it called?
Is the method activated by
some other package that you have running on that container?
Is that container image actually used in
a production deployment or does it just go
sit in a registry and no one ever touches it?
What are the conditions required
to make that vulnerability exploitable?
You look at something like Spring Shell, for example.
Yes, you need a certain version of
Spring Beans in a jar file somewhere, but you also need to be running a certain version of Tomcat,
and you need to be packaging those jars inside a war in a certain way.
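Spelled out as code, that compound condition might look like the sketch below, with hard-coded booleans standing in for what real scans of the jar, the server, and the packaging would report. It is an illustration of the idea, not how Snyk evaluates it.

```python
# Booleans are stand-ins for what real scans would report.
def spring_shell_exploitable(ctx: dict) -> bool:
    """Only flag when every precondition in this deployment actually holds."""
    return all((
        ctx["vulnerable_spring_beans_present"],  # the affected jar is on the classpath
        ctx["running_susceptible_tomcat"],       # served by an affected Tomcat version
        ctx["packaged_as_war"],                  # WAR deployment, not a fat jar
    ))

ctx = {
    "vulnerable_spring_beans_present": True,
    "running_susceptible_tomcat": True,
    "packaged_as_war": False,  # fat jar: the vulnerable code is there, but the exploit path isn't
}
print("prioritize" if spring_shell_exploitable(ctx) else "deprioritize: preconditions not all met")
```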
Exactly. I have a whole bunch of Lambda functions that provide the pipeline system that I use to
build my newsletter every week, and I get screaming concerns about issues in, for example,
a version of the Markdown parser that I've subverted. Yeah, sure, I get that on some level if I were just giving it random untrusted input
from the internet and random ad hoc users, but I'm not. It's just me when I write things for
that particular Lambda function, and I'm not going to be actively attempting to subvert the thing that
I built myself and no one else should have access to. And looking through the details of some of these things, it doesn't even apply to the way that I'm calling the
libraries. So it's just noise, for lack of a better term. It is not something that basically
ever needs to be adjusted or fixed. Exactly. And I think cutting through that noise is so key to
creating developer trust in any kind of tool that's scanning an asset and providing you
what in theory are a list of actionable steps, right? I need to be able to understand
what is the thing, first of all. There's a lot of tools that do that, right? And we tend to
mock them by saying things like, oh, it's just another PDF generator. It's just another thousand
pages that you're never going to read. So getting the information in the right place is a big part
of it. But filtering out all of the noise by saying we looked at not just one layer of the stack, but multiple layers, right?
We know that you're using this open source dependency, and we also know that the method that contains the vulnerability is actively called by your application in your first party code because we ran our static analysis tool against that.
Furthermore, we know because we looked at your cloud context,
we connected to your AWS API,
we're big partners with AWS
and very proud of that relationship.
But we can tell that there's inbound internet access
available to that service, right?
So you start to build a compound case
that maybe this is something
that should be prioritized, right?
Because there's a way in to the asset from the outside world. There's a way into the vulnerable
function through the labyrinthine spaghetti of my code to get there. And the conditions required to
exploit it actually exist in the wild. But you can't just run a single tool. You can't just run
Dependabot to get that prioritization.
You actually have to look at the entire
holistic application context,
which includes not just your dependencies,
but what's happening in the container,
what's happening in your first-party, your proprietary code,
what's happening in your IAC,
and I think most importantly,
for modern applications,
what's actually happening in the Cloud once it gets deployed.
That's the holy grail of completing that loop to bring the right context back from the cloud into
code to understand what change needs to be made and where, and most importantly, why, because it's a
priority that actually translates into organizational risk, to get a developer to pay attention, right?
I mean, that is the key to, I think, any security concerns.
How do you get engineering mindshare and trust that this is actually what you should be paying attention to
and not a bunch of rework
that doesn't actually make your software more secure?
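One way to picture that compound case: a priority built from evidence at each layer, so that no single green checkmark, and no single red one, decides the outcome. The signals and weights below are invented for illustration and are not Snyk's scoring model.

```python
def priority(finding: dict) -> str:
    """Combine per-layer evidence into one priority label."""
    score = 0
    if finding["vulnerable_dependency"]:              # SCA: the CVE is in the tree
        score += 1
    if finding["method_reachable_from_first_party"]:  # SAST: your code actually calls it
        score += 2
    if finding["internet_ingress_to_service"]:        # cloud context: attackers can reach it
        score += 3
    labels = ["ignore", "backlog", "plan", "plan", "fix now"]
    return labels[min(score, 4)]

log4shell_in_public_vpc = {
    "vulnerable_dependency": True,
    "method_reachable_from_first_party": True,
    "internet_ingress_to_service": True,
}
print(priority(log4shell_in_public_vpc))  # -> "fix now"
```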
One of the challenges that I see across the board
is that, well, let's back up a bit here.
I have, in previous episodes,
talked in some depth about my position that when it comes
to the security of various cloud providers, Google is number one and AWS is number two.
Azure is a distant third because it's busy figuring out which crayons taste the best. I don't know.
But the reason is not because of any inherent attribute of their security models, but rather
that Google massively simplifies an awful lot of what happens.
It automatically assumes that resources in the same project should be able to talk to one another,
so I don't have to painstakingly configure that. In AWS land, all of this must be done explicitly.
No one has time for that, so we overscope permissions massively and never go back and
rein them in. It's a configuration vulnerability more than an underlying inherent weakness of the platform,
because complexity is the enemy of security in many respects. If you can't fit it all in your head to reason about it, how can you understand the security ramifications of it? AWS offers a
tremendous number of security services. Many of them, when taken in the sum totality of their
pricing, cost more than any breach they could be expected to prevent.
Adding more stuff that adds more complexity
in the form of Snyk sounds like it's the exact opposite
of what I would want to do.
Change my mind.
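To make the overscoping point concrete before Clinton responds: the first policy below is what "no one has time for that" tends to produce, and the second is the scoped version that rarely gets written after the fact. Both are ordinary IAM policy documents rendered as Python dicts; the bucket name is hypothetical.

```python
# What overscoping looks like: everything on everything.
OVERSCOPED_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}],
}

# The scoped version nobody circles back to write.
SCOPED_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::my-app-bucket/*",  # hypothetical bucket
    }],
}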
I would love to.
I would say, fundamentally, I think you and I,
and by I, I mean Snyk and, you know,
Corey Quinn Enterprises Limited,
I think we fundamentally have the same enemy here,
which is the cyclomatic complexity of software, which is how many different pathways do the bits
have to travel down to reach the same endpoint, the same goal? The more pathways there are,
the more risk is introduced into your software, and the more inefficiency is introduced.
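For the record, cyclomatic complexity can be counted the standard way: one path through the code, plus one for each branch construct. A minimal sketch using Python's own ast module (real tools are considerably more thorough):

```python
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """One base path, plus one per branch construct found in the AST."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

print(cyclomatic_complexity("""
def handler(event):
    if event.get("type") == "create":
        for item in event["items"]:
            if item.get("valid"):
                save(item)
    return "ok"
"""))  # -> 4: one base path plus three branch points
```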
Now, I know you love to talk about how many different ways are there to run a container on AWS, right? It's either 30 or 400
or 11 million. I think you're exactly right that that complexity, it is great for, first of all,
selling cloud resources, but also I think for innovating, right? For building new kinds of
technology on top of that platform. The cost that comes along with that is a lack of visibility.
And I think we are just now, as we approach the end of 2022 here,
coming to recognize that fundamentally the complexity of modern software
is beyond the ability of a single engineer to understand.
And that is really important from a security perspective,
from a cost control
perspective, especially because software now creates its own infrastructure, right? You can't
just now secure the artifact and secure the perimeter that it gets deployed into and say,
I've done my job, right? Nobody can bridge the perimeter and there's no vulnerabilities in the
thing because we scanned it. And that thing is immutable forever because it's pets, not cattle.
Where I think the complexity
story comes in is to recognize like, hey, I'm deploying this based on a Quick Start or CloudFormation
template that is making certain assumptions that make my job easier, right?
In a very similar way that choosing an open source dependency makes my job easier as a developer
because I don't have to write all of that code myself. But what it does mean is I lack the
visibility into,
well, hold on, how many different pathways are there
for getting things done inside this dependency?
How many other dependencies are brought on board
in the same way that when I create an EKS cluster,
for example, from a CloudFormation template,
what is it creating in the background?
How many VPCs are involved?
What are the subnets, right?
How are they connected to each other?
Where are the potential ingress points?
So I think fundamentally getting visibility
into that complexity is step number one,
but understanding those pathways
and how they could potentially translate into risk
is critically important.
But that prioritization has to involve
looking at the software holistically
and not just
at individual layers, right?
I think we lose when we say we ran a static analysis tool and an open source dependency
scanner and a container scanner and a cloud config checker, and they all came up green.
Therefore the software doesn't have any risks, right?
That ignores the fundamental complexity in that all of these layers are connected together. And from an adversary's perspective, if my job is to go in and exploit software that's
hosted in the cloud, I absolutely do not see the application model that way.
I see it as it is inherently complex.
And that's a good thing for me because it means I can rely on the fact that those engineers
had tremendous anxiety, were making a lot of guesses and crossing their fingers and hoping something would work
and not be exploitable by me, right?
So the only way I think we get around that is to recognize that our engineers are critical
stakeholders in that security process.
And you fundamentally lack that visibility if you don't do your scanning until after
the fact. If you take that traditional
audit-based approach that assumes a very waterfall legacy approach to building software and recognize
that, hey, we're all on this infinite loop racetrack now. We're deploying every three and a half
seconds. Everything's automated. It's all built at scale. But the ability to do that inherently
implies all of this additional complexity that ultimately will, you know, end up haunting me, right?
If I don't do anything about it to make my engineers stakeholders in, you know, what actually gets deployed and what risks it brings on board.
This episode is sponsored in part by our friends at Uptycs.
Attackers don't think in silos.
So why would you have siloed solutions protecting cloud, containers, and laptops distinctly?
Meet Uptycs, the first unified solution that prioritizes risk across your modern attack surface, all from a single platform, UI, and data model.
Stop by booth 3352 at AWS re:Invent in Las Vegas to see for yourself and visit uptycs.com. That's
U-P-T-Y-C-S dot com. My thanks to them for sponsoring my ridiculous nonsense.
When I wind up hearing you talk about this, I'm going to divert us a little bit because you're
dancing around something that it took me a long time to learn.
When I first started fixing AWS bills for a living, I thought that it would be mostly math, by which I mean arithmetic.
That's the great secret of cloud economics.
It's addition, subtraction, and occasionally multiplication and division.
Now, that turns out it's much more psychology than it is math.
You're talking in many aspects about, I guess, what I'd call the psychology of a modern cloud engineer
and how they think about these things. It's not a technology problem, it's a people problem, isn't it?
Oh, absolutely. I think it's the people that create the technology. And I think the longer
you persist in what we would call the legacy viewpoint, right? Not recognizing what the cloud is, which is
fundamentally just software all the way down, right? It is abstraction layers that allow you
to ignore the fact that you're running stuff on somebody else's computer. Once you recognize that,
you realize, oh, if it's all software, then the problems that it introduces are software problems
that need software solutions, which means that it must involve activity by the
people who write software, right? So now that you're in that developer world, it unlocks, I
think, a lot of potential to say, well, why don't developers tend to trust the security tools
they've been provided with, right? I think a lot of it comes down to the question you asked earlier
in terms of the noise, the lack of understanding of how those pieces are connected together, or the lack
of context. We're not even, frankly, caring about looking beyond the single point solution,
the problem that solution was designed to solve. But more importantly than that, not recognizing
what it's like to build modern software, right? All of the decisions that have to be made on a
daily basis with very limited information, right? I might not even understand where that container image I'm building is going in the universe, let alone what's being built on top of it and how much critical customer data is being touched by the database that that container now has the credentials to access, right? So I think in order to change anything,
we have to back way up and say, problems in the cloud are software problems, and we have to treat
them that way. Because if we don't, if we continue to represent the cloud as some evolution of the
old environment, where you just have this perimeter that's pre-existing infrastructure
that you're deploying things onto, and there's a guy with a neckbeard in the basement who is unplugging cables from a switch and
plugging them back in, and that's how networking problems are solved. I think you miss the idea that
all of these abstraction layers introduce the very complexity that needs to be solved back in the
build space. But that requires visibility into what actually happens when it gets deployed.
The way I tend to think of it is there's this firewall in place.
Everybody wants to say, you know, we're doing DevOps, we're doing DevSecOps, right?
And that's a lie 100% of the time, right?
No one is actually, I think, adhering completely to those principles.
That's why one of the core tenets of ClickOps is lying about doing anything in the console.
Absolutely, right?
And that's why shadow IT becomes more and more prevalent
the deeper you get into modern development,
not less and less prevalent,
because it's fundamentally hard to recognize
the entirety of the potential implications, right,
of a decision that you're making.
So it's a lot easier to just go in the console and say,
okay, I'm going to deploy one EC2 to do this.
I'm going to get it right at some point.
And that's why every application
that's ever been produced by human hands
has a comment in it that says something like,
I don't know why this works, but it does.
Please don't change it.
And then three years later,
because that developer has moved on to another job,
someone else comes along and looks at that comment and says,
that should really work.
I'm going to change it.
And they do, and everything fails,
and they have to go back and fix it the original way, and then add another comment saying, hey, this person above me,
they were right. Please don't change this line. I think every engineer listening right now knows
exactly where that weak spot is in the applications that they've written, and they're terrified of
that. And I think any tool that's designed to help developers fundamentally has to get into the mindset, get
into the psychology of what that is like, of not fundamentally being able to understand what those
applications are doing all of the time, but having to write code against them anyway, right? And
that's what leads to, I think, the fear that you're going to get woken up because your pager is going
to go off at 3 a.m. because the building is literally on fire and it's because of code that you wrote.
We have to solve that problem and it has to be those people whose psychology we get into to
understand how are you working and how can we make your life better, right? And I really do think it
comes with that, the noise reduction, the understanding of complexity, and really just
being humble and saying like, we get that this job is really hard and that the only way it gets better is to begin
admitting that to each other. I really wish that there were a better way to articulate a lot of
these things. This is the reason that I started doing a security newsletter. It's because cost
and security are deeply aligned in a few ways. One of them is that you care about them a lot right after you failed to care about them sufficiently. But the other is
that you've got to build guardrails in such a way that doing the right thing is easier than doing it
the wrong way, or you're never going to gain any traction. I think that's absolutely right. And you
used a key term there, which is guardrails. And I think that's where, in their heart of hearts, that's where every security professional wants to be,
right? They want to be defining policy. They want to be understanding the risk posture of
the organization and nudging it in a better direction, right? They want to be talking up
to the board, to the executive team, and creating confidence in that risk posture,
rather than talking down or
off to the side, depending on how that org chart looks, to the engineers and saying,
fix this, fix that, and then fix this other thing, A, B, and C, right? I think the problem is that
everyone in a security role at an organization of any size at this point is doing 90% of the latter
and only about 10% of the former, right? They're acting as gatekeepers, not as guardrails. They're not defining policy. They're spending all of their time creating Jira
tickets and all of their time tracking down who owns the piece of code that got deployed to this
pod on EKS that's throwing all of these errors on my console. And how can I get the person to
make a decision to actually take an action that stops these notifications
from happening, right? So all they're doing is throwing footballs down the field without knowing
if there's a receiver there, right? And I think that takes away from the job that our security
analysts really should be doing, which is creating those guardrails, which is having confidence that
the policy they set is readily understood by the developers making decisions, and that that's happening in
an automated way without them having to create friction by bothering people all the time.
I don't think security people want to be hated by the development teams that they work with,
but they are. And the reason they are is I think fundamentally we lack the tooling.
They are the barrier method.
Exactly. And we lack the processes to get the right intelligence in a way that's consumable by the engineers when they're doing their job and not after the fact, which is typically when the security people have done their jobs.
It's sad, but true. I wish that there were a better way to address these things. And yet, here we are. If only there were a better way to address these things. Look, I wouldn't be here
at Snyk if I didn't think there were a better way. And I wouldn't be coming on shows like yours
to talk to the engineering communities, people who have walked the walk, who have built those
Terraform files that contain these misconfigurations, not because they're bad people or because they're
lazy or because they don't do their jobs well, but because they lack the visibility. They didn't have the understanding that that default is
actually insecure, because how would I know that otherwise, right? I'm building software.
I don't see myself as an expert on infrastructure, right, or on Linux packages or on cyclomatic
complexity or on any of these other things. I'm just trying to stay in my lane and do my job.
It's not my fault that the software has become too complex for me to understand, right? But my
management doesn't understand that. And so I constantly have white knuckles worrying that,
you know, the next breach is going to be my fault. So I think the way forward really has to be,
how do we make our developers stakeholders in the risk being introduced by the software they write to
the organization. And that means everything we've been talking about. It means prioritization.
It means understanding how the different layers of the stack affect each other, especially the
cloud pieces. It means an extensible platform that lets me write code against it to inject my own
reasoning, right? The piece that we haven't talked about here is
that that risk calculation doesn't just involve technical aspects. There's also business
intelligence that's involved, right? What are my critical applications, right? What actually
causes me to lose significant amounts of money if those services go offline? We at Snyk can't
tell that. We can't run a scanner to say, these are your crown jewel services that can't ever go down.
But you can know that as an organization. So where we're going with the platform is opening up that
extensible process, creating APIs for you to be able to affect that risk triage, right? So that as
the creators of guardrails, as the security team, you are saying, here's how we want our developers
to prioritize. Here are all of the
factors that go into that decision making. And then you can be confident that in their environment,
back over in developer land, when I'm looking at IntelliJ or on my local command line,
I am seeing the guardrails that my security team has set for me. And I'm confident that I'm fixing
the right thing. And frankly, I'm grateful because I'm fixing it at the right time and I'm doing it in
such a way and with a tool set that actually is helping me fix it rather than just telling me I've
done something wrong, right? Because everything we do at Snyk focuses on identifying the solution,
not necessarily identifying the problem. It's great to know that I've got an unencrypted S3
bucket, but it's a whole lot better if you give me the line of code and tell me exactly where I have to copy and paste it so I can go on to the next thing,
rather than spending an hour trying to figure out where I put that line and what I actually
have to change it to. I often say that the most valuable currency for a developer, for a software
engineer, it's not money, it's not time, it's not compute power or anything like that.
It's the right context, right? I actually have to understand what are the implications of the
decision that I'm making. And I need that to be in my own environment, not after the fact,
because that's what creates friction within an organization is when I could have known earlier,
and I could have known better. But instead, I had to guess. I had to write a bunch of code
that relies on the thing that was wrong
and now I have to redo it all for no good reason
other than the tooling just hadn't adapted
to the way modern software is built.
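In spirit, that extensibility might look like the sketch below: the security team encodes business context, crown jewels included, as code that re-prioritizes findings. The finding shape, the service names, and the triage function are all hypothetical; Snyk's actual APIs will differ.

```python
# Hypothetical: the security team declares which services are business-critical.
CROWN_JEWELS = {"payments-service", "auth-service"}

def triage(finding: dict) -> str:
    """Escalate the scanner's severity when the asset is business-critical."""
    if finding["service"] in CROWN_JEWELS:
        return "critical"  # business context overrides the scanner's view
    return finding["severity"]

finding = {"service": "payments-service", "severity": "medium"}
print(triage(finding))  # -> "critical"
```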
So one last question
before we wind up calling it a day here.
We are now heavily into what I will term pre-invent,
where we're starting to see a whole bunch of announcements
come out of the AWS universe in preparation for what I'm calling Crappy Cloud Hanukkah this year,
because I'm spending eight nights in Las Vegas. What are you doing these days with AWS specifically?
I know I keep seeing your name in conjunction with their announcements, so there's something
going on over there. Absolutely. No, we're extremely excited
about the partnership between Snyk and AWS.
Our vulnerability intelligence
is utilized as one of the data sources
for Amazon Inspector,
particularly around open source packages.
We're doing a lot of work around things
like the Code suite,
building Snyk into CodePipeline,
for example, to give developers using that Code
suite earlier visibility into those vulnerabilities. And really, I think the story kind of
expands from there, right? So we're moving forward with Amazon, recognizing that it is, you know,
sort of the de facto when we say cloud, very often we mean AWS. So we're going to have a tremendous
presence at reInvent this year. I'm going to be there as well. I think we're actually going to have a bunch of handouts with your face on them, is my understanding.
So please stop by the booth.
Would love to talk to folks, especially because we've now released the Snyk Cloud product and really completed that story.
So anything we can do to talk about how that additional context of the cloud helps engineers because it's all software all the way down.
Those are absolutely conversations we want to be having. Excellent. And we will, of course,
put links to all of these things in the show notes so people can simply click and there they are.
Thank you so much for taking all this time to speak with me. I appreciate it.
All right. Thank you so much, Corey. Hope to do it again next year.
Clinton Herget, Field CTO at Snyk. I'm cloud economist Corey Quinn, and this
is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your
podcast platform of choice. Whereas if you've hated this podcast, please leave a five-star
review on your podcast platform of choice, along with an angry comment telling me that I'm being
completely unfair to Azure, along with your
favorite tasting color of crayon. If your AWS bill keeps rising and your blood pressure is doing the
same, then you need the Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying.
The Duckbill Group works for you, not AWS.
We tailor recommendations to your business, and we get to the point.
Visit duckbillgroup.com to get started.

This has been a HumblePod production. Stay humble.