Screaming in the Cloud - “Snyk”ing into the Security Limelight with Clinton Herget
Episode Date: December 2, 2021. About Clinton: Clinton Herget is Principal Solutions Engineer at Snyk, where he focuses on helping our large enterprise and public sector clients on their journey to DevSecOps. A seasoned technologist, Clinton spent his 15+ year career prior to Snyk as a web software engineer, DevOps consultant, cloud solutions architect, and technical director in the systems integrator space, leading client delivery of complex agile technology solutions. Clinton is passionate about empowering software engineers and is a frequent conference speaker, developer advocate, and everything-as-code evangelist. Links: Try Snyk for free today at: https://snyk.co/Screaming-in-the-Cloud
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, Chief Cloud Economist at the
Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
This episode is sponsored in part by my friends at Thinkst Canary.
Most companies find out way too late that they've been breached, and Thinkst Canary changes this.
And I love how they do it.
Deploy Canaries and Canary tokens in minutes and then forget about them.
What's great is that then attackers tip their hand by touching them,
giving you one alert when it matters.
I use it myself and I only remember this when I get the weekly update with a
we're still here so you're aware from them.
It's glorious.
There's zero admin overhead to this.
There are effectively no false positives unless I do something foolish.
Canaries are deployed and loved on all seven continents. You can check out what people are saying at canary.love,
and their Kubeconfig Canarytoken is new and completely free as well. You can do an awful lot
without paying them a dime, which is one of the things I love about them. It's useful stuff and not a, oh, I wish I had money. No, it is
spectacular. Take a look. That's canary.love because it's genuinely rare to find a security
product that people talk about in terms of love. It's really just a neat thing to see. Canary.love,
thank you to Thinkst Canary for their support of my ridiculous, ridiculous nonsense.
Writing ad copy to fit in a 30-second slot is hard, but if anyone can do it, the folks at Quali can. Just like their Torque infrastructure automation platform can deliver complex
application environments anytime, anywhere, in just seconds instead of hours, days, or weeks.
Visit qtorque.io today to learn how you can spin up
application environments in about the same amount of time as it took you to listen to this ad.
Welcome to Screaming in the Cloud. I'm Corey Quinn. This promoted episode features Clinton
Herget, who's a principal solutions engineer at Snyk, or Sneak, or Cynic. Clinton,
thank you for joining me. How the heck do I pronounce your company's name?
That is always a great place to start, Corey. And we like to say it is "Sneak," as in sneaking around
or a pair of sneakers. Now, our colleagues in the UK do like to say "Snick," but that is because
they speak incorrectly. We will accept it. It is still
wrong, as long as you're not saying "Snake," because it really has nothing to do with plumbing, and we
prefer to avoid that association. Generally speaking, I try not to tell other people how
to run their business, but I will make an exception here because I can't take it anymore.
According to Crunchbase, your company has raised $1.4 billion. Buy a vowel, for God's sake,
how much could it possibly cost
for a single letter that clarifies all of this?
My God.
Yeah, but then we wouldn't spend the first 20 minutes
of every sales conversation
talking about how to pronounce the company name,
and we would need to fill that with content.
So I think we're just going to stay the course
from here on out.
I like that.
So you're a principal solutions engineer.
First, what does
that do? And secondly, I've known an awful lot of folks who I would consider problem engineers,
but they never self-describe that way. It's always solutions oriented.
Well, that's because I work for Snyk and we're not a problems company. Corey,
we're a solutions company. I like that. It's an interesting role, right? Because I work with some
of our biggest customers, a lot of our strategic partners here in North America. And I'm kind of the evangelist that comes out and says, hey, here's what sucks about being
a developer. Here's how we could maybe be better. And I want to connect with other engineers to say,
look, I share your pain. There might be an easier way if you, you know, give me a few minutes here
to talk about Snyk. So I've seen Snyk around for a while. I've had a few friends who've worked
there almost since the beginning, and they talk about this thing.
This was before I believe you had the Doberman logo
back in the early days.
And I keep periodically seeing you folks
in a variety of different contexts and different places.
Often I'll be installing something from Docker Hub,
for example, and it will mention that,
oh, there's a Snyk scan thing that has happened
on the command line, which is interesting
because I, to the best of my knowledge, don't pay Docker for things that I do
because, no, I'm going to build it myself out of Popsicle sticks. It's sort of my entire engineering
ethos. But I keep seeing you in different cases where, as best I am aware, I have never paid you
folks for services. What is it you do as a company? Because you're one of those folks that I just keep seeing
again and again and again, but I can't actually put my finger on what it is you do. Yeah, you know,
most people aren't aware that Popsicle sticks are actually a CNCF graduated project. So, you know.
Oh, and they're load-bearing in almost every piece of significant technical debt over the last 50
years. Absolutely. Look at your bill of materials. It's there. Well, here's where I can drop in the
other fun fact about Snyk's name. It's actually an acronym, right? It stands for So Now You Know.
So now you know that much, at least. Popsicle sticks, key component to any containerized
infrastructure. Look, Snyk is a developer security company, right? And people hear that
and go, I'm sorry, what? I'm a developer. I don't give a shit about security. Or I'm a security
person. Usually they don't say that out loud as often as you would hope, but it's like, that's not true. I say that I care about security an awful
lot. It's like, yeah, you say that therein lies the rub. Until you get a couple of drinks in them
at the party at reInvent and then the real stuff comes out, right? No, Snyk has always been
historically committed to the open source community. We want to help open source developers
every bit as much as, you know, we're helping the engineers at our top tier customers.
And that's because fundamentally, open source is inextricably linked to the way software
is developed today, right?
There is nobody not using open source.
And so we sort of have to be supporting those communities at the same time.
And that fundamentally is where the innovation is happening.
And my sales guys hate when I say this, right?
But you can get an amazing amount of value out of Snyk by using the freemium solution,
using the open source tooling that we've put out in the community.
You get full access to our vulnerability database, which is updated every day.
And if you're working on public projects, then that's going to be free forever, right?
We're fundamentally committed to making that work.
If you're an enterprise that happens to have money to spend, I guess we'll take that too,
right?
But my job is really talking to developers and figuring out, you know, how can we reduce the amount of pain
in your life through better security tooling? The challenging part is that your business,
although I confess is significantly larger than my business, we're sort of on some level solving
the same problem. And that sounds odd to say because I focus on fixing AWS bills
and you're focused on improving developer security,
but I'm moving up about six levels
to the idea that there are only two big problems
in the world of technology,
in the world of companies for that matter.
And the problem that we're solving
is the worst one of the two.
And that is reducing risk exposure.
It is about eliminating downside.
It's cost optimization. It's security tooling. It is insurance, et cetera, et cetera, et cetera.
And the other problem, the one that I've always found that is this thing that will get people
actually excited rather than something they feel obligated to do is speeding up time to market,
improving feature velocity, being able to deliver the right things sooner. That's the problem companies are biasing towards investing in extremely heavily. They'll
convene the board to come up with an answer there. That said, you stray closer into that problem
space than most security companies that I'm aware of just because you do, in fact,
speed up the developer process. Let people move faster, but do it safely, at least is my general understanding.
If I'm completely wrong on this and nope, we are purely risk mitigation, then this is
going to look fairly silly, but it wouldn't be the first time I put my foot in my mouth.
Yeah, Corey, it sounds like you really read the first three words of the website, right?
Develop fast, stay secure.
And I think that that fundamentally gets at the traditional alignment where security equals slow, right? Because risk mitigation is all about
preventing problematic things from going into production, but only doing that as a stop gate
at the end of the process, right? By essentially saying, we assume all developers are bad and want
to do bad things. And so we're going to put up this big gate and generate an 1100 page PDF and
then throw it back to them and say, now go figure out all of the bad things you did and how to fix them. And by the
way, you're already overshooting your delivery target, right? So there's no way to win in that
traditional model unless you're empowering developers earlier with the right context they
need to actually write more secure code to begin with, rather than remediating after the fact
when those fixes are actually most expensive. It's the idea of the people want to slow down
and protect things and not break are on the operations side of the world. And then you have
developers who want to ship things and you have that natural tension. So we're going to smash
them together and call it DevOps, which at least if nothing else leads to interesting stories on
stages, whether it actually leads to lasting cultural transformations, another thing entirely.
And then someone said, well, what about security? And the answer is, we have a security
department? And the answer is, yeah, you know, those grumpy people that say no all the time,
whenever we ask if we could do anything, oh, that's security department, I ignore them and
go around them instead. And it's, all right, well, we need help on that, so we're going to smash them
in too. Welcome to DevSecOps, which is basically buzzword-driven cultural development.
And here we are.
But there is something to be said, for you can no longer be the Department of No.
I would argue that you couldn't do that successfully previously, but at least now we're a little more aware of it.
I think you could certainly do that when you were deploying software a couple times a year, right?
Because you could build in all of the time to very expensively and time-consumingly fix things after the fact, right? We're no longer in that world.
I think when you're deploying every few seconds or few minutes, what you need is tooling that,
first of all, runs at that speed, that gives developers insights into what risk they're
bringing on board with that application once it's deployed, but then also gives them the context they actually need to fix things, right? I mean, regardless of where those vulnerabilities
are found, it still ultimately is a line of code that has to be written by a developer and committed
and pushed through a pipeline to make it back into production. And that's true whether we're
talking about application security and proprietary code, we're talking about vulnerabilities in
open source, vulnerabilities in the container, infrastructure as code. I mean, it used to be that a network
vulnerability was fixed by somebody going into the data center, unplugging a Cat5 cable and
plugging it in somewhere else, right? I mean, that was the definition of network security.
It was a hardware problem. Now networking is software-defined.
The firewall I trust is basically a wire cutter. Yeah, cut through the entire cable,
and that is the only secure firewall. And it's like, oh no, no, there are side channel attacks. It's not
completely going to solve things for you. Yeah. Well, you know, without naming names, there are
certainly vendors in the security space that still consider mitigation to be shutting down access to
a workload, right? Like let's remediate by taking this off of the internet and allowing it to no
longer be accessible. I don't think it comes from a security standpoint,
but that does feel like it's a disturbing proportion of Google's product strategy.
Absolutely. But, you know, I do think maybe we can take the forward-looking step of saying there are ways to fix issues while keeping applications online at the same time.
For example, by arming engineers with the security intelligence they need when they're
making decisions about what goes into those applications. Because those wire cutters now, that's a line in a YAML file, right? That's a
Kubernetes deployment, that's a CloudFormation template, and that is living in code in the same
repo with everything else, with all of the other logic. And so it's fundamentally indistinguishable
at the point where all security is really now developer security, except the security tooling
available doesn't speak to the developer.
It doesn't integrate into their workflow. It doesn't enable them to make remediations.
It's still slapping them on the wrist. And this is why I think when you talk about, to invoke one of
the most overused buzzwords in the security industry, when you talk about shifting left,
that's really only half the story. I mean, if you're taking a traditional solution that's
designed to slow things down and shifting that into the developer workflow,
you're just slowing them down earlier.
You're not enabling them with
the better decision-making capacity so they can say,
oh, I now understand the risk that I'm bringing on board
by not sanitizing a string before I
dump it into a SQL query.
But now I understand that better
because Snyk is giving me that information at
the right time when I don't have to context switch out of it, which is as I'm writing that line of code to begin with.
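To make that concrete: the usual fix for an unsanitized string headed into a SQL query is a parameterized query rather than hand-rolled escaping. Here is a minimal sketch in Python using the standard-library sqlite3 module; the table, data, and input are invented for illustration and are not anything Snyk generates.

```python
import sqlite3

# A throwaway in-memory database, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice'; DROP TABLE users; --"  # attacker-controlled text

# Risky: string interpolation splices the input into the SQL itself, e.g.
#   f"SELECT * FROM users WHERE name = '{user_input}'"
# Safer: a parameterized query; the driver binds the value as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the injection payload is just an unmatched name
```

The point of the tooling is exactly that nudge at typing time: swap the interpolated string for the bound parameter before the code ever leaves the editor.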
When I look at your website, and I'm really, really hoping that your marketing folks don't
turn me into a liar on this one between the time we've recorded this and the time it sees the light
of day in a week or so, it's notable because you are a security vendor, but you almost wouldn't know that from your
website. And that is a compliment because at no point, start to finish on the landing page
at snyk.io, do I see anything that codes to hackers are coming to kill you. Give us money
immediately to protect yourself. You're not slinging FUD. You're talking entirely about how to improve velocity. The closest it gets to
even mentioning security stuff is ship on time with peace of mind. That is as close as it gets
to talking about security stuff. There is no fear based on this. And you don't treat people
like children and say security is extremely important. Thank you, professor. I really appreciate that
helpful tip. Yeah, you know, again, I think we take the very controversial approach that developers
are not bad people who want to make applications less secure, right? And I think, again, when you
go into that 40-year trajectory of that constant tension between the engineering and the security
sides of the house. It really involves certain
perceptions about what those other people are like. Security are bad and want to shut everything
down. Developers are, you know, wild cowboys who don't care about sanitization and are just
introducing a bunch of risk, right? Where Snyk comes in is fundamentally saying, hey, we can
actually all live together in a world where we recognize there's pain on both sides. And look,
Corey, I'm coming to you after essentially waking up every day
for 20 years and writing code of some kind or other.
And I can tell you, developers are already scared enough, man.
It is a fearful and anxiety-ridden experience to know that
you're not completely in command of what happens to that application
once it leaves your IDE, right?
You know at some point you're going to get that PDF dumped on you. You're
going to have a build block. You're going to have a bug report come in from a very important customer
at three o'clock in the morning and you're going to have to do something about it. I think every
software engineer in the world carries that fear around with them. They don't have to be told,
you have the capacity to do bad stuff here and you should be better at it. What they need is
somebody to tell them, here's how to do things better, right? Here's not necessarily even why a cross-site scripting
attack is dangerous, although we can certainly educate you on that as well, but here's what you
need to do to remediate it. Here's how other developers have fixed that in applications that
look like yours. And if you get that intelligence at the right point, then it becomes truly, to go
back to your original question, it becomes about solutions rather than about problems, right? The last thing we ever want
to do is adopt that traditional approach of saying, you did a bad thing. It's your fault.
You have to go figure out what to do. And then, by the way, you have to do all the refactoring on
top of that because we didn't tell you you did the bad thing until three weeks later when that
traditional SAST tool finally finished running. Exactly. It's a question of how much can you reduce that feedback loop? If I get
pinged 60 seconds after I commit code that there's a problem with it, great. I still have all that in
my head, mostly, I hope. But if it's six months later, it's who even wrote this? And I pull up
git blame and, ah, crap, it was me. What was I possibly thinking back then? It's about being
able to move rapidly and fix things as, I guess,
as far as early in the process as possible. The whole shift left movement. That's important.
That's valuable. Yeah, the context switching is so expensive, right? Because the minute you switch
away from that file, you're reading some documentation, you're out of that world.
Most of a developer's time is spent getting into and out of different contexts. Once you're in
there, I mean, you can rattle off 40 lines of code in a sitting and actually clear a ticket,
and you feel really good about yourself, right? The next day, when that comes back from QA saying
you did something wrong here, that's the painful part of having to get back in it. And by the time
you've already done that, you've doubled the amount of time you've spent on that feature.
So it's all about integrating the right intelligence in the right context at the right time and doing so in such a way that we're not throwing around blame,
that we're not saying you should have known better. We're saying we want to help you do
this better because ultimately you're going to write another SQL query. That's okay.
We hope that maybe this will inspire you to sanitize those strings properly,
and we're going to give you some suggestions on how to do that.
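One way to picture the tighter feedback loop Corey and Clinton are describing is a small gate in the pipeline that reads a scanner's JSON report and fails the build on serious findings within minutes of the commit, not weeks later. A rough sketch in Python follows; the report filename, schema, and severity labels are assumptions for illustration, not Snyk's actual output format.

```python
import json
import sys

# Hypothetical report written earlier in the pipeline by whatever scanner you
# run; the path and field names here are assumptions for the sketch.
REPORT_PATH = "scan-report.json"
FAIL_ON = {"high", "critical"}

with open(REPORT_PATH) as f:
    report = json.load(f)

blocking = [
    issue for issue in report.get("vulnerabilities", [])
    if issue.get("severity", "").lower() in FAIL_ON
]

if blocking:
    for issue in blocking:
        # Surface the remediation hint alongside the finding, if present.
        print(f"{issue.get('id')}: {issue.get('title')} "
              f"(fix: {issue.get('remediation', 'see advisory')})")
    sys.exit(1)  # a non-zero exit fails the CI job

print("No blocking vulnerabilities found.")
```

Run right after the commit, a gate like this keeps the finding and the fix suggestion in the same sitting as the code that caused it.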
Yeah. Developer time is way more expensive than the infrastructure. That is, I think,
a little understood facet of how this works from an engineering perspective, because an awful lot
of us came up in this industry considering our time to be free. Because when we were doing this
as a hobby, in some cases, it was. When I was in my dorm room back many years ago, as I was
basically in the process of being expelled from boarding school, it was very clearly my time was not worth a whole hell of a lot to anyone at that point.
Speaking of expensive things, I want to talk for a minute about your pricing.
And what I like about this is, let me be clear here, I am a big fan of taking shortcuts wherever I can. And one of the shortcuts I love doing,
and I don't know if I've talked about it on this show before, is when I'm talking to a company
and I need to figure out, do they know what they're doing or are they clowns? I cheat and I
go to the pricing page. And there are two big things that I look for, and you have them both.
The first is that over on the far left side of the spectrum, it's
do you have a free option? And yes, you do. And click here to get started immediately. Great,
because it's three in the morning. I need to get something done. I'm under a deadline. I do not
have time for a conversation with sales. And as an engineer, I absolutely don't want to deal with
that type of sales process because it feels weird to go and ask my boss to go ahead and sign
off on something because I feel like my spending authority is capped at $20. Now that I have a
little more context, I understand exactly why my spending authority was capped at $20 back when I
was an engineer. Yeah, exactly right. And so it's not only that commitment to ensuring every software
engineer in the world can have access to Snyk immediately by making one click
because ultimately we're committed to that community, right?
There's 3 million developers using Snyk currently.
That's about 10% of all engineers in the world.
We're very proud of that number.
We expect that to continue to grow.
And I think it shows that there is need out there, right?
And if we can enable every engineer who's up at 3 a.m.
faced with some security prospect to say, you know,
it is as simple as getting a free account and getting a vulnerability report, getting the
remediation advice, being able to sleep easier. I think we're successful as a company regardless of
what the bottom line is. But when you look at how to scale that into the enterprise, the way security
solutions are priced, I mean, it's like throwing a bunch of wet noodles at the wall and seeing what
sticks, right? Yes. And that's the other piece of your pricing that I like,
because a lot of people are going to be listening to that. What I'm saying right now is about, oh,
well, I have a free tier. Why do you think we're clowns? It's, ah, because the other end is just
as important, if not more so, which is there has to be an enterprise tier and the price for that
has got to be click here to have a conversation. And the reason behind that is,
if you work in procurement, which is very often who's going to be reaching out on something like
this, you are going to need custom contracts. You are going to want a long-term enterprise deal.
And if the top tier is X dollars per thing that's already there, it reeks of unsophisticated vendor
to a buyer in that position. And it makes people at
big blue chip companies think, oh, they don't know how to deal with someone at our scale.
Pricing is messaging. And I think people lose sight of that. You absolutely say the right
things on both ends. I look at this and there's nothing I would change or improve about your
pricing page, which to be honest is really rare. I'm not sure all of our sales leaders would agree
with you there, but I will pass that feedback along. Well, and the other thing I would add to
that is what everyone who's in a pricing conversation wants is predictability about
what is this going to be in the future, right? And so we base our pricing on how many developers
are in your organization, right? That's probably a number you know. That's probably a number that
you can predict over time. We're not going to say, how many CPUs are we using, right? What's the footprint
of the cloud resources we're deploying to scan your stuff? These are all things that you have
very little control over. And there is alchemy there that introduces a financial risk into that
situation. And we're all about risk mitigation at scale, right? You don't pop up halfway through a cycle of, oh, you've gone on a hiring spree. Time to go ahead
and pay us a bunch more money you didn't plan for or budget for. I've had vendors pop up a quarter
after I signed a deal repeatedly, and it drives me up a wall because back in my engineering days,
it was great. Now I have to spend time on this that I hadn't planned for. I have to go to my
boss and ask for more money. Never a great conversation. And as a cherry on
top, I get to look like I don't know how to manage vendors for crap. It's just everyone is angry
about those conversations. And even the salespeople reaching out had the decency to act a little
sheepish about having to have that conversation with me. The best ones do at least. Well, and on
top of that, you know, maybe that tool has been capped so that now your builds are breaking because
you went one over your count, right?
Yeah, I love it when you fail in production.
That's my favorite thing.
It's like, all right, we're going to wind up not scanning for security stuff anymore.
And if you go five beyond your cap, we're going to start introducing vulnerabilities.
That's awesome.
Just great plan.
I'm kidding.
I'm kidding.
I want to be very clear.
I have never heard a whisper of an actual vendor doing that on purpose anyway.
Exactly right.
And look, we want to make it as easy as possible.
And that's why, for example, we're on the AWS marketplace.
You can use your existing EDP program to buy Snyk.
50% of your spend on Snyk then winds up counting toward your spend commit, which is always
an interesting approach that some people are like, oh, so we can wind up transferring the
money that we're spending on a vendor to count toward our commit. But in many cases, it's how much are
you spending on other third-party vendors in this space? Because you're getting excited about a few
tens of thousands in most cases, and you have a $50 million annual commit. What are you doing
there, buddy? That's like trying to become a millionaire via credit card points. It doesn't
usually pan
out that way. Fair enough. Yeah. And look, we're very proud of that partnership with Amazon. And
look, hey, if they can lock some of our customers into $50 million a year spend contracts, we'll
take a few pennies on that, right? Oh, yeah. As a vendor, you'd be silly not to. It makes sense.
But you're doing significantly more than that. As of this week being reInvent week,
you are, well, tell me about it. Yeah, Corey, we are thrilled to announce this week that AWS is now integrating with Snyk's vulnerability database within Amazon Inspector. And this is going to
bring the best of breed security intelligence with a curated vulnerability database, including all of
our proprietary research around things like exploit maturity, reachability, vulnerable conditions, social trends on vulnerabilities,
all available within Amazon Inspector to any developer utilizing it. We also have an AWS
CodePipeline integration that makes it easy for anyone utilizing AWS for your CI/CD to get
immediate feedback on vulnerabilities in your applications as they move
through that pipeline. And remember, we're never just going to say we've identified a vulnerability,
now you need to figure out what to do with it. We're always going to integrate the remediation
advice because our audience at the end of the day is the developer whose job it is to make the fix
and who has such a wide variety of responsibility these days, the best we can do
is say to them, not just we found something wrong, but here's the solution that we think
you should implement to get that secure code back out into production. This episode is sponsored by
our friends at Cloud Academy. That's right. They have a different lab challenge up for you called
Code Red, repair an AWS environment with a Linux Bastion Host.
What does it do?
Well, it's going to assess your ability to troubleshoot AWS networking
and security issues in a production-like environment.
Well, kind of.
It's not quite like production because some exec is not standing over your shoulder
wetting themselves while screaming.
But, you know,
you can pretend. In fact, I'm reasonably certain you can retain someone specifically for that purpose, should you so choose. If you are the first prize winner who completes all four challenges
with the fastest time, you'll win a thousand bucks. If you haven't started yet, you can still
complete all four challenges between now and December 3rd to be eligible for the grand prize.
There's only a few days left until the whole thing ends,
so I'd get on it now.
Visit cloudacademy.com slash Corey.
That's cloudacademy.com slash C-O-R-E-Y.
For God's sake, don't drop the E in my name.
That drives me nuts.
And thank you again to Cloud Academy
for not only promoting my ridiculous nonsense,
but for continuing to help teach people how to work in this ridiculous environment.
First, congratulations.
It's neat to have a first-party integration like that with an AWS service.
As opposed to, you know, their somewhat storied approach of,
hey, it's an open-source project.
We're just going to implement something that's API compatible ourselves and irritate people. Now, to be clear, my problem is not that
you should expect to build anything and not face competition. My concern is a little bit more along
the lines of, huh, why is that same company always the first in line to compete with something,
which is neither here nor there. Security is also one of those areas where I think competition's
important. You want a continual background level of investment in the space because this stuff is super important. What I like about Snyk and a number of companies in this space is I know exactly where you stand. You're integrating with Inspector, which is a great service. But you're not, I don't believe, integrating with their other security services, such as Amazon Detective, the Audit Manager, if you want to
consider that one of them, Amazon Macie, AWS Firewall Manager, AWS Shield, the Network Firewall,
IoT Device Defender, CloudTrail, Config, Amazon Inspector is the one you're there, but not really
Security Hub or GuardDuty or IAM
itself. And I look at all of these services. I mean, IAM is free, of course, but the rest are
very much not. And I do some basic arithmetic and I'm starting to realize that if I configure all
of the various AWS security services together and what that's going to cost me, it turns out the
answer is more than the data breach. So on some level, it's one of those,
at what point is it so confusing
and it starts to look like a cross-sell deal
between all of the different services
and turn them all on
because you could never have too much security.
We still have to ship things eventually.
And their security messaging
has been extraordinarily confused for a long time.
At some level, the fact that you are now integrating with
them on the inspector side means that for the first time, I think I understand what inspector
does now, which is more than a little messed up, but here we are. Indeed. Well, the first thing I
would say on that is, you know, stay tuned. As we move into the new year, I think you're going to
see a lot more announcements, both, you know, on the AWS side, but also kind of industry-wide in terms of integration with Snyk. That vulnerability database feed also,
as you mentioned earlier, in use in Docker Hub. So anyone with containers in Docker Hub can take
advantage of scanning with our Snyk Container tool. We have other integrations with Red Hat,
for example, and actually many other companies utilizing that DB feed to, again, get access to that best-in-breed vulnerability data.
When you talk about that model of being out-competed on the security front,
I think that's more difficult to do when you're actually talking about data, right?
Like tooling on some level, and I might get in trouble for saying this, but tooling is commodity, right?
Somebody tomorrow is going to come out with a better tool to do a thing a little bit faster in a little bit more intuitive way.
What can't be easily replicated is the data and intelligence behind that, right?
And so the secret sauce that makes you folks work is not the fact of, ah, we could fire off or catch a web hook and then run the following command against the code base. That is sure it's handy
and it's useful and you're good at that, but that is not the reason that people become your customer.
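Corey's point that the mechanics are commodity is easy to see: catching a webhook and kicking off a scan is a handful of lines anywhere. A toy receiver, using only the Python standard library and a placeholder scan command, is sketched below; none of it reflects how Snyk is actually wired, and the event fields are assumed. The differentiated part is the intelligence the scan consults, not this glue.

```python
import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the (hypothetical) push-event payload sent by the code host.
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        repo = event.get("repository", {}).get("clone_url", "")

        # Hand off to whatever scanner you use; the command is a placeholder.
        subprocess.Popen(["echo", f"scanning {repo}"])

        self.send_response(202)
        self.end_headers()

if __name__ == "__main__":
    # The plumbing really is this small; the value lives in the scan itself.
    HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```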
Exactly right.
Look, there's a lot of tools that can resolve the dependency tree within your open source application, right?
We can do that as well.
We leverage a lot of open source to do that.
You know, we're very open with that.
As I mentioned earlier, a lot of Snyk tooling is available on GitHub.
You can see how it works.
That code is public. Really, the value we're providing is in that curated security research that our dedicated team is working on day in and day out and verifying
public security data that's out in CVEs. Is this actually accurate? Do we agree with the severity
rating? Might there be other factors that could modify that severity rating? What happens when
you are scanning an application that might have some vulnerable conditions versus others?
Don't you want to prioritize those vulnerabilities differently?
What happens at runtime, right?
If you're deploying an application to an EC2 instance with an open SSH ingress into your security group, that's going to make certain vulnerabilities a lot bigger risk than if you've got your IAC configured correctly, right? So really the overall mission of Snyk as we move into this broader kind
of ASPM application, you know, security posture management space is to say, how many different
signals across that SDLC can we combine in intuitive ways for the developer to understand
that risk at the right time with the right context and armed with the remediation advice to make a
better decision as they're writing their code, you know, rather than after the fact. If I could sum it all up,
kind of, that's the vision of where we are both today and ultimately where we're going.
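As a toy illustration of the signal-combining idea, and emphatically not Snyk's scoring model: a finding's priority can be nudged by runtime context, such as whether the workload is internet-facing or whether the vulnerable package is even loaded. Every field name and weight in the Python sketch below is invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    package: str
    base_severity: float   # e.g. a CVSS-like 0-10 score from the advisory
    internet_facing: bool  # runtime signal: is the workload exposed?
    package_loaded: bool   # runtime signal: is the vulnerable code in use?

def priority(f: Finding) -> float:
    """Combine static and runtime signals into one rough priority score.

    The multipliers are invented for illustration; the point is only that
    context moves a finding up or down the queue."""
    score = f.base_severity
    score *= 1.5 if f.internet_facing else 0.8
    score *= 1.2 if f.package_loaded else 0.5
    return round(score, 1)

findings = [
    Finding("libexample", 7.5, internet_facing=True, package_loaded=True),
    Finding("libother", 9.0, internet_facing=False, package_loaded=False),
]
for f in sorted(findings, key=priority, reverse=True):
    print(f.package, priority(f))
```

Even this crude version flips the ordering: the lower-scored advisory on the exposed, loaded package outranks the higher-scored one nobody can reach.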
There also needs to be an understanding of who the customer is. If I go through the launch wizard
and spin up in a brand new account my first EC2 instance, and I spin up an instance by going
through the wizard, the first thing it does
is yell at me because, ah, that SSH port is open to the world, which you need to get into it once
it's there. So it sets that up for me and yells at me all in the same breath. And it's, this is not
a promising start. I kind of need that to get into it. Conversely, if you're not someone learning
this stuff for the first time and you're, oh, I don't know, a production engineer at a bank, you care quite a bit differently in
that use case about things like open SSH groups, security posture, et cetera, et cetera. An awful
lot of the tooling is, ah, you're failing this benchmark and this benchmark and this benchmark
from CIS and the rest and all these rules of, oh, you're not encrypting your data at rest. Well, it's in an AWS data center environment.
Yeah, if someone can break in and steal the drive from multiple facilities and somehow recombine
them together and get out alive, yeah, that's really not my threat model, but it's easier to
turn it on and check a box and make an auditor go away. But that's not where I would spend the bulk of my energies if I'm trying to improve my security
posture. And it turns into rote checklists super easily. The thing I've always appreciated about
the stuff that your tooling in the open source world has highlighted is it's not nonsense.
And I really can't overstate just how valuable that is.
Absolutely. And that comes from a combination of signals across that SDLC from the open source, from the container, from the proprietary code,
from the IaC, but then also what's happening at runtime, right? Like how are those containers
actually deployed onto EKS? What ports are open? What running binaries are on the container that
might influence what packages you choose to upgrade versus not. All of that matters. And what, you know, the issue I think now is getting that
visibility to the developer at the right time so that they can make it actionable. And the thing
about infrastructure as code, I think that's really interesting and not super well understood
is a lot of those defaults are really insecure and developers have no idea, right? Like they
might not be aware
that if you don't define that encryption for your S3 bucket, it'll happily deploy unencrypted,
right? Yes, that's a compliance problem, but that's also potentially an exacerbator of other
vulnerabilities that might be in that application. But you only see those when you can combine and
have a single pane of glass that gives you the runtime signaling plus everything that's happening in the application armed with the correct information to actually
remediate that at the time and say, don't you think you wanted to add AES encryption
to this bucket?
Don't you think you wanted to close down port 22?
And also combine that with your internal business logic, right?
Maybe for an internal only application that never transits beyond your VPC perimeter,
sure, it's fine to have port 22 open, right?
There's just going to be people within your zero-trust environment authenticating to it.
But for your production web application, that might be a different story.
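The two misconfigurations that keep coming up here, unencrypted S3 buckets and security groups with port 22 open to the world, are straightforward to express as checks. Below is a rough sketch with boto3, assuming AWS credentials are already configured; it only reports findings, and, as Clinton notes, whether an open port 22 is acceptable still depends on the business context.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
ec2 = boto3.client("ec2")

# Buckets without a default-encryption configuration.
for bucket in s3.list_buckets()["Buckets"]:
    try:
        s3.get_bucket_encryption(Bucket=bucket["Name"])
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"unencrypted bucket: {bucket['Name']}")

# Security groups that allow SSH from anywhere.
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for perm in sg.get("IpPermissions", []):
        if perm.get("FromPort") == 22 and any(
            r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])
        ):
            print(f"port 22 open to the world: {sg['GroupId']}")
```

Whether each line of output is a compliance checkbox or a real exposure is exactly the judgment call the conversation keeps circling back to.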
There are other concerns, too.
For example, I'm sitting here complaining about the idea of encrypting at rest in an
AWS environment.
But if you've signed customer contracts that state that you're doing it, you'd better freaking
do it, as opposed to, well, I know what the actual security risk is and it's no big deal.
Yeah, don't make that decision.
If you are contractually obligated to do a thing, don't YOLO it.
Do what you say you're going to do.
That's that whole integrity thing.
Oh, sure.
And look, in a battle between security and compliance, compliance always wins, right? But from a developer perspective, I don't know that we on the front lines writing code actually differentiate, right? It's, am I going to get shut down and get the nastygram
that says, you know, we couldn't launch this for X, Y, and Z reason. Now everybody on my team hates
me. My lead dev is on me. Now there's a bunch of merge conflicts because my branch is behind.
I want to get that out into production. But in order to do that, I need information on
how are all of these signals going to be compiled together in a way that, you know,
creates that red light or green light on the risk dashboard later on. But up until I think, you know, relatively recently,
I don't have visibility into that except to launch the commit, you know, start the build and see what
happens. And then I have that context switching problem, right? Because it's hours or days later
that I finally get that signal back. So yes, I think we have a compliance story to tell from
the Snyk perspective as well. A lot of those same issues, you know, we're detecting, especially
with regard to infrastructure as code, but it ultimately is up to various parts of the
organization to work together and say, what balance do we want to strike between security
and velocity, right? Understanding that those are not mutually opposed. What we need is tooling and
more importantly, a culture that takes both into account and allows us to develop securely and fast at the same time. I want to thank you
so much for taking the time to speak with me about all of this. If people want to learn more,
where can they find you? And for God's sake, please don't say in your booth at reInvent.
I will not be at reInvent this year. I've had a little bit too much of the Vegas strip here recently.
No, I hear you.
Right now, the people going are those whose employers find them expendable, which is why
I'm there.
I wouldn't say that, Corey.
I think you'll do great.
And, you know, just make sure to bank all your vacation for a couple weeks after.
Look, come to snyk.io, start a conversation, but more importantly, just start using it,
right?
I don't want to give you the sales pitch.
I want you to see the value in the tooling.
And the easiest way to do that as an engineer is just to start using it. And if
there is value there, you want to bring it to your enterprise. I would love to have that conversation
and move forward. But engineer to engineer, like figure out if this is going to work for you.
Does it make your life easier? Does it reduce the pain and anxiety you feel before making that
commit into the production branch? And if so, then yeah, we would
love to talk. And we'll, of course, put links to that in the show notes. Thank you so much for
speaking to me today. I really appreciate it. Thank you, Corey. Glad to do it. Clinton Herget,
Principal Solutions Engineer at Snyk. I'm cloud economist Corey Quinn, and this is Screaming in
the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice.
Whereas if you've hated this podcast,
please leave a five-star review
on your podcast platform of choice,
along with an angry comment yelling at Snyk
about how they're a terrible company
because they continually refuse
to patronize your side business
down at the Vowel Emporium.
If your AWS bill keeps rising
and your blood pressure is doing the same,
then you need the Duckbill Group.
We help companies fix their AWS bill
by making it smaller and less horrifying.
The Duckbill Group works for you, not AWS.
We tailor recommendations to your business and we get to the point.
Visit duckbillgroup.com to get started.
This has been a HumblePod production.
Stay humble.