Screaming in the Cloud - The Future of Application Security with Andrew Peterson
Episode Date: September 25, 2019

About Andrew Peterson
Andrew Peterson is the CEO and Cofounder of Signal Sciences. Under Peterson’s leadership, Signal Sciences has become the #1 and most trusted provider of next-gen WAF and RASP technology and one of the fastest growing cybersecurity companies in the world. As CEO, Peterson is responsible for overseeing all business functions, go-to-market activities, and attainment of strategic, operational and financial goals. Prior to founding Signal Sciences, Peterson spent over fifteen years building leading-edge, high-performing product and sales teams across five continents with such companies as Etsy, Google, and the Clinton Foundation. In 2016, O’Reilly published his book Cracking Security Misconceptions to encourage non-security professionals to take part in organizational security. He graduated from Stanford University with a BA in Science, Technology, and Society.

Links Referenced
Twitter: @ampeters06
LinkedIn: https://www.linkedin.com/in/andrewmarshallpeterson
Signal Sciences
Sponsor: X-Team
Transcript
Hello and welcome to Screaming in the Cloud with your host, cloud economist Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.

This week's episode is sponsored by X-Team, a company that helps developers work remotely while working on first-class company environments. I've got to say, I'm pretty skeptical of remote work environments,
so I got on the phone with these folks
for about half an hour, and let me level with you.
I've got to say, I believe in what they're doing,
and their story is compelling.
If I didn't believe that, I promise you I wouldn't say it.
If you'd like to work for a company
that doesn't require you to live in San Francisco,
take my advice and check out X Team.
They're hiring both developers and DevOps engineers.
Check them out at the letter x-team.com slash cloud.
That's x-team.com slash cloud to learn more.
Thank you for sponsoring this ridiculous podcast.
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
I'm joined this week by Andrew Peterson, CEO of Signal Sciences. Welcome to the show, Andrew.
Thanks for having me, Corey.
Thanks for joining me. So let's start at the very beginning. What is a Signal Science?
And given that you have several of them, what do you folks do?
Yes. So the marketing term that we call ourselves is a next generation web application firewall and or a runtime application self-protection tool or RASP.
You can thank Gartner for that one. What we do is protect your web applications, your APIs, your microservices that you're running, basically
all layer seven type of traffic and across any type of platform that you're using it
on.
But that's essentially what we do.
In order to sort of do the disambiguation between, oh, a security vendor, I've never
seen one of those before.
I guess every security vendor, in fairness, tends to go in a bit
of a differentiating direction, if for no other reason than it's very sad
when they don't.
But I guess, what is it that makes Signal Sciences different than, I guess, the typical run-of-the-mill
endless sea of folks at RSA, all independently trying to sell me something with the word
firewall in it?
Yeah, so I'll start with this. I think a lot of it just comes from where we
come from and our background. We, for better or worse, didn't wake up one day dreaming
of being a security vendor. So we're sort of the accidental security vendors in some ways.
Our background was actually building technology and products and security tools in-house before.
So me and my two co-founders, we worked at a company called Etsy.
It was about 10 years ago when we first started working together.
And Etsy, a lot of people know, it's E-T-S-Y.
It's a big retail marketplace based out of New York.
And the backstory there is actually really interesting: you know, we were
trying to build a security program there that was really counter to, I think, a lot of the kind of
security programs that we had seen before, which the kind of old model of security was, look,
we're going to be grumpy. We're not going to like dealing with engineers. We're going to blame engineers for all the bad things that they put into our code all the time that makes it insecure.
And we're going to tell them, no, they can't do anything all the time.
Yeah, that doesn't really work when the goal of the entire business and especially the engineering program there was about how do we empower people to launch code faster, to make changes quicker, to make
our systems more resilient and more reliable, right?
These are all the sort of tenets of DevOps and doing that in a culture where you're getting
these siloed teams to really work together.
So in many ways, as we built the security program there, it was probably one of the
first DevSecOps types of,
you know, I hate sort of using all these buzzwords here, but it was really about how do you get these
three teams to work better together? And the lessons that we learned in the context of that
were, you know, not only is it really helpful when the security teams can not just say no,
but they can say, you know, yes and, and think about how they can really sort of contribute to
making these teams better.
But when you actually start thinking about if we can build products as a security team
that are not only incredibly easy for people to use,
but also make the sort of engineering teams feel like they are able to learn things
about the behavior of people using their applications or using their software
in ways that they never were before, that's actually helping them do their job.
They're actually going to want to pull and actually use those tools.
And so I think that's been the unique approach that we've had to becoming a vendor, is to say,
look, if we're going to go to the dark side and go to that other,
the sort of bad place of security, which is, you know, the world of security vendorship,
like we're going to do it with a lot of empathy for understanding like what actually works from a practical perspective,
because we were in-house before building a lot of this stuff. But also that our philosophy was that
the only way to scale security, you know, sort of effectively is to scale it actually through
the engineering teams. So we sure as heck better be working with and taking their feedback into account the entire time.
Absolutely.
The hard part, I think,
when you're running an application or a website
or any significantly scaled out service or product
is security has always been one of those things
that is inherently an afterthought
for most of us because everyone likes to say, oh, security is job zero or security is the
most important thing.
Well, a quick look at what companies spend research and development budget on proves
that is not true.
It always is something like insurance.
Most people should have some form of insurance, but you don't expect your house to burn down.
So it's never the number one thing you think about
when you're setting up something new.
But it does need to be something
that I guess folks care about.
I mean, I come from a similar perspective
where I look at cloud costing.
It's never job one.
It's always a trailing function.
How does that manifest for you,
both among your clients,
as well as for running a company yourself
and having a good security posture internally,
given that you are a security company
and security issues would be problematic?
I mean, it's a great question.
This harkens back to, you know,
I'll just sort of use our initial experience
when I was at Etsy, in-house before,
because you're really struggling over
and wrestling over these issues of, you know, every security person on the planet wants to say,
hey, security is the most important thing, right? And that's all that matters. And that's what we
should be prioritizing first. But I was running product teams before. And our goal of developing
products and features and software were really way more business related to say, hey, you know,
we have to get these features out
because we're trying to help the business improve, right?
We're trying to make money,
we're trying to help our customers,
we're trying to help people actually get things done.
And the initial work with our security team
was they were like,
hey, you have all these potential bugs
or vulnerabilities in your code.
So before you actually can ship this to production,
you have to solve these things.
Yeah, so that doesn't really work because guess what?
Like the business is going to move forward with or without security.
So that's kind of the former relationship that we've seen.
The thing that we've seen both with our customers now,
but our big aha moment when we were doing this stuff in-house before was,
look, if you're going to go and you're going to talk to an engineer and tell them that they have
security flaws in their code, they're going to come back and say like, well, yeah, that's one
of many bugs that I have. I know I have bugs in my code. My question is, why should I prioritize working on this one over other functional bugs that I could go solve that are actually going to help our product actually do better and help our customers actually use the technology better?
And in the past, I think a lot of the response from the security team was because they're like, well, because security is important and don't you not want to get hacked?
You know, look, that's I sort of get where that's coming from, but it's not terribly productive.
And it certainly doesn't sort of speak to the way I think engineers and especially modern engineering organizations are thinking about this stuff.
They need data.
Like you need to have some data behind why these things are important.
And so for us, what really changed our conversation around that type of thing specifically, right, like about why should we build in security initially?
Why should I even be fixing some of these bugs that I know are security bugs in the first place was, well, when we could set up monitoring on being able to track actually what attackers are even attempting to do across, you know, especially let's just use the application itself across the different, you know, parts of the application.
It really changed the conversation, right? Like before, when we would have these conversations, engineers would be like, well, I just don't think that we're actually even being attacked right now. So, you know, the security guy with the
tinfoil hat over there that's super paranoid about everything, of course he's going to be
screaming that we're going to be getting attacked all the time, but I just don't really think it's
happening. So the easiest way to respond to that was to say, okay, well, we
set up monitoring to track different types of attack behavior that's happening on different parts of the app. And, you know,
at least we could show, look, this is the subdirectory or the mobile site or, you know,
whatever part of the application that you're working on. And here's the actual attacks that
are happening on that right now. That really made it not only real, right? Like it's like, okay,
this is real data that we're looking at right now.
And this is actually really helpful.
But then it immediately actually got alignment
internally in the organization to say,
hey, you know, developer team,
I'm not actually fighting against the security team
who's just on my back all the time
trying to get me to fix things.
You know, security team and development team
are now aligned against the real problem,
which is the attackers on the outside who are trying to get in.
So that data and that visibility slash ability
to have that kind of detection on that type of behavior,
it just completely changes the conversation
that you can have between your security
and engineering teams in-house.
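The per-section attack monitoring Andrew describes can be sketched in a few lines. To be clear, the patterns, paths, and log format below are illustrative stand-ins, not Signal Sciences' actual detection logic; a real product uses far more robust signals:

```python
import re
from collections import Counter

# Naive illustrative attack signatures; real detection is far more sophisticated.
ATTACK_PATTERNS = {
    "sqli": re.compile(r"(?i)(union\s+select|'\s*or\s+1=1)"),
    "xss": re.compile(r"(?i)<script\b"),
    "traversal": re.compile(r"\.\./"),
}

def attacks_by_section(requests):
    """Count attack attempts per top-level path section of the app."""
    counts = Counter()
    for path, query in requests:
        # "/mobile/search" -> "/mobile", so the team owning that area sees it.
        section = "/" + path.strip("/").split("/")[0] if path.strip("/") else "/"
        for name, pattern in ATTACK_PATTERNS.items():
            if pattern.search(query) or pattern.search(path):
                counts[(section, name)] += 1
    return counts

# Simulated access-log entries: (path, query string).
log = [
    ("/mobile/search", "q=shoes"),
    ("/mobile/search", "q=' OR 1=1 --"),
    ("/listings/42", "ref=<script>alert(1)</script>"),
    ("/mobile/img", "file=../../etc/passwd"),
]

print(attacks_by_section(log))
```

Even a toy view like this turns "we're probably not being attacked" into "here are the probes hitting the part of the app you own right now."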
I think there's also the, we're seeing an emerging, I guess, class of vulnerability as far as when people wind up going to sleep at night and they work in a company, their prayers before bed are,
and finally, dear Lord, please don't let me be subject to a breach. But if I am, at least have it be something incredibly convoluted and clever,
not something stupid like an open S3 bucket or whatever it is that winds up.
Effectively, there's this narrative that's entered the public consciousness
that when a company suffers a data breach,
that they are obviously idiots who did not invest at all in cybersecurity
and they failed a very basic thing.
And I don't think that narrative works anymore.
I think that there's a lot of nuance to this.
I think that there's a tremendous number of interesting attack vectors that need to be
defended against.
Despite what we tell ourselves, it's never going to be the topmost job for a company
to care about.
But this stuff still happens.
And yes, it is a failure, especially when it's not your data that gets breached,
but rather the data that you've been entrusted with.
But in the public consciousness, it's still, oh, you got breached.
You must hire morons.
It isn't true.
It simply isn't.
Do you see that narrative changing at all in the public awareness?
Or is that a losing battle from the get-go?
Well, I do.
And I actually think it's a really important question because there's kind of two sides
to this, which is one is, you know, is it a losing battle for companies to sort of try
to change how they're actually sort of protecting themselves in the first place and try to change their security posture.
I think the second question that I think a lot about
as it relates to just kind of security professionals overall
is like, is there any way to win at security,
like at your job?
Like, are you basically just sitting there waiting to lose?
Which I think by and large it kind of is,
or at least it kind of has been for a long time.
But I think the thing that's changing and kind of the hope that I have for the industry that's changing a bit is,
I like to use the example a lot of like how operations has changed and how like sort of success for ops teams has changed. And I think in the past, you know, you sort of look 10 years ago about when ops and/or DevOps teams were a lot more immature. The expectation there was
like, look, we have to have 100% uptime. We will never go down. And it's a binary concept, right?
Like we're either up or we're down. And the goal is 100%. Very similar to
security, right? Like the goal is either we're breached or we're not breached. And there's no
sort of middle ground and nothing else matters. We should just try to be never breached ever.
And the reality is, whether that can actually happen in the more and more
complex technology world that we live in, where, as you said, Corey,
like there's more and more nuanced ways where people can actually get access to data and what
a breach looks like is going to be totally different in the future. I think we need to
really see a maturity the same way we've seen it on the ops side. I think like now when you look at
really great ops teams and you look at sort of how the success of those ops teams is even measured
in the first place is that it's not about uptime and downtime necessarily. But, you know, if you
do go down, you know, it's only a small functional component of your application or a small functional
component of the infrastructure. You're also doing a really good job of being able to identify when
those things go down and communicate that back to your consumers.
You're doing a good job of actually defining and fixing those things faster.
And so the success metrics are not, are you up or are you down, but it's how fast have you identified it?
How small can you contain the impact of that service outage?
How fast and how well can you actually communicate that
back to your customers?
And then ultimately, you know, you're going for
how small of an impact can you actually have
on their business and or their lives
or their use of the product.
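Those success metrics (how fast you detected, how fast you recovered) are easy to compute once incidents are recorded with timestamps. A toy sketch, with made-up field names for the incident records:

```python
from datetime import datetime

# Hypothetical incident records; the field names are illustrative.
incidents = [
    {"started": datetime(2019, 9, 1, 10, 0), "detected": datetime(2019, 9, 1, 10, 5),
     "resolved": datetime(2019, 9, 1, 10, 45)},
    {"started": datetime(2019, 9, 3, 14, 0), "detected": datetime(2019, 9, 3, 14, 15),
     "resolved": datetime(2019, 9, 3, 15, 0)},
]

def mean_minutes(incidents, start_key, end_key):
    """Average gap in minutes between two incident timestamps."""
    total = sum((i[end_key] - i[start_key]).total_seconds() for i in incidents)
    return total / len(incidents) / 60

mttd = mean_minutes(incidents, "started", "detected")   # mean time to detect
mttr = mean_minutes(incidents, "detected", "resolved")  # mean time to resolve
print(f"MTTD: {mttd:.1f} min, MTTR: {mttr:.1f} min")
```

The point of the analogy is that these are continuous, improvable numbers, unlike the binary "breached or not breached" framing.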
And I think that's really where I'm hopeful
and starting to see the security community go to,
but also like, I think I'm also starting to see this
from the consumer's expectation
is that so many consumers that I talked to
or just friends and family even are saying like,
I feel like having my data get breached
on various companies is kind of inevitable.
And their sort of judgment is on how that breach
actually happens.
You know, I hate sort of picking on specific breaches, but I think the Equifax breach, for example, has continued to stay in the limelight because of how poorly
it was handled and not necessarily because of the exact breach itself.
I mean, how many other breaches have come and gone in the last few years?
And the Equifax one keeps coming up, I think, because of a lot of the ways in which the
management team and/or the communications around it were handled.
So I think that that's the stuff where it's like, look, if you have really good communication,
we can start sort of scoping out our actual architecture and infrastructure such that we can reduce the surface area or the amount of data that actually gets breached in a given attack.
Those are going to be things that I think are bigger sort of success factors for security teams and security people.
And I'd like to think that's the future of what
consumers are going to look at to say, hey, this company really handled this well. Not just saying,
oh, they're just another one that got breached. They must all be dumb. But wow, like, you know,
of course they got breached because it's kind of inevitable to have that happen to some extent.
But I really feel like they were on top of their game. They really communicated this well
to me. And I actually feel in some ways safer knowing that they're so well-informed and we're
so fast to take action on it. I see an awful lot of companies with the mistaken idea that, well,
we're paying a large cloud vendor to run all of our infrastructure and they have a bunch of
services that they offer of varying degrees of utility. What do we need partners for?
Why can't we just have everything be first party and that's the end of it?
And the honest answer to that is, well, have you tried it?
That's why.
But you can't exactly say that to customers in some cases.
How do you find those conversations tend to unfold?
So there's a bunch of different things to unpack with this
because I think it's,
yeah, there's a bunch of angles to that.
I think the first is,
one of the things I've heard
from a lot of customers is,
you know, let's use AWS as an example
and sort of,
look, let's actually compare AWS and Azure, right?
Like as two different platforms here.
The one thing that folks say,
like AWS has a lot of features, right?
They've launched a lot of different types
of functional features around security.
And one of the biggest challenges
that I've heard people have using AWS is,
you know, especially if they're like,
let's say it's a development group.
They actually have all the intentions
of doing the right thing
by setting up the right security features in the first place.
But they're not security people themselves.
And when they talk to their sort of, let's say, network-focused security teams, the network-focused security teams don't actually give them a great roadmap for what to use in the first place.
So they're kind of on their own to try to figure out what to select to use from a feature perspective.
And, you know, they're not going to take one of all of them.
They're not going to be like, OK, I'll turn on 100 different features.
They're trying to figure out what are the basic ones that they start working with and turn those on.
And they're not really getting a whole lot of direction, I think, from the Amazon folks right now. So this is one of the areas that I've heard like Azure in some ways is actually more preferable
because it's a bit simpler and a lot more well-defined
about like, hey, here's kind of a reference architecture
from a security, you know, the security feature component
of what you should use when you're using this, right?
So that's sort of step one is,
I think folks need a little bit more guidance
on what they should be using or not.
Then step two would be, I think, to your point, Corey, it's like when they start using these features, the question is, OK, they're there.
But are they actually good and are they solving real problems?
And can I automate these things? And can you know, are they helping me to actually like stop real problems or are we kind of, are we reverting back to the, okay, well, if I just turn it on and I have it
there, then I've covered my, you know, covered my rear and, and I'm not going to get in trouble
from a compliance perspective or something. I don't like this, right? Like I think there,
there are certain people that are like, okay, I have some of these pieces in place.
I'm just kind of checking boxes.
Because to me, that's a reversion back
to compliance-based security rather than security
that's really focused on solving problems.
But this gets back into, you know, this issue
where it's really hard to find people
that have a lot of not only, let's call them cloud
and application
development skills, but then also have security skills. Most of the people that we have in the
security world have sort of a network-focused background, and most of the application developers
really know applications, but they don't necessarily know security. So that cross-section between the two,
I think, is really hard to find. It's hard then to set up systems to be able to say,
hey, here's kind of the features or the functionality that we're expecting
from these different types of products that we're going to add on in our cloud environments
so that they can actually take some type of objective view on the value
or the efficacy of that feature or that function.
Something you said just really resonates with specifically the idea of treating security as
something beyond the checkbox for the compliance dance. For anyone who's ever listened to me for
more than 30 seconds, this will come as no surprise, but I have challenges when it comes to
checking off box items and doing things for the sake of bureaucracy. I have zero tolerance for that, which makes me, well, not a great employee,
but that's beside the point.
It tends to make me not the sort of person you want in the room
dealing with auditors and dealing with compliance
because I tend to see those checkboxes and get at the,
okay, what is the actual intent behind this control?
What is the problem it is attempting to solve for?
And you step down that path and try and solve the actual issues. behind this control? What is the problem it is attempting to solve for?
And you step down that path and try and solve the actual issues.
Auditors want the box checked.
They want to make sure that you're rotating
your API credentials for your IAM users
every 60 days, for example.
Even NIST doesn't recommend that anymore.
And the real world that we live in here,
well, if you compromise a credential by
checking it into a repository on GitHub, the time between that happening and the time you start to
see it being exploited is less than a minute. A 90-day or 60-day rotation does
nothing to stop that. In many cases, the alarm that goes off that shows that that's been compromised
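One common answer to that one-minute exploitation window is to catch the key before it ever reaches GitHub, by scanning each diff for credential-shaped strings in a pre-commit hook. A minimal sketch: the patterns match the well-documented AWS access key formats (and use AWS's published example credentials), but everything else here, including the function name, is illustrative:

```python
import re

# AWS access key IDs are 20 chars starting with a known prefix such as AKIA;
# secret keys are 40 characters. These patterns are deliberately simple.
KEY_PATTERNS = [
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    re.compile(r"(?i)aws_secret_access_key\s*[=:]\s*\S{40}"),
]

def find_leaked_keys(diff_text):
    """Return every credential-shaped string found in a commit diff."""
    hits = []
    for pattern in KEY_PATTERNS:
        hits.extend(pattern.findall(diff_text))
    return hits

# AWS's documented example credentials, safe to use in tests.
diff = """
+AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
+aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
"""

leaks = find_leaked_keys(diff)
if leaks:
    print(f"refusing commit: {len(leaks)} credential(s) found")
```

Wiring something like this into a pre-commit hook closes the gap that rotation schedules never touch: the key never lands in the repository at all.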
is the bill. Surprise! You've been mining a whole bunch of Bitcoin this month. That's where it really tends
to fall to, I guess, fall by the wayside. But you can't, as a company, ever bypass compliance and
say, yeah, it's a stupid requirement, so we're not going to do it. You don't get the beautiful,
shiny certificate that you need to remain in business if you go down that path.
But how do you reconcile that?
Well, in general, I think the more Coreys of the world that can be running security programs,
the better, I think, for most everyone.
So we are fully in the camp of, as a product category, we help to check compliance boxes for a lot of our customers.
But we have, kind of from the very beginning, basically told people unapologetically,
like we are not in the business of solving compliance for people. We're in the business
of solving security problems. And if we can do both those things at the same time, great.
But, you know, the people we work with and the people that we're really seeing start to take over the security industry are really those that are highly focused and highly sort of engineering focused on exactly what you're saying.
Like, I'm looking to understand what the actual problem is I'm trying to solve and then come up with solutions to those problems. I think there's probably a series of security vendors out there that are terrified about this movement that's happening where you're getting more and more sort of less auditors controlling security programs.
Although there's certainly still compliance and audit programs within every company, including our own.
And there's an absolute sort of I think there is a world where those things are actually still valuable. But splitting compliance and security, I think, is actually quite important to the future of being able to solve these problems.
So, yeah, as it relates back to the original question around, like, you know, how do we sort of separate checkbox compliance that's not actually doing anything from real compliance, one of the positive movements I've really seen is that the actual
compliance standards, the people that are writing those compliance standards are actually becoming
more pragmatic about being able to solve these problems instead of just having sort of a checkbox
for checkbox sake. So that's one of the things that I've seen is actually like a much more
relaxed definition of different types of solutions so that on the actual sort of security engineering side, or let's call it the security side that's
focused not just on checking the checkbox, they really can start to say, hey, this functionality
that we have here that's really solving the core problem that the sort of spirit of what the compliance checkbox was trying to
check, like we're able to actually still check the compliance checkbox, even if it's not falling
into that exact definition, because they're either changing the definition to make it more relaxed,
or I think, you know, the actual auditors themselves are starting to understand and
get smarter about being able to be lax on those things.
So I think that's been a really great change to that sort of part of auditor versus security.
I think the other thing that you brought up, even in some of those examples, which is like, look, I don't actually care necessarily about rolling creds every 60 or 90 days. I really care about when someone has actually compromised those credentials,
because that's ultimately what is the root of the problem that you're trying to identify.
So focusing then and trying to get capabilities around detecting when that happens,
and then ideally having some sort of automated response to be able to actively respond to that issue.
That's really where like the technology based or the sort of engineering based security group goes immediately to saying that's how we identify that problem.
That's how we solve it.
And that is heads and shoulders or light years ahead of where we were five years ago of just being like, oh, well, we have this basic change control in place.
So, you know, everything's good.
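The detect-and-respond loop Andrew describes could be sketched like this. The event format, the known-IP baseline, and the "deactivate" step are all stand-ins for whatever audit log and cloud API a real team would wire in; this sketch just simulates them in memory:

```python
# Flag API-key usage from source IPs the key has never been seen on before,
# then "deactivate" the key. Here that's just a set mutation standing in for
# a real cloud API call.
known_ips = {"AKIA...ALPHA": {"10.0.0.5", "10.0.0.6"}}  # hypothetical key ID
deactivated = set()

def handle_event(event):
    """Deactivate a key the first time it is used from an unknown IP."""
    key, ip = event["key"], event["source_ip"]
    if key in deactivated:
        return "already-deactivated"
    if ip not in known_ips.setdefault(key, set()):
        deactivated.add(key)  # automated response, not just an alert
        return "deactivated"
    return "ok"

events = [
    {"key": "AKIA...ALPHA", "source_ip": "10.0.0.5"},      # normal use
    {"key": "AKIA...ALPHA", "source_ip": "198.51.100.9"},  # unseen IP, respond
    {"key": "AKIA...ALPHA", "source_ip": "10.0.0.5"},      # too late, key is dead
]

print([handle_event(e) for e in events])
```

The design choice worth noting is the last line of the loop: the response is automated, so the time from detection to containment is seconds rather than however long it takes a human to read an alert.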
Yeah.
We're also seeing security, from my perspective at least, emerge in different directions as far as you have, I don't know, a system that's designed to do one thing.
But you take a look at what its permissions are scoped for, and it has the capability to do an awful lot of other things. Now, on the one hand, there's the
first approach of, hey, how about we alert when it does any of those other things, which is great
and handy and useful, but in some ways the better approach might almost be, why don't we take away
those excess powers that it doesn't need? The principle of least privilege seems to have, in
some respects, fallen by the wayside.
And I don't think it's intentional. I think it often starts as, oh, we're going to make
it work, so we're going to start with a broad scope and we'll come back in step two and
narrow it down. But we never get to step two. It gets dropped and we move on to other burning
fires.
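The "broad scope we never narrowed in step two" problem is mechanically detectable: policies can be linted for wildcards before they ship. A toy checker over IAM-style policy documents follows; the policy contents are made up for illustration, and real tools such as IAM Access Analyzer go much further:

```python
def overly_broad_statements(policy):
    """Return Allow statements that grant wildcard actions or resources."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            findings.append(stmt)
    return findings

# The "make it work" policy that step two never narrowed down.
broad = {"Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}]}
# Scoped to what the service actually needs.
narrow = {"Statement": [{"Effect": "Allow",
                         "Action": ["s3:GetObject"],
                         "Resource": "arn:aws:s3:::app-assets/*"}]}

print(len(overly_broad_statements(broad)), len(overly_broad_statements(narrow)))
```

Running a check like this in CI is one way to make "step two" happen by default instead of relying on someone remembering to come back.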
Yeah, this is a tough one because we've sort of lived this in practice from, again, sort of previous lives where, like, if you are sort of living in this sort of DevOps world, which to me, you know, I think a lot of it is about sort of developer empowerment and really being able to actually change who the power groups within these, you know, ultimately sort of political organizations are, which is like, you know, the folks that ran hardware used to have a lot of that power because they had huge
budgets to buy big hardware pieces. And now a lot of the investment money is actually going
into the development organization and actually building software. And so guess what? The power
is going over there as well. So the sort of default, I think the default attitude for a lot
of those groups is to basically say, I should have access to everything to be able to do anything I want at any time. But like, you got to have a responsible kind of
conversation around that, which is, I really think sort of things like GDPR are really lending
themselves to saying, okay, well, especially if we're thinking about this from a data access
perspective, let's really think about sort of privacy and data privacy by design being something
that we implement at the beginning, such that, you know, we can not only limit sort of access for different people internally to different
types of data sets, which I think is just a great thing to do from a security hygiene perspective in
the first place. But it's also, you know, actually falling into this compliance standard that we need
to follow now because of things like GDPR. So this is where these are these, I think, good changes
that are happening in the industry right now
as it relates to how we're thinking about
sort of implementing new types of compliance standards.
I think there are new compliance standards.
I think they give a lot of people headaches, I think, sometimes.
But I think the intent of what they're trying to do
is good not only for consumers and sort of access to
the data around that, but I think it's also good just as a basic engineering practice to make it
so that not everybody has access to all different types of data internally.
No, I think that you're absolutely right. It's an evolving question about what the right security
posture is and how that winds up mapping
to an individual organization's needs and requirements. The hard part is figuring out
where people fall on that spectrum. And then of course, figuring out how you're going to invest
in that before you get to the point right after you really, really, really should have been
investing in this. Yeah. Well, and it's, you know, to be totally fair, it's not an easy conversation.
It's not an easy change, I think, for people to make because there are meaningful trade-offs
between access to data and speed versus privacy and security architecture or sort of responsible
security architecture. And I'm actually not in the camp of saying I'm going to dictate that it should be exactly one way or the other.
But I think at the very least, things like GDPR are forcing people to have these conversations. And it's good to just have the conversation, because let's put it this way.
If you want to go down one road and say, hey, this is a riskier path, because we're making a lot of these tools, or this data, or these systems available to more people internally than we would under a decision optimized around fewer people having that data, at least you've made that choice consciously and you've actually had that conversation internally. Where in the past, the default was just to say, look, we don't even need to have that conversation, because it wasn't something people were thinking about at the beginning. And they probably would have made different decisions on that architecture, or on those internal policy decisions, if they had had that conversation in the first place.
I think it's always hard to get buy-in.
And to some extent, a company's security posture is almost entirely going to be dictated by
how effective information security leadership is at articulating a vision
and telling a story. If we want to be cynical about it, we could even extend that to spreading
fear, uncertainty, and doubt around what could possibly happen and scaremongering in order to
drum up budget. I mean, hey, whatever it takes. It's nice to have some recurring themes in what we're talking about, and I think it comes back to this: the security teams that I've seen have way more success are the ones able to either create a culture of security that's more embraced internally, or create tie-ins with other business units internally. They're the ones that are able to show and use visibility, basically making investments into visibility around what actual attackers are doing across their systems, versus just sitting there and saying, the sky is falling, the sky is falling, we need to focus on security. And if people don't respond, they just revert into this: well, nobody ever cares about security, and we're never going to get anything done unless we have buy-in from the top. I just don't think that's an effective route to do things, and I don't think it ever will be. But being able to use data, real-time information, and visibility that you can point to, to all these teams internally, to say, look, this isn't a theoretical thing.
This is a real thing. We are being attacked in these different places all the time.
And what we're going to do is we're going to be smart about how we set up our security
programs to make it so that it doesn't hinder your job.
Ideally, it would actually help you do your job better.
But at the very least, we're going to make this stuff easy, and really understand your goals as different business units internally, to make sure that we're not impacting those goals. That is a completely different way to approach that discussion, rather than just being the folks who say no to everybody all the time.
Absolutely. Andrew, thank you so much for taking the time
to speak with me today. If people want to hear
more about what you folks are up to, where can
they find you? Yeah, for sure.
At this point, everybody's building some type of software, everybody's running some type of web application or service. All these themes that we're talking about today really fit into what we do, which is helping give you visibility over the people that are trying to impact or attack those different layer seven architectures that you have. You can come find out more at
signalsciences.com. Promise we won't browbeat you with too much vendor speak.
People will hold you to that. Thanks again for taking the time to speak with me today.
I appreciate it.
Yeah.
Thanks so much, Corey.
Andrew Peterson, CEO of Signal Sciences.
I'm Corey Quinn.
This is Screaming in the Cloud. You can also find more Corey at screaminginthecloud.com
or wherever fine snark is sold.
This has been a HumblePod production.
Stay humble.