Risky Business - Snake Oilers: Sandfly Security, Permiso and Wiz
Episode Date: October 1, 2024

In this edition of Snake Oilers we hear pitches from three security vendors:

Sandfly Security: An agentless Linux security platform that actually sounds very cool
Permiso: An identity security platform founded by ex-FireEye folks
Wiz: The cloud security giant is getting in on code security scanning

You can watch this edition of Snake Oilers on YouTube here.
Transcript
Hey everyone and welcome to another edition of the Snake Oilers podcast here at Risky Biz HQ. My name's Patrick Gray. Snake Oilers is the podcast series where we get vendors to come along and pitch their products so that we may better understand what it is that they do. This is a sponsored podcast series and that means everyone you hear
in one of these Snake Oilers editions paid to be here. So we'll be hearing from three vendors today
Wiz, Sandfly Security and Permiso. Wiz, I mean, we all know Wiz, the cloud security giant, but they've got some new products they want to talk about as well. Permiso is an identity security play, and Sandfly Security is a really interesting
Linux security platform. And we're going to start with them. So let's start with Sandfly,
a really interesting product, which is designed to detect attackers and malware on Linux machines.
So that part sounds boring,
but the interesting part is how they achieve this
without an agent.
To me, yeah, this is a far more interesting approach
than just seeing yet another EDR equivalent for Linux.
In essence, Sandfly logs into your Linux fleet
to run diagnostics at configurable pseudorandom intervals,
and it'll drop a bunch of little Go binaries on them
that collect all sorts of system information,
then bring that back for crunching.
So here is Sandfly Security's founder, Craig Rowland,
describing the product and how it works.
Sandfly basically is an agentless intrusion detection
incident response platform for Linux.
The core idea is that we never want to leave
any Linux system unmonitored.
So we can work on systems up to 10 years old,
all the way through modern cloud deployments.
We can also work on embedded systems.
So basically the Intel, AMD, ARM, MIPS, and IBM Power CPUs.
So essentially one product can run on all these systems
without deploying an endpoint agent.
This makes it not only very high performance,
but also extremely safe. We basically have a reputation of never tipping over
Linux systems, which is very important, especially in critical infrastructure that often runs Linux.
Yeah. Now, I mean, this is one of the challenges for creating like an EDR equivalent for Linux,
which is that there's so many different kernels, right? There's so many different distros,
there's so many different architectures and trying to create something that runs nicely,
that plays nice with every flavor of Linux out there is just, I mean, forget it, essentially,
right? Yeah, it's very difficult. And, you know, essentially, once you start asking questions... Someone says, hey, we do Linux EDR. Well, then you ask, okay, like you point out: what distribution, what kernel, what patches, what configuration? Some people have custom kernels that are specific to the organization. So, you know, the marketing says one thing, but once you have some questions, you find deploying an agent actually becomes very difficult, and it becomes very risky to support all those different distributions. So you just can't.
essentially, if you tried to do QA
with a traditional agent
across all these Linux vendors,
virtually all your time and money
would be spent on just QA alone.
You wouldn't have any development time.
So what we did instead
is we just got rid of the riskiest part,
which is the agent.
And by not tying into the kernel
and not doing very risky things
in terms of sweeping memory
and stuff like that,
we vastly expanded the visibility across all the platforms, but we also made the coverage,
we feel a lot more in depth in terms of what we can actually detect.
Okay, so why don't we walk through how this thing actually works, right? Because I did find this
interesting. I mean, essentially, it's like, I mean, you essentially get creds for the machines
that you're trying to monitor,
and then you drop what you call sandflies on them, right?
So why don't you actually walk us through, you know, the means of access and then actually what your tool does?
Sure.
So first of all, we chose the most ubiquitous way to access Linux, which is SSH. SSH is on virtually everything, again, from cloud systems all the way to, you know, the $30 VPN router you're buying off Amazon. So we basically access the systems over SSH. We can use credentials you give to us, we can tie into a key vault; basically, whatever it is, we can typically integrate with how the customer manages it.
At that point, we use a small Golang binary that has custom designed forensic investigation engines
that are designed just to handle Linux. And then we use a series of modules
we push over called Sandfly modules. And they are basically telling that remote engine kind of what
we want it to look for. After we process that data and get what we need, the binary essentially destroys itself, and the system is back to where it was before we even got there. This process typically takes 30 to 60 seconds every time we show up for a scan, and then we get off the box. And it doesn't really result in any type of serious performance or stability impact on that remote system.
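The scan lifecycle Craig describes, log in, stage a throwaway binary, run the requested modules, then clean up, can be sketched roughly like this. It's a minimal illustration only, not Sandfly's actual protocol: the paths, command strings and the `run` executor are invented, and in practice `run` would be an SSH channel to the target host.

```python
def agentless_scan(run, modules):
    """Run one short-lived, agentless check cycle on a remote host.

    `run` is any callable that executes a command on the host and returns
    its stdout (an SSH exec channel in practice); `modules` maps check
    names to the arguments the staged binary would run with. All names
    here are hypothetical.
    """
    results = {}
    # 1. Stage the scanner binary in a throwaway location.
    run("install -m 0700 /tmp/scanner.bin /tmp/.scan/scanner")
    try:
        # 2. Execute each module and collect its output for central analysis.
        for name, command in modules.items():
            results[name] = run(f"/tmp/.scan/scanner {command}").strip()
    finally:
        # 3. Always remove the binary so the host returns to its prior state.
        run("rm -rf /tmp/.scan")
    return results
```

The `try/finally` is the important bit: the cleanup runs even if a module fails, which is how "the system's back to where it was" stays true.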
So what sort of data are you actually using these,
you know, little go blobs to collect, right?
And what sort of permissions do you actually need
on the hosts that you're trying to monitor, right?
Because I'm guessing that's a question
in a few people's heads listening to this right now. Yeah, it is. So
essentially, we've broken Linux up into multiple components that are of interest to us from an attacker perspective. We're probably more accurately described as compromise detection. So essentially, we will focus on processes, users, log tampering detection, directory and file activity, and something we call policy-type checks. So essentially, it depends what we want to do. We're looking for processes that are acting suspiciously, maybe hiding or doing things that we know are malicious, such as, you know, how they exist on that system. We're looking at users from the perspective of: does this user appear to be maybe a backdoor, or is there something odd about that user and what they're doing?
We look at log files in terms of tampering.
A lot of people think we're looking at log files.
It's actually not what we do.
We deal with Linux malware all the time.
Linux malware goes to extensive lengths
to avoid showing up in log files.
So if you're only looking at log files,
you're gonna miss the worst part
of Linux malware guaranteed.
But malware will often tamper with a log file. So if you can see that a log file has been tampered with, that they're trying to hide themselves, that's actually useful.
And finally, the last two parts too,
we look for files and directories that are suspicious,
maybe being used for payload data
or other kinds of suspicious activities
that again can indicate that system's compromised.
Typically, we work a lot like an Ansible system, in terms of we need to have access as a regular user with some type of sudo access to get at some of these privileged areas. It's just the way Linux is. There are ways you can control what our product can do once we get on the remote system. That's probably too much to get into in this interview, but there are ways to restrict that through sudoers access and things like that. But that's
essentially how the system works. It's a very fast, automated forensic investigation and intrusion detection tool.
Now, I mean, essentially what you've described is kind of like what a human being would do when logging into a Linux system to make sure it's okay, right? Like just having a bit of a poke around, having a look: is this right, is that right? I mean, is that sort of where the concept came from? Which is like, why don't we just sort of automate the steps that a human being would take when logging into a Linux system, curious about whether or not someone else is there?
Yeah, I mean, essentially, if you could automate a really pedantic, you know, Linux security expert that never misses anything, that's essentially what we've done. But more than just the intrusion detection,
we've added some other components as well
because we also track SSH key usage
because SSH keys are frequently stolen and used to move laterally across systems.
So we want to track those.
We also have a built-in password auditor
to find weak default credentials
that are often the first way
people get onto a system.
We also added another component called agentless drift detection, which allows us to detect changes on the system that might not have been authorized. And finally, the last part: we also have custom Sandfly detections as well.
So it is almost like you have a full security team
with all these capabilities built in the product.
And we do the traditional EDR piece, but you have these other components as well, which a threat hunting team might actually want in terms of also addressing network security problems on Linux.
And I'm curious too,
how does this thing go with appliances, right?
Because another big problem for like Linux security agents
is a lot of Linux in an org
is actually some pizza box one-RU thing
that is running Linux,
but you can't really, you know, crack it open and install stuff on it. So, I'm guessing this is very case-by-case depending on the level of access that users can get via SSH to appliances, but are people actually using this to interrogate appliances?
Yeah, absolutely. Again, because we support Intel, AMD, ARM, MIPS and IBM Power CPUs, we can get onto most Linux devices, assuming they allow shell access, right? So for instance, we had some customers contact us because they found command and control servers running on their Synology network-attached storage devices. Traditionally, you could never get an agent on those devices. We can run on them just fine.
Things like Ubiquiti routers, we can run on those. We have one customer using us to watch IP cameras, again running a MIPS processor with a very tiny Linux kernel, but we can run on those just fine.
And what we're finding,
a lot of attackers actually are starting
to go after these edge devices
because they're frequently unmonitored.
As you point out,
it's almost impossible to get an agent on them.
Plus they tend to concentrate
a lot of really important data.
For instance, I mentioned the Synology NAS.
It holds a lot of an organization's important data. Or your VPN router: you're running everything through your Ubiquiti box, through the VPN. Well, I mean, all the data is being concentrated before it's being encrypted. That's a very valuable choke point for attackers. So a lot of attackers are deliberately going after these edge devices, just because they know the security is so bad and the monitoring is very unlikely to be happening there.
Now, I know you've got a couple of gargantuan customers that you can't name.
One of them is just massive, right?
So what, I mean, perhaps you could share with us the sort of scale that you can handle because
that customer that I'm thinking about, I mean, the scale is certainly there, right?
Yeah, so, you know, we have some customers with, I'll keep it vague, in excess of tens of thousands of endpoints, and then other customers in the thousands of endpoints. Generally, we attract critical infrastructure as our initial customers, just because they have a need to watch a lot of different Linux systems with a lot of varied configurations and status. And they also need to do it without risking any type of system downtime. They're the type of customers that if they go down, it would be major news. So we're very, very careful about how we approach that problem. And these customers respect that, because they understand the sensitivity of what happens if things go sideways on their networks.
I mean, I think the customer that I'm thinking about would probably best be described
as critical internet infrastructure as opposed to what we normally think about when we talk about
critical infrastructure being pipelines or ICS, right? Yeah. We've got a lot of interesting
customers who definitely, like I said, they make up a lot of what people use every day in terms of
what they consider the internet. We even have telcos as well using us. So again,
if mobile infrastructure goes down, I mean, your cell phone's not working. It's a type of thing
where it's a very serious type of business. So we kind of run the gamut of what we're protecting
right now. Now, is that a use case about security per se, or is it about detecting just general mistakes
and undesired config drift and things like that? Are people using it sort of as a resiliency tool, or is it primarily just security? We see both. So I'll give you an example.
One customer in Europe got us initially to do the EDR component. And then we released the
Sandfly, the SSH hunter, which tracks the SSH credentials. And they realized actually that
their SSH key management was such a problem. They actually used us at first to clean up
the SSH issues. And then they went back to the EDR side. And then we have other customers that
are using us in the drift detection mode, because they might be running a bunch of systems that aren't being updated or changed a lot.
So they just want to know if anything were to change.
You know, for instance, I'll bring up a Ubiquiti or Synology system.
You know, if a new process were to start in that system that you've never seen before and the system hasn't been updated, you probably should be looking into that.
Right. This could be a very serious issue.
And, you know, a lot of malware on Linux won't drop onto the file system; it'll be fileless malware.
So, you know, just doing a standard file integrity check
is not going to find some of these things.
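The drift check Craig describes, alert if anything changes on a box that shouldn't be changing, is essentially a set difference between an approved snapshot and the current state. A minimal sketch; the category names and snapshot format here are invented for illustration, and a real scanner would gather these sets remotely each scan:

```python
def detect_drift(baseline, current):
    """Flag differences between a host's approved snapshot and its current
    state. Each argument maps a category (e.g. 'processes', 'listening_ports')
    to a set of observed values.
    """
    alerts = []
    for category in sorted(set(baseline) | set(current)):
        added = current.get(category, set()) - baseline.get(category, set())
        removed = baseline.get(category, set()) - current.get(category, set())
        for item in sorted(added):
            alerts.append((category, "appeared", item))
        for item in sorted(removed):
            alerts.append((category, "disappeared", item))
    return alerts
```

On a static appliance the baseline almost never changes, which is why a single new process showing up is such a strong signal.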
So we do have customers using us for different reasons,
often all at the same time.
So they'll use for the EDR, they'll use for the SSH,
they'll have us looking for bad passwords,
and then they'll use drift detection on certain configuration systems as well, just to make sure nothing's gone wrong.
That SSH key management cleanup, was that a custom job, or is that something out of the box?
Out of the box. We call it SSH Hunter. So what it currently does is, basically, when we get onto a system, we're going to look at all the authorized_keys files that we see users use.
Authorized keys files on Linux are a notorious problem. They often get stale keys, old keys,
keys put all over the place that nobody knows about. So we'll index those and we basically
will track who's using them, where they are, how long we've seen them, and some other criteria as
well. We recently introduced a new feature too called SSH security zone. So you can actually
set up groups of systems by tags. You could say, these are our production systems and these are the 10 keys
allowed to go on those boxes. So if an 11th key were to show up there, you would get an alert.
So you know that someone has added a key to your production boxes in this example, just to show.
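The authorized_keys indexing and the security zone idea can be sketched roughly as follows. This is an illustrative toy, not SSH Hunter's implementation: the "fingerprint" here is just a hash of the base64 blob rather than a real OpenSSH fingerprint, and the zone check is a plain allow-list comparison.

```python
import hashlib

def parse_authorized_keys(text):
    """Extract (key_type, fingerprint, comment) tuples from the content of
    an authorized_keys file, skipping blanks and comment lines."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split()
        # Lines may start with options; find the key type token.
        for i, p in enumerate(parts):
            if p.startswith(("ssh-", "ecdsa-")):
                blob = parts[i + 1]
                comment = " ".join(parts[i + 2:])
                fp = hashlib.sha256(blob.encode()).hexdigest()[:16]
                entries.append((p, fp, comment))
                break
    return entries

def zone_alerts(allowed_fps, observed):
    """Return keys present on a 'security zone' host that are not on the
    allow-list, mirroring the 11th-key alert described above."""
    return [e for e in observed if e[1] not in allowed_fps]
```

In this model, the "10 keys allowed on production" becomes the `allowed_fps` set, and any parsed key outside it raises an alert.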
So we get customers using that as well. And a lot of times we do other things too, such as we'll just look at your system to see if there are private keys lying around. Like, you know what one person reported? We found hundreds of private keys lying in a world-readable directory on one of their systems, left behind by a management product. They're like, well, how did they get there? I'm like, I don't know, we're just telling you they're there, right? You need to figure out why and how they got there. Or even just unencrypted keys, this type of stuff. It's a really big problem, because what happens is the attackers will get onto a first box. It doesn't matter how they get on, but then they will search that system for credentials, either a bad password or, typically, an SSH key. And if they find a private SSH key, especially one that's unencrypted, they're off to the races.
They're going to use that with transitive trust to start moving through your network.
And then they'll keep doing that once they get on other systems.
So it's very, very important to know SSH access
and kind of what's going on there.
All right. Well, Craig Rowland, thank you so much for joining us.
The company is Sandfly Security,
and you can find them at sandflysecurity.com.
Fantastically efficient pitch. Really enjoyed it, and we'll talk to you again soon. Cheers.
Great. Thanks a lot.
That was Craig Rowland there with a chat about Sandfly Security,
a really interesting Linux security product.
And you can find them at sandflysecurity.com.
It is time for our next snake oiler now, Permiso. Permiso is an identity security play, and identity security products are really a big thing at the moment. The idea with Permiso is that it can create an identity graph from your identity directories, and then an activity graph that allows them to identify identity-based attacks and compromises. So here's Jason Martin from Permiso to walk us through their platform.
Enjoy.
Yeah, so we plug into the control plane of your IDaaS, your SaaS, IaaS, and PaaS environments.
And then we're looking at what entities are in that environment and how they're configured
to access your environment.
And then really more importantly on the threat detection side, what are they doing?
And a really important part of how we do this is read-only.
We don't want to be another vector of over-permissioned access into any of these environments.
So read-only in different, let's say in Okta, it might be via a read-only admin token.
In Microsoft's ecosystem, it might be a delegation via an app.
In AWS, it's a role that we assume that has a specific small set of
permissions we need to do what we're doing. Once we're plugged into the control plane, we create an entity graph, so we understand what is in the environment. And then we create an activity graph; these are the entities that are actually operating in the environment. The detection comes in because we correlate the identity across boundaries. So, for example, if a user is coming in via SSO and accessing a SaaS, IaaS, or PaaS environment,
we're going to try to attribute that back to the ultimate identity at the SSO layer.
If a vendor is coming in and assuming a role in, say, AWS, we're going to attribute it to that vendor.
If a user is coming in via local access into any of those layers, we're going to attribute it to that. And that allows us to profile what behavior looks like normally for that identity,
what access looks like normally for that identity. And then once we're able to do that, we're also
able to correlate all activity kind of across that layer cake of technology back to that identity in
a construct we call session. And then we inspect that session for behavior that may be exposing you to risk,
specific matches against content that we have around what threat actors are doing in those environments,
or just anomalous activity based on our machine learning and deep learning modules.
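The session construct Jason describes, walking each credential back to its root identity and bucketing activity under it, can be sketched like this. The event shape and field names are invented for illustration; this is not Permiso's schema.

```python
from collections import defaultdict

def build_sessions(events):
    """Group raw control-plane events into per-identity 'sessions'.

    Each event has a 'credential' (the role/token that acted) and an
    'action'; an optional 'via' points at the upstream credential that
    assumed it (e.g. an SSO user clicking into an AWS role). We walk the
    chain back to the root identity, then bucket all activity under it.
    """
    links = {e["credential"]: e.get("via") for e in events}

    def root(cred):
        seen = set()
        while links.get(cred) and cred not in seen:
            seen.add(cred)
            cred = links[cred]
        return cred

    sessions = defaultdict(list)
    for e in events:
        sessions[root(e["credential"])].append(e["action"])
    return dict(sessions)
```

This is the part that makes a shared role usable for profiling: the noisy per-role activity collapses onto the individual upstream user.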
Very high focus on reducing the overall alert volume,
because instead of inspecting things on an event
by event basis we're looking at the aggregate session of activity and again not just in the
one layer of the infrastructure or SaaS but across those environments. Yeah I mean it seems like this
is a direction that vendors are moving in now right which is to just do that piece which ties
it all together, which is,
you know, okay, this token over here, that's doing this thing. What's the point of origin?
Where's the authentication event that led to that token being created and then attributing
that particular, you know, token or whatever it is back to an authentication event and back to
an identity. So it sounds like that's a pretty key piece here. It is. And I think if you think historically, people would try to do this in
their SIEM and they'd be doing it after the fact, right? They know an account has been compromised,
a credential has been leaked, and now they're going to go reconstruct all this in the SIEM.
What we were focused on four years ago was: how do we pull that paradigm left
towards the events and activity that's
occurring, and how do you do detection at scale with that construct, which is non-trivial. As you know, the cloud is very, very noisy. It also helps analysts, right? When you give them an alert, now they know: what's the identity involved, what is their historical activity and historical access, what other alerts have they triggered in that environment, what entitlements do they have, what entitlements are they using, and is the use of those entitlements irregular for that entity?
Things like that.
And giving that one place allows them to quickly triage the alerts we generate.
Well, I mean, that's the machine learning bit, right?
Like it's not too hard to profile these sort of identity, these sort of accounts, right?
Like it's actually not, which is kind of like why it surprised me that it's taken so many years
for people to kind of offer this
because we saw, I think,
and I've mentioned this on the show so many times,
but we saw Rapid7 had a product for Active Directory.
I can't remember what it's called,
but they had a product that did this for Active Directory
like a decade ago, longer.
And, you know, finally we're getting there in the cloud.
But I mean, I'd imagine that
the detections that you're doing on some of these are going to be really reliable detections,
right? When you've got like a, you know, machine to machine account that all of a sudden starts
doing stuff for the first time. I mean, that's going to stand out. Machine to machine, vendor to environment, those tend to be easier to profile, even though they're high volume.
People.
But people are weird, right?
So they're out of the profile because, yeah.
Both from an access perspective, especially with remote work now being so predominant,
but also just activity.
On a given day, I wake up and I do things different, right?
Machines are programmed to do certain things.
And that's the easy part.
I mean, like, look at anomaly detection on most of the modern IDPs.
They fire on like 20% of the human sessions that occur.
I get sessions from our IDP provider that are flagged as anomalous,
even though I work out of this office every single day.
We've had to do it.
It seems easy, but the human side definitely complicates it, right?
And then if you think about, look, the modern cloud,
the difference between an on-prem world, right?
What happened in AD?
I got assigned to a group,
but it was still my user assigned to that group,
and I was operating the environment.
Not too hard to attribute what Jason was doing
in an on-prem world.
In the cloud and SaaS world,
I am oftentimes coming in through single sign-on
and then via a SAML transaction or
credential assumption in some way, I'm operating with a credential in that environment. All my
activity is tied to that credential. That credential may be shared in some of our largest
customers with hundreds of concurrent users, like in the developers, SREs, and things like that.
And so it's very hard to profile access and anomaly on a shared role or credential.
You have to be able to tie it back to the individual user, and you have to be able to join, you know, AWS's very voluminous CloudTrail logs back to a singular entity.
That's Jason who came in via Okta, clicked on that AWS role, operated for 14 minutes, did these, you know, 17 or 70 things and be able to know that that's
normal or abnormal for Jason. That's non-trivial in the cloud. Yeah. Now, another thing that you're
doing is ISPM. I actually had an interview recently with the co-founder of Spera, which was an ISPM company that was later acquired by Okta, and that's now their ISPM product. So I feel like I've recently spoken to someone about how ISPM works. But I mean,
the goal there is really to look for,
you know, accounts that are misconfigured,
accounts that shouldn't exist.
I think he gave the example of stuff
that was synced into Entra, you know, accounts that are like 20 years old that have been sitting in Active Directory and get synced into Entra with no MFA and whatnot. And, you know, you really need an ISPM product to find
those things and MFA gaps and all that sort of stuff. So I'm guessing that's what you're doing
as well. Yeah, I mean, to your point, though, we jokingly say it's like bring your own adversary
when they move from on prem to the cloud, because oftentimes they bring in not just bad hygiene,
they bring bad actors into their cloud environments.
Well, look, four years ago, we started centrally focused on threat detection. We felt like that
was the hardest thing to do at scale across these environments. As we detected breaches,
our customers would often come back and say, okay, look, I need to remediate the cause of this
breach. And that inherently always comes back to posture.
It doesn't matter if we're talking about on-prem,
IoT, Edge, cloud, the very first thing you generally
wanna do is know what's running in the environment,
how it's configured, and if it's exposing you to risk.
So we started on the threat detection side,
and now over the last year and a half,
we've focused a lot on posture.
But posture's also important for threat detection.
If I see activity from
Paul and I see activity from Patrick, but Patrick is much more vulnerable, maybe has a set of toxic
combinations that'll lead to privilege escalation or large sets of entitlements, or doesn't have
MFA configured, even though the activity is the same, I'm going to point and rate the severity
of that alert much higher than, say, Paul's. I mean, obviously you should pay attention to Paul's activity, but, you know, SOCs, right? They're doing trade-offs constantly, and they need to know that. So we felt that the marriage of posture with runtime was super important there. And then, subsequently, as we add context, right, which is like,
this user is doing something funny. And it's also one of the ones that's like over-provisioned and
like a little bit, you know, difficult to secure.
So maybe we want to pay more attention to this one.
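The posture-weighted alerting Jason describes, where the same activity scores higher for a riskier identity, might look roughly like this. The flags and multipliers are invented for illustration only:

```python
def alert_severity(base_score, posture):
    """Weight a behavioural alert score by identity posture.

    `posture` is a dict of risk flags for the identity; each true flag
    multiplies the behavioural base score, capped on a 0-10 scale.
    """
    multipliers = {
        "no_mfa": 1.5,                      # weaker account-takeover resistance
        "privilege_escalation_path": 2.0,   # toxic permission combination
        "excessive_entitlements": 1.3,
    }
    score = base_score
    for flag, mult in multipliers.items():
        if posture.get(flag):
            score *= mult
    return min(round(score, 2), 10.0)
```

So Patrick, with no MFA and an escalation path, triages well above Paul for identical activity.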
Yeah.
And then imagine how do you do proper posture and privilege analysis if you don't actually
see the atomic level activity for that entity in the environment, right?
You can have gross assumptions that a user comes in through their federated provider
and clicked on AWS.
So they're an AWS user.
Maybe you can say they used S3. But without examining and correlating that (okay, Patrick does, you know, ListBucket, but he has broad permissions to S3), it's really
hard to give, I think, really informed and intelligent, least privileged recommendations
to your customers. You have to combine the two. All right. So what sort of customers are buying
this solution? Because
I've spoken to others and they're like, well, this is mostly at the moment, you know, larger
organizations that are multi-cloud, they might be using more than one IDP and they just, they do not
have a hope, a snowflake's chance in hell of trying to do this themselves. And that seems to be where
this market is kicking off at the
moment. Has that been your experience as well? Look, I think early on what happened, nobody,
nobody believed that we needed this. To be honest, like nobody believed we needed this solution.
Three years ago, identity wasn't even in the top 10 for CISOs. Now it's like number two,
number one, arguably. We plugged in to environments that were breached. So once they got breached, they realized they had the CrowdStrikes of the world and they had the Wizes. They had SIEMs.
They had all these other tools, but they were still getting breached to this identity vector.
And then the secondary profile that was buying us was breach adjacent, right?
Your neighbor just got popped by Scattered Spider, LUCR-3.
Now you're worried about it.
And they were a good company.
They had all the right tools and processes, you respected them, they still got breached. What did they do? They bought Permiso. Okay, so that was breach-adjacent buying. Now, as the market's evolving, we're starting to understand more around proactive purchasing. So I would say on the proactive side, our customers tend to be
very mature on their risk assessment process. And they look at
overall, what are the set of controls I have for identity risk? Okay, I have single sign on here.
I understand my inventory on the machine side. I don't really have anything. I need to start
looking there. And they kind of quantify that risk annually versus like reacting to the market
or reacting to what's happening around them. And we're starting to see more interest in that way.
And how do I quantify that? Inbound RFIs, inbound inquiries,
less of us having to go out
and constantly educate the market
and the market's starting to kind of come to us.
A lot of times I think you accurately described it
as fairly complex environments,
fairly large, you know, Fortune 1000,
several, several thousand employees,
has a security operations center, is fairly sophisticated from a security governance perspective, multiple
IDPs.
Sometimes we see four, a customer IDP, multiple enterprise workforce IDPs.
They're both on-prem and in the cloud.
And in the cloud, they're multi-cloud.
And then anywhere from, you know, 50 to 70 SaaS applications deployed, and they can't answer... Snowball's chance in hell? Yeah, it's probably a good analogy there. All right. Well, Jason Martin, thank you very much for
walking us through Permisso. It's great to get an update on what's happening in the identity space,
which is, you know, truly a category that's kicking off at the moment. All really interesting stuff. Thanks again. Thanks for having me, Patrick. Appreciate it.
That was Jason Martin from Permiso there with a pitch for their identity security solution, and you can find them at permiso.io, P-E-R-M-I-S-O dot I-O. It is time for our third
and final snake oiler today, Wiz.
Now, you know, Wiz is a relatively new company, founded in 2020, but it's grown rapidly to become worth roughly $11 billion, and Wiz's rise, I guess, you know, can be best described as meteoric.
But, you know, I've never actually been that clear on what it is they actually do. So when they reached out and wanted to book a snake oiler segment, I thought this was an excellent opportunity to learn something.
Jung Lu is a senior director of product marketing at Wiz, and she joined me to give us the condensed pitch for the overall Wiz platform, and to have a more detailed conversation about the code security, excuse me, the secrets discovery product that they've just launched. So here she is.
So the way to think about Wiz is: we are a cloud security platform
that helps our customers gain full understanding of everything that is running in their cloud
environment. And then we help them to prioritize the most critical risks. So not only looking at
vulnerabilities, which an organization might have millions of, not only looking at misconfigurations, which they also might have millions of,
and not only looking at things like sensitive data, which might be all over their entire
environment, but really helping organizations uncover the full attack paths into their cloud
environment that if exploited would really lead to significant business impact.
So imagine things like a vulnerable container that is publicly exposed to the internet with a known exploit
that then has excessive permissions that would allow an attacker to move laterally in the environment
and find things like sensitive data within the environment.
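The attack-path idea described here — chaining exposure, exploitability, permissions and data into one prioritized finding — can be sketched as a toy graph search. To be clear, everything below (node names, risk labels, the graph shape) is invented for illustration; it is not Wiz's actual data model or API.

```python
# Hedged sketch: toy attack-path prioritization in the spirit of what is
# described above. Nodes carry the risks found on them; an edge from A to
# B means "an attacker on A can reach B" (network exposure, assumed
# roles, and so on). All names here are hypothetical.
RISKS = {
    "web-container": {"internet_exposed", "known_exploit"},
    "app-role":      {"excessive_permissions"},
    "customer-db":   {"sensitive_data"},
}
EDGES = {
    "internet":      ["web-container"],
    "web-container": ["app-role"],
    "app-role":      ["customer-db"],
}

def attack_paths(start, goal_risk, path=None):
    """Yield every path from `start` to a node carrying `goal_risk`."""
    path = (path or []) + [start]
    if goal_risk in RISKS.get(start, set()):
        yield path
    for nxt in EDGES.get(start, []):
        if nxt not in path:  # avoid revisiting nodes (no cycles)
            yield from attack_paths(nxt, goal_risk, path)

# A complete path from the internet to sensitive data outranks any
# isolated vulnerability or misconfiguration on its own.
for p in attack_paths("internet", "sensitive_data"):
    print(" -> ".join(p))
# prints: internet -> web-container -> app-role -> customer-db
```

The point of the sketch is the prioritization logic: millions of individual findings collapse into the handful of full paths worth acting on.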
And then from there, what we help organizations to do is from that very prioritized risk,
then we help organizations to remediate it rapidly.
And we do this by actually democratizing security out to the teams that are responsible for
that part of the infrastructure, right?
Because if security finds an issue in cloud,
they oftentimes are not able to actually take the action. You need to find the part of the business
that owns that infrastructure to be able to remediate it. So we send those directly out to
our customers or directly out to those application teams. And from there, they take the right action
and they're able to remove that critical issue and proactively reduce the attack surface in cloud.
So basically, you've got vuln scanning, you've got misconfiguration scanning, you've
got an understanding of permissions, and you've got an understanding of what's available.
So you've sort of got elements of CIEM, elements of vuln scanning, elements
of configuration scanning and whatnot, all in one.
That's the platform pitch, right?
Exactly.
It's all of the risks layered on top of the actual graphical understanding of your cloud environment.
Okay, right.
So that is the core platform.
Now you are making this push to be like a multi-product company. One area that you're pushing into,
and this is interesting because one area you're pushing into is the code scanning part of it,
right? So looking for vulnerability, you know, shifting left and trying to identify vulnerabilities
before they're deployed. Why would someone want to use Wiz's version of this when this is already
an existing market with existing tools? You know,
what's the advantage of trying to use a tool that's created by a cloud security platform like
Wiz? Yeah, so I think taking a step back first, I think there's really two sort of technology stacks
that we've had in place as we think about this landscape, right? There's been the cloud security
tech stack, which looks at what's actually running in the environment. And then oftentimes you have then
the application security stack that you're using to have a better understanding of the application,
a better understanding of the code, as well as the entire pipeline that builds the cloud
environment. And they're oftentimes not only siloed technologies, but run by different teams as well.
And there's not enough communication or collaboration between them.
And so as we think about cloud, right, we have amazing context of what's truly important in the environment.
So we have a great way of prioritizing.
However, actually remediating is harder because you have to trace it back to the source.
You have to trace it back to those teams.
And now you're much further removed from the developer actually in the act of coding.
On the flip side, on the AppSec side, you have that amazing feedback loop with developers,
right? You're finding things early, you can bring it back to them before it truly impacts the
environment. But at the same time, it's difficult then to prioritize because if you say
scan your code repository for secrets, you'll find many of them in there. But you don't know
which ones are truly important to fix because you lack the context of what's actually running
in the cloud environment. So it's difficult to say this particular secret, if you expose it,
this will lead to an actual admin role in your cloud
environment. And this admin role then has access to sensitive data that is in your cloud as well.
So it's difficult to track that entire attack path. So what we're trying to do with Wiz is
really drive that linkage, right, and actually drive the convergence of these two different
stacks. So it's no longer about one set of risk scanners that run in code
and a completely separate set of scanners that run in cloud,
but really the same set and really the ability to write the same policies
that enforce what good looks like in your organization.
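The code-to-cloud correlation being described — a secret found in a repo ranked by what it actually unlocks in the cloud — might look roughly like the following. The inventory dicts and helper names are hypothetical, purely to show the idea, not how Wiz actually represents this.

```python
# Hedged sketch: rank leaked secrets found in code by the blast radius
# of the cloud identity they map to. All data and field names invented.

# Findings from the code-side scanner: secrets discovered in repos.
code_findings = [
    {"repo": "payments-api", "secret_id": "secret-a", "cloud_key": "key-1"},
    {"repo": "docs-site",    "secret_id": "secret-b", "cloud_key": "key-2"},
]

# Inventory from the cloud side: which key maps to which role, and
# whether that role can reach sensitive data.
cloud_keys = {
    "key-1": {"role": "admin",          "reaches_sensitive_data": True},
    "key-2": {"role": "read-only-logs", "reaches_sensitive_data": False},
}

def prioritize(findings):
    """Sort findings so the most dangerous leaked secret comes first."""
    def severity(f):
        key = cloud_keys.get(f["cloud_key"], {})
        # Tuple of booleans: admin role first, then data reachability.
        return (key.get("role") == "admin",
                key.get("reaches_sensitive_data", False))
    return sorted(findings, key=severity, reverse=True)

for f in prioritize(code_findings):
    print(f["repo"], "->", cloud_keys[f["cloud_key"]]["role"])
# prints the admin-capable leak in payments-api first
```

Without the cloud-side table, both findings look identical to a code scanner; with it, one is an admin-path incident and the other is noise.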
Well, I guess it's about, you know, if you've got that unified,
you know, because Wiz is well known as like the cloud dashboard company.
And I guess being able to light up a dashboard with some sense of context and prioritization around some security problem that's been committed into code, you know, you're kind of taking, you know, you're notifying people outside of the dev team at that point, right?
People who might be more empowered to act, shall we say?
So I think it's both, right? I think we're increasingly seeing developers themselves
want to take the right action, just oftentimes lacking the tools in order to do it. So we
actually, when we think about the experience for code, we build it for the security team itself,
which of course wants the UI. It's an extension of the interface that they already know.
But then when it comes to developers, it's actually more about meeting them in their particular workflows. So let's take that
example of an exposed secret that leads to an identity in the cloud. We actually want to provide
that feedback directly to the developer as early as we can. And that's within the IDE itself. So we
can say, hey, this line of code that you just wrote, if you deployed
it, it would lead to a secret or lead to access in the cloud environment. So hang on, hang on,
just let me, sorry to cut your flow there. But that's an interesting idea. Because it's one
thing, isn't it, to put an alert in front of a developer that says you're about to do something
bad, versus you're about to do something bad, and here are the precise consequences of that. I mean,
it seems like that's what you're trying to get out there. Exactly. I think the problem has never been
that we have information for people. In fact, I think we have too much information, right?
It's really the context of what truly matters that has been lacking. And we really think that
because we have the cloud context for what's actually there in cloud, we can bring that back to developers and it gives them the confidence
to make changes. Because otherwise, if you just say, oh, you exposed a bunch of secrets,
like what am I really supposed to do with that? Now, you've talked a lot about exposed secrets.
What other sorts of issues will this, you know, code scanning product from
Wiz identify? Is it going to pick up on all your usual sort of bad developer issues where, you know,
you whack them on the nose with a rolled up newspaper, that sort of thing? Well, we definitely
don't want to do that, right? We're trying to improve the collaboration between developers
and security teams. But the best way to think about it is really every single scanner
that we run in Wiz for cloud, we're extending into code as well. So looking for sensitive data,
we can look for it now within the repos, vulnerabilities, same thing. And actually,
the other thing that's really interesting is the extension of what cloud security posture
management even looks like. Because as you have all of these tools, like let's take GitHub,
for example, there are a lot of different configurations that you can make to GitHub itself. And so we're
actually also extending our configuration checks into the tools that you're using within your
pipeline. And I want to give one more example, which I think is really interesting. And hopefully
it's not too much PTSD for folks on this call, but I think many of us remember
the Log4J incident, right?
And so organizations were immediately looking for where do I have Log4J in my environment?
And because of the cloud now back to code traceability, we can actually help organizations
not only find which ones exist in their environment, but then also trace it all the way back to
the source.
So let's say I found log4j running in a container image that I have running in my cloud.
I can actually then trace that container image back to the registry.
I can trace it back to the Dockerfile.
I can trace it back to even the commit. And what Wiz can do now is
actually create a one-click pull request that gets it back to that repo where we found that it was
introduced and gives the opportunity for a developer to say, hey, yes, I want to fix this.
I want to patch this and then click that merge button. Now, it seems like it might be too early.
We had this conversation before we got recording,
but it might be too early to call this like a market consensus
on how this sort of stuff should be done.
But it's probably worth noting that some of the companies
that are better known for this sort of code scanning stuff,
they're starting to build right as you build left.
So it seems like
there is a view forming that in order to make this stuff as useful as possible, it needs some sort of
visibility beyond just the point that code is committed, right?
Exactly. I think that's really the key is the context, right? And giving that same context
to everyone so that they're able to take the right
actions. So I think, you know, some folks, as you mentioned, are going left to right,
we're doing what we call middle out, which is from the center of what's actually running,
and then shifting that left. And I think ultimately, what we hear is the practitioners,
but also CISOs just saying like, hey, it doesn't actually make sense anymore to run the same set of risk assessments in two completely separate places and have two
different teams then taking action on them, right? It's just not an efficient workflow for anyone
that's involved in cloud. And it's actually slowing down our ability to move in cloud.
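The Log4j example earlier — tracing a vulnerable package in a running container back through the registry and the Dockerfile to the commit that introduced it — might be sketched like this. Every record, field name and identifier below is invented for illustration; the real product presumably builds these links from SBOMs, registry metadata and git history.

```python
# Hedged sketch: cloud-to-code traceability. One chain of provenance
# links per running container image, with hypothetical values.
provenance = {
    "img-sha256:abc": {
        "registry":   "registry.example.com/payments",
        "dockerfile": "services/payments/Dockerfile",
        "commit":     "9f2c1d7",
        "repo":       "github.com/example/payments",
    },
}

# Cloud-side finding: a vulnerable log4j build seen in a running image.
finding = {"package": "log4j-core-2.14.1", "image": "img-sha256:abc"}

def trace_to_source(finding):
    """Walk a runtime finding back to the repo, file and commit that
    introduced it -- the information needed to open a fix PR."""
    link = provenance[finding["image"]]
    return {
        "fix_package":     finding["package"],
        "open_pr_against": link["repo"],
        "file":            link["dockerfile"],
        "introduced_in":   link["commit"],
    }

print(trace_to_source(finding))
```

Once that chain exists, the "one-click pull request" described above is just opening a PR against `open_pr_against` that patches the package in `file`.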
Now, we've got a couple of minutes left. So I do want
to briefly touch on another product that you've launched, which is more about infrastructure
monitoring. So this is something that, I believe, started off monitoring Kubernetes stuff,
but now it's like, it can be across different technologies and whatever. And same thing again,
trying to have that unified view of actually what's going on, you know, detecting
lateral movement, things like that, you know, weird, funky processes spinning up where they
shouldn't be. That's the basic idea. Yes, exactly. So in cloud, you can't remove every single risk.
So you do need to have that real time threat monitoring in place. And I think what's also
interesting about cloud and sort of why all of these products make sense together is the fact that whether you
find a risk in the cloud environment that you want to remediate quickly, or you contain an incident
in your cloud, and you want to then find the root cause of it so that you're able to remove that
entire class of issues in the future, it all comes back to really understanding and being able to
correlate from the code onwards,
right? Because that's how we're going to actually continuously improve the entire operating model
and make sure that we're not just playing whack-a-mole in the production environment,
but rather really helping our organization to remove these classes of issues.
All right. Well, Jung Lu, thank you so much for joining me to walk me through all things Wiz.
I think, as I said at the intro there, it's a company that obviously we all know about, but I haven't known enough about it, really. So it's been great to get a bit of an education. A pleasure to have you. Thanks.
Thank you so much, Patrick. Jung Lu there from Wiz, and you can find them at wiz.io, and that is it for this edition of the
Snake Oilers podcast. I do hope you enjoyed it. I'll be back soon with more risky biz for you all
but until then I've been Patrick Gray. Thanks for listening and watching.