Risky Business - Soap Box: Making security tech more people friendly
Episode Date: August 12, 2024. In this sponsored Soap Box edition of the show we talk to Proofpoint's Chief Strategy Officer Ryan Kalember about making security tech more people-centric. We often talk about how we can use signals from users to drive some of our security tech. But what about using our security tech to drive user behaviour? Ryan thinks there are some opportunities here, particularly around identity security.
Transcript
Hi everyone and welcome to this Soapbox edition of the Risky Business Podcast.
My name is Patrick Gray.
For those of you who are unfamiliar, these Risky Business Soapbox editions are wholly
sponsored, and that means everyone you hear in one of these is paid to be here.
And today we are going to speak with Ryan Kalember, who is the executive, no, you've
had a title change, haven't you?
You're not the EVP of Cybersecurity Strategy. You are the...
Chief Strategy Officer.
Chief Strategy Officer.
There we go.
It's a way to say the same thing with fewer words.
We always appreciate that.
So, yes, Ryan is joining me now.
He's the Chief Strategy Officer for Proofpoint,
which is one of the largest companies in the security space,
does an ungodly amount of revenue,
has a zillion customers,
best known probably for its email security. But over the years, Proofpoint has kind of
diversified and does all sorts of things now, including a thriving DLP business.
But Ryan, the reason you're joining me today is to talk about how security technology broadly should really do a better job of interfacing with
people right for a few reasons first of all because people are an excellent source of information
for security technology uh you know because they can do things like report weird things answer
questions and whatever but also you know what you're going to posit is that security technology should be a really good source of information for people as well.
And that's not really how things have been engineered traditionally.
So, yeah, let's kick it off.
Yeah, I think my relationship with the security technology that protects me every day is sadly kind of the same as it's been for decades, right? I will occasionally see our
VPN product, or I guess I should call it a zero trust network access product. Our EDR product is
frankly no different in terms of how it affects my personal life as the one I was using two decades
ago. And ultimately, I don't really learn anything from those security technologies, even if I do
something wrong. And so ultimately, I do think that if we are going to continually be faced with stats like the
Verizon DBIR every single year, the human element a central feature of the vast majority of things
that go wrong, why don't we actually change the user experience of security tools that do in fact
interact with the end user, to both make the tool
smarter and make the people smarter at the same time? I mean, I think this is one of those things
that like immediately, you know, there'd be people listening or watching this who would be saying,
yes, a hundred percent, you know, right on. The problem is that's hard, right? Because when you're
trying to communicate anything, even halfway technical to a user, like let's start with that
example, right? Of trying to uplift users.
You know, I just think back to every error message I've ever seen on a computer, right? Not just
security stuff. And they, most of the time they barely make sense. You know, an unexpected error
occurred. It's like, gee, you know, thanks. That's, that's fantastic. Or they might maybe give you an
error code that you can Google or whatever, but you know, it's rare, even in general computing, when you get some sort of error or message out
of a computer that it makes sense to even people who are in technology, let alone end users.
But I guess this is kind of reinforcing your point, which is maybe, maybe this is something
we should fix. Absolutely. I think a lot more of the normies out there know 404 than 403.
But I think the more interesting framing here is just sort of, if you're a cybersecurity organization, and you're running a whole set of tools, you're doing all this work to protect
these people day in and day out.
And your interaction with them is training that you assign them once every six months,
maybe once every year.
Maybe you occasionally try and send a simulated phish or do the fire drill style of assessment
of phishing readiness or phishing education.
And that's the only thing they ever see from you.
Apart from, you know, maybe you speak on the all hands every once in a while, you're missing an opportunity, not only to give people real time feedback in terms of what they're
doing that can potentially create risk, but also your own brand with every single person
that you're supposed to protect is going to come across fairly weird, right?
It's not the sort of thing that should be, I think, the face of security to the typical user.
Yeah. I mean, it just sort of reinforces the idea that technology staff are sort of basement
dwelling weirdos, right? Who just surface every now and then to attack you with a simulated phish
and, you know, ding you when you click on the link and then disappear again for another six
months. Probably not the way. Exactly. And then the corollary of that
is that when there are cybersecurity people whose goal it is to focus on the human element,
yeah, there's some really cool social engineering pen testers out there that do get a lot of
respect. I would say the typical security awareness professional does not get that same
level of respect because they are working on
issues that are squishier, that are in between the keyboard and the chair, and that don't really have
the support of the vast majority of security tools that frankly do touch the end user and could think
about that relationship differently. Well, I'm guessing that's where we're going with this
conversation is looking at how the actual technology, the software that we use, the tools that we use
could actually play a role, right? Absolutely. And I think we're obviously in a position where
our entire lifeblood, what we're trying to secure is human collaboration, right? Sometimes that's
malicious human collaboration, but it's human
collaboration. You know, email is an amazing place to obviously launch an attack. It's an amazing
place for a user though, to tell you what they do and do not understand about the potential security
risk of something that they encounter. And if you put email together with web, you start getting to pretty much every place a user can, one, get in trouble,
or, two, learn something. Yes, on the DLP side of things, we do some nice little user nudges,
like, hey, you just plugged in a USB we've never seen before. It's not how we transfer files around
here. Obviously, you can get some of those long tail things too. But when it comes to how users
do their jobs day in and day out, get attacked and potentially expose data, it's pretty much
all email on the web. I mean, I think that's one area where the DLP stuff I've seen do that sort
of nudging. It's the one area where I think, hey, that's kind of useful. You know, where you can see
people doing risky stuff, you know, this might be through a
browser plugin or something, you know, see them uploading huge zips to Dropbox or whatever and
say, hey, excuse me, probably not, probably not really in line with corporate IT policies there,
you know? Yeah. And that along with, I think, generative AI web front ends has been the killer use case for us so far. It's certainly where we've seen a ton of adoption, right? It's one thing to give somebody a training module on correct use of generative AI where, yeah, sure, you can use ChatGPT to help you with travel plans or search for a recipe or even maybe rewrite something that's not sensitive, but please don't use it to debug your code, right?
There's a variety of things on the spectrum that, again, a web plugin is a fantastic place
to actually do that for, but it's also a way to learn about kind of how that human is using
technology and potentially introducing risk.
And the vast majority of the time, I think you also end up doing a bit of case deflection,
right?
That's not a DLP alert that somebody in the SOC has to handle.
You should be able to get the user back onto the garden path in a relatively straightforward
way, especially if you explain it in a way that that user can understand, and then basically
register that, assign that to that user's level of risk, and move on.
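That flow (nudge the user in plain language, bump their risk score, and skip the SOC ticket) might be sketched roughly like this; the policy domain, threshold, and wording are invented for illustration, not any vendor's actual logic:

```python
from dataclasses import dataclass, field

# Hypothetical policy: the one domain sanctioned for large file uploads.
SANCTIONED_UPLOAD_DOMAINS = {"files.corp.example"}

@dataclass
class UserRisk:
    score: int = 0
    events: list = field(default_factory=list)

def handle_upload(user: UserRisk, domain: str, size_mb: int) -> str:
    """Nudge the user and record a risk event instead of raising a SOC alert."""
    if domain in SANCTIONED_UPLOAD_DOMAINS or size_mb < 50:
        return "allow"
    user.score += 5
    user.events.append(("unsanctioned-upload", domain, size_mb))
    # The message the user sees: plain language, not an error code.
    return ("nudge: that's a large upload to a service we don't use for "
            "company files; please use files.corp.example instead")

u = UserRisk()
print(handle_upload(u, "dropbox.example", 900))
```

Nothing here reaches a SOC queue; the event just feeds the user's risk profile, which is the case-deflection point being made above.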
So there's a couple of obvious examples there.
Do more obvious examples present themselves as you sort of sit down and think about this,
or do you need to get a little bit craftier in your thinking when you're thinking about
how to uplift users, train them, teach them when it comes to other types of security technology?
Because I'm just wondering how a CrowdStrike or an EDR suite or something like that is going to get to a
position where it's sort of uplifting users instead of just annoying them. And indeed,
before we got recording, you know, you mentioned to me that there's a fine line between being
helpful and being Clippy, right? And no one wants to be Clippy.
Although maybe weirdly in 2024, Clippy seems cool again.
So I don't know.
Maybe somebody wants to be Clippy.
When Clippy's being driven by an LLM that needs three nuclear power plants to run it,
I guess Clippy might be a little bit more useful than it used to be.
But yeah, how do you construct something useful when we get into more of those harder security use cases, as opposed to
like user behavior use cases around things like DLP? Yeah, I think some are absolute no-brainers.
One is something that has been very popular in the consumer browser world for a long time, which is
around kind of password reuse, right? Being able to recognize when corporate credentials are being
entered somewhere they shouldn't be, somewhere that's not the IDP.
Incredibly useful control, regardless of whether you even know what that site is, whether they're reusing it or maybe it's a phish kit that has never existed before that that actor crafted just for that particular targeted attack.
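A minimal sketch of that check, assuming the extension holds only a salted hash of the corporate password rather than the cleartext (all domains, salts, and passwords here are hypothetical):

```python
import hashlib
import hmac

# Hypothetical allow-list: domains where the corporate password is legitimate.
IDP_DOMAINS = {"login.example-idp.com", "sso.example.com"}

# A salted hash of the user's corporate password, provisioned at enrolment
# so the extension never stores the credential itself.
SALT = b"per-user-random-salt"
CORP_PW_HASH = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple", SALT, 100_000)

def check_submission(domain: str, typed_password: str) -> str:
    """Return a verdict for a password submitted on `domain`."""
    typed_hash = hashlib.pbkdf2_hmac("sha256", typed_password.encode(), SALT, 100_000)
    if not hmac.compare_digest(typed_hash, CORP_PW_HASH):
        return "ok"            # not the corporate password; nothing to do
    if domain in IDP_DOMAINS:
        return "ok"            # corporate password on the IDP is expected
    # Corporate password on an unknown domain: reuse, or a brand-new phish kit.
    return "warn-user"

print(check_submission("evil-phish.example", "correct horse battery staple"))  # warn-user
```

Note that the verdict never needs to know what the site is, which matches the point above about catching never-before-seen phish kits.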
That's an incredibly smart way to point out to the end user that they're doing something that's creating risk, even if you don't fully understand the root cause of it. The other ones that really leap to mind are very much
around kind of the creation of accounts. Like there's a new generation, I think, of thinking
around shadow IT or what we used to call shadow IT. Gartner would have you rename it business-led IT,
which I think is actually somewhat dangerous territory that makes me a bit nervous.
But identity has become really, really fragmented across all of the different things that link back
to your corporate credential that you can create as an end user with access to a web browser.
So the other one that actually is fairly interesting
is understanding, okay, Ryan lives in,
sure, Azure AD or Entra ID,
but I also live in Okta.
I also live in five or six different
corporate approved web apps,
most of which are SSO, a couple of which are not.
And then maybe I've created a bunch of things with my corporate credential that live other places.
And if I get compromised, one, you've got to clean all that up, and that's a non-trivial undertaking.
And two, there's all kinds of posture things that can actually be fixed that way.
We're pretty far from your CrowdStrike example, but I do also think we're pretty far from Clippy
in terms of that being useful to a security team in an account takeover scenario or really even in an improvement of posture.
Well, it's interesting what you say, right?
Because I'm advising, as you know, a company that does a lot of work around those shadow
SaaS issues.
And it often feels to me that it feels a little bit similar in some ways. Some of these more modern sort of identity
and shadow SaaS tools feel a little bit DLP-like in some ways.
Like it's not DLP, but also when you're trying to stop people
from like signing up to accounts with non-corporate credentials
from their work machines and whatever,
that almost feels like a bit of a DLP use case
when actually it sort of sits somewhere more maybe
around ITDR, identity threat detection response.
Like this is a whole new category, I guess,
is what I'm saying.
And it's one that I think it's one of those things
that went from being a theoretical problem
for a lot of people.
So CISOs weren't spending money on it.
And then all of a sudden we're starting to see issues, right?
Like the Snowflake, the big Snowflake disaster,
which has whacked all sorts of global brands from AT&T to Santander Bank,
you know, yeah.
So I think it's gone from being something that's like theoretical
to CISOs being yelled at by their boards,
asking what their exposure is to similar issues.
They're sort of scrambling for solutions.
And that's what they're starting to look at, which is these new generation of sort of web
browser plugins that can steer users away from doing the wrong thing.
Absolutely. And I think you're absolutely right that first, Snowflake was a watershed moment here.
And second, the SaaS application and the data loss scenario is somewhat downstream of the
identity risk.
And that I think is almost always going to be the case. The one part that is also interesting,
again, I recognize I'm saying this as an executive at Proofpoint, the other place you get to find a
lot of that stuff is in email, right? In email, you find all of the signup emails, you find the
password reset emails, you find the account activation emails, you find all of those sorts of notifications that tell you that this person actually has their corporate credential connected to this service.
And at scale, it's incredible.
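That discovery step could be sketched as a handful of subject-line patterns run over message metadata; the patterns and message tuples below are illustrative assumptions, not Proofpoint's actual detection logic:

```python
import re
from collections import defaultdict

# Illustrative patterns for account-lifecycle notifications.
SIGNALS = [
    re.compile(r"(verify|confirm) your (email|account)", re.I),
    re.compile(r"welcome to", re.I),
    re.compile(r"reset your password", re.I),
    re.compile(r"your account (has been|was) (created|activated)", re.I),
]

def discover_services(messages):
    """messages: iterable of (recipient, sender_domain, subject) tuples.
    Returns {recipient: set of sender domains that look like SaaS signups}."""
    inventory = defaultdict(set)
    for recipient, sender_domain, subject in messages:
        if any(p.search(subject) for p in SIGNALS):
            inventory[recipient].add(sender_domain)
    return dict(inventory)

mail = [
    ("ryan@corp.example", "snowflake.example", "Welcome to Snowflake - verify your account"),
    ("ryan@corp.example", "newsletter.example", "This week's top stories"),
    ("pat@corp.example", "dropbox.example", "Reset your password"),
]
print(discover_services(mail))
```

The same scan over historical mail is what gives you the dormant, long-forgotten accounts discussed a little later.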
But it's when they're not using corporate credentials, that's when you lose that visibility, right?
And, you know, for a long time, like, I mean, a decade ago, we used to call this the Dropbox
problem.
Yeah.
And then what happened?
People just forgot that the Dropbox problem existed.
And now the Dropbox problem has become the Snowflake problem and, you know, all of these
other services, right?
Well, I think what happened is the CASB market exploded and part of it fragmented off into
the SASE slash SSE.
Sorry, I'm going to have to do some penance later for actually uttering those acronyms,
which again, like it did collapse with the traditional notion of web security and secure
web gateway for a reason. But we forgot about the utility of understanding what users were doing and
the data they were putting in those places, right? Which a secure web gateway is maybe not optimal
to solve for in the vast majority of cases. Again, I don't have any particular religion
about the architecture that you use to understand what a user is doing across all these places.
It's just really simple. I do actually. And I think the browser is the right place to do
it. Like, I think it's just, it's unquestionably the right place to do it. Cause you know, again,
going back to those, that 10 year old conversation about like the Dropbox problem, um, you know,
people were trying to do it with like network based solutions and all sorts of stuff and then
break and inspect and pulling stuff out. And it was just done. Yeah, exactly. So, so it really
is like the browser seems to be the right place to do it. I mean, I think, you know, Proofpoint as a major vendor is one
of the first to kind of embrace that, but now it's becoming an industry trend. It's not just
you anymore. Yeah, I think the browser is absolutely the widest net that you can cast, and it is easily
the one that makes the most sense architecturally. I will just say, we also like using email for that because it gives you a massive head start on understanding all of what's actually out there.
Because again, most of that, if it's been being done with the corporate credential,
does register in email. Longer term though- Well, and it's good for finding stuff that's
dormant as well, right? Stuff that might have been spun up a while ago and is no longer used.
And you're not going to catch that with a browser extension if the user's not logging in.
Exactly.
And a lot of that, though, to your point around dormancy,
is where you can't rely on the user nudge to go do something because it's already been done, right?
Yeah.
It's already out there and you have to decide whether you need to do something about it.
They uploaded a bunch of test data into Snowflake and then forgot about it.
Yes, exactly.
And that's where it is nice to be able to go back in time a little bit on the email
side of things to at least start to get that under control.
And then you hope that over time, either they generate new browser activity or they have
to reset a credential or something, maybe a password expires, maybe they get a notification
of some sort of activity that hopefully is not malicious that then alerts you to the fact that this thing exists.
The other part that I think is actually noteworthy and interesting, when we're really looking
at kind of, okay, what's this future of human risk look like?
It should, generally speaking, be about positive reinforcement.
Because if you are going to, again, be a form of Clippy, it's got to be useful
to the end user, and it's got to come across as something that is a valuable service that you're
doing to protect them, as opposed to a nanny or something that is scolding you or speaking a
language you don't understand, to your point earlier around the verbose error messages.
I mean, verbose is one issue.
The other one is, you know, unexpected error.
Unintelligible.
Terminating.
Okay, thanks.
That too.
Yes, non-verbose error messages can also be bad.
We should certainly give them credit for that.
But overall, it really is a tonal thing, right?
And a lot of organizations, again,
they miss an opportunity to
orient their users towards helping reduce that security risk over time and improve that posture.
And I think you're right, there's no better place to do that than the web browser. But
ultimately, it's about understanding really just how users create identity sprawl,
what data they are putting into those applications,
vast majority of which happens through web applications. Some of it doesn't, but the vast
majority does. And downstream of that, you probably also want to know what SaaS applications people are
using and paying for that are not actually under the corporate umbrella, which is a nice thing for
the CIO to be aware of. But
that's an ancillary benefit to something that ultimately I think puts a completely new face
on InfoSec to the broader company if you do it right. What do you think the benefit there when
it comes to the harder stuff? You know, what's the real benefit to the security team? I mean,
I can understand there's a benefit in that, you know, if you're teaching your users more about
like how all of this stuff works, that's positive. Is that the benefit to the security team?
I think it is. And the hardest user population to get to, you know, we have our stereotypes around
salespeople, or lots of other people where, you know, your job is in customer service and you're
supposed to click on that link that comes in.
We have our stereotypes there.
But IT people consistently create massive amounts of risk in ways that I think security
could do a lot to ameliorate.
And so if you are actually showing that your tools are smart, they recognize these
sorts of things. And then they can translate that back into a yes, no, I don't know, for the user to
say, is this activity that you're trying to do, not trying to do, or you have no idea, then that
gives them a little bit more respect on the IT side of things. Going back to that point around,
you know, awareness training being run by groups that are frankly not
at the top of the hierarchy, or at least the respect hierarchy in most infosec teams,
security training has largely been, again, that episodic thing that people try and get through as
fast as possible. And it's not contextual to the thing they just did, so it isn't teaching them something interesting. And I think the harder
controls, if you will, could be one of those really interesting places to teach somebody
about the types of binaries that are actually used in malware these days. If you're running a .js file,
you know, you're either in the minority of users who have a good reason to do that and understand what a .js file does,
or you're not. If you're doing the same with a PowerShell script, which we just saw some fairly
interesting social engineering where the browser pop-up on the malicious site basically gave you
a bit of PowerShell and explained how you would go run that, which is a clever bit of social
engineering to, again, try and get around all sorts of
controls, if you're then able to explain a little bit more about that to the end user,
you're just completely reframing that interaction to one where even if you block it, which is the
ideal circumstance, of course, the user knows nothing about it, and they're no less likely
to fall for it the next time. I mean, I remember once in one of our previous conversations around the DLP stuff, one of the most valuable things that a lot of these DLP
suites do is when someone does something they know they shouldn't be doing, when you throw
them a banner saying, hey, you're doing something that we don't think you should be doing, and they
already know that, now they kind of know you're watching and it calms
them down. But I, but I wanted to ask you another question, right? So previously we spoke about your
like Nexus people risk explorer product, very cool stuff. We did a YouTube demo on that as well. So
people can check out the risky business media YouTube page and, and find the demo. You know,
the idea is you can, you can build profiles. You can sort of have Venn
diagrams of, you know, people who are very attacked, people who are a little bit click
happy. You know, you can do this sort of profiling and then determine if certain people need stricter
policies or additional controls or whatever. I'd imagine also that this is one area where
you can build richer profiles, right? Like if you're collecting information from EDR,
well, this person loves to try to run random stuff
on their endpoint.
Like that would be valuable information
to plug into one of these user profiling models.
Is that something that like you're thinking about?
Is that something on your roadmap?
Like, and you know, how do you as Proofpoint,
you know, even begin to think about integrating signals from something like an EDR suite or a network suite or
whatever into your models? Yeah, it's a really good question.
Where we've mostly done the work there is actually with organizations like CyberArk
that provide views of endpoint privileges, really, it's kind of an
unfortunate term, because it's actually kind of close to allow listing from your beloved friends
over at Airlock. But ultimately, if you kind of understand the things that should run and
should not run from whatever source, you know, what we've tended to pass across is users are
getting attacked with these sorts of binaries.
And that is useful to know because you can prioritize it in terms of what should and
should not run on those endpoints. To your point, though, this has been a huge boon for our people
risk models because it's a much broader spectrum of vulnerable, risky type activity, right? The
credential reuse side, creating identity sprawl, I think is maybe the
least bad way to phrase that, as well as actually all of the different data that can go to those
applications if you're not blocking that activity. And then finally, connecting that to a much
broader view of privilege, right, the first big addition that we had to that was the Active Directory attack paths.
But if you're really taking a holistic look at this as a person, what does an attacker actually
get if they compromise person X? Sure, they get attack paths if they happen to be on that box with
that credential. But what do they get from all of those other SaaS applications, all of the other places that that identity manifests itself?
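A toy version of that holistic, per-person view, rolling attack volume, behaviour, identity sprawl, and privilege into one score the way people-risk models broadly do, might look like this; the weights and signal names are invented for illustration:

```python
# Illustrative signal weights, not Proofpoint's actual model.
WEIGHTS = {
    "attacked_volume": 0.3,   # how often this person is targeted
    "risky_behaviour": 0.3,   # clicks, credential reuse, unsanctioned uploads
    "identity_sprawl": 0.2,   # accounts created outside SSO
    "privilege": 0.2,         # AD attack paths, admin rights, data access
}

def person_risk(signals: dict) -> float:
    """Weighted sum of normalised (0-1) signals into a 0-100 risk score."""
    return round(100 * sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS), 1)

ryan = {"attacked_volume": 0.9, "risky_behaviour": 0.2,
        "identity_sprawl": 0.7, "privilege": 0.8}
print(person_risk(ryan))  # 0.3*0.9 + 0.3*0.2 + 0.2*0.7 + 0.2*0.8 = 0.63 -> 63.0
```

Adding an EDR-style "tries to run random binaries" signal, as Patrick suggests, is just one more weighted term.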
I talked to a security team recently from a very, very, very large organization, hundreds of thousands of people.
And they said, actually, they're now spending a majority of their time on incident response, basically tracking down all of the things that they need to reset.
All those sessions they need to kill
that are connected to that user, because there's no single place that that user exists.
And that's a really fascinating thing to be able to get a little more scientific around.
And it's a huge part of what we're working on. I mean, I don't know if you caught it,
but I did an interview with the chief architect at Okta a while ago, who basically came out and
was begging CISOs to put in their procurement docs
that the SaaS apps they were signing up for needed to implement like universal logout, right?
So that when you want to crush a user's sessions via the IDP, that just happens. And that's not
something that SaaS providers are actually implementing, which drives them nuts because they've done their job. Right. So I get it that, you know, trying to go and crush
sessions once a user's identity has been compromised, and from there
they've compromised other parts of their identity or roots of trust of that identity, and
all of a sudden they're everywhere and you can't just push a button and crush those sessions.
It's a horrible problem. Horrible. Exactly. And it's sort of a three-faceted
problem, because you have just killing it in Microsoft land, which is non-trivial,
it's doable, but it's non-trivial, and it changes all the time. Then whatever your IDP is, and Okta
does a wonderful job with what they know about, even though obviously they could do better if more
SaaS applications were playing by the rules. And then there's the things that aren't even in Okta
and are not connected to Okta, where that's something that you just have to mass password
reset, all of those things, which is, again, a hugely manual exercise right now. And it's also an inexact one, because
these SOC teams that I was talking to, they're just going through whatever version of web logs
they have, because again, they're not doing this with the browser extension yet, which,
as you pointed out earlier, is the easier way to do it, to find what might exist, and therefore
try and kick off a reset, when that user could have solved that
problem at various different points in time: the last time they interacted
with that application, when they didn't configure MFA on that application, when they created an
account ages ago and didn't connect it to Okta. There would have been many, many opportunities to preempt that risk.
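For what it's worth, the two big IDPs do expose real endpoints for the session-kill part: Microsoft Graph's revokeSignInSessions action and Okta's clear-user-sessions API. A sketch that just plans those calls (IDs and org name are placeholders, and anything created outside the IDP still needs manual resets) might look like:

```python
def revocation_plan(ms_user_id: str, okta_user_id: str, okta_org: str):
    """Return the (method, URL) pairs needed to kill sessions at the two big IDPs.
    Accounts created outside the IDP (shadow SaaS) are not covered and still
    need manual password resets, the slog described above."""
    return [
        # Microsoft Graph: invalidates the user's refresh tokens and sessions.
        ("POST", f"https://graph.microsoft.com/v1.0/users/{ms_user_id}/revokeSignInSessions"),
        # Okta: clears all active IDP sessions for the user.
        ("DELETE", f"https://{okta_org}.okta.com/api/v1/users/{okta_user_id}/sessions"),
    ]

for method, url in revocation_plan("11111111-2222-3333", "00u1abcd", "example"):
    print(method, url)
```

The short list is exactly the problem: everything not reachable from the IDP falls back to web-log archaeology and mass resets.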
But until we're pulling the end user into that workflow, it's all going to be cleaning up the
messes after the fact. It's funny, right? Because we started off this conversation talking about,
you know, what could be the use cases for doing a better job interfacing with users.
And okay, we've talked about a couple of ways, you know, EDR might, you know,
you might be able to plumb that
into some sort of user risk model or whatever.
But what we keep coming back down to
are these identity use cases
where we can grab the telemetry
and interface with the user through the browser.
And indeed, there are multiple startups now
making enterprise-focused browsers.
Island is one. They're a sponsor also of Risky
Biz. I think they're a fascinating company, actually. They're absolutely killing it at the
moment in terms of being rip and replace for VDI and for Citrix and things like that. That seems to
be really where just people can't get enough. But then you've got the other companies like
yourself. I mean, you acquired a company that enabled you to get into the browser via an extension. We also have,
yeah, numerous startups. Some of them I'm working with who do this as well. It really feels like
this is the new product category, right? This is the new technological approach to solving some
pretty serious security challenges. I mean, do you think this is, what's the market equivalent of a thought bubble?
You know, do you think this is like some sort of thought bubble or do you think this will
be enduring?
I mean, I personally think it'll be enduring.
I think this stuff is going to be everywhere in five years.
Yeah, I think it's a control that everybody will have to have regardless of, you know,
what they buy it alongside or, or whether
it gets baked into other things. Yeah, totally agree on that point. And when it comes to the
architecture here, you know, obviously, everyone has looked at the success of Island and Talon
and others and thought, well, should we build an enterprise browser? Lots of our friendly companies
that we're integrated with are also building enterprise browsers. In the end, we just wanted to meet where the user was. So we wanted to do that in as many
ways as possible. And when we really decided to take the plunge here is when we were trying to
help a bunch of our customers recover from smishing attacks. Smishing attacks are the sorts of ones
where we will probably actually understand every single piece of telemetry required to stop that attack,
but we may or may not be anywhere in between the user and the payload.
In one particular case, this is a European retailer,
their store associates were getting smished with extremely well-crafted,
custom, multi-stage, grab-the-MFA-token phishing attacks
that only opened on iOS, which was pretty fascinating. And not only did we know about
the existence of the phishing sites, we even knew about a couple of other things that had been sent
in the same campaign because we have a business unit that actually protects telcos and sees a lot of smishing, but it's not connected to the enterprise side of the
world. This one, though, was a really infuriating threat actor. They're covered by lots of different
names. They're probably out of Morocco. And they, once they're in, connected to the IDP,
enumerated every single thing they had access to. And of course, at the end of the day, because this is 2024, printed themselves gift cards
or reassigned a ton of gift cards that were already in existence,
so that they actually sold millions of dollars' worth doing this.
But nothing was particularly complicated.
Finding the mobile number of a store associate is trivial.
So that is the other thing that I think is really
fascinating here. It's a way to cover all those scenarios of your non-typical users that are not
sitting there on the corporate network with a mostly managed device that has a whole set of
controls on it. If your users are literally just in possession of an iPhone that they use every
once in a while to log into your identity provider to get email or your workday or HRMS style application, there's a great control point
that we can introduce here that helps also deal with that form of risk. So bringing the same URLs
that we already knew were bad and that intelligence to all of the other places between the user and
the payload, I think is the other reason to do this. Well, hang on, hang on. I just want to stop you there, because, like, how are you getting in the middle of this
smishing attack, right? If someone's sending it directly to me, the user, on my iPhone, like, how
are you able to interdict that? Because you can't run a browser plug-in on an iOS device, you know?
Like, what are we doing on mobile? On mobile Safari, you actually can't.
Oh, really? I did not know that.
Mobile Chrome, not so much. Obviously, the team at Google is working on an enterprise browser and
some other exciting things. They definitely want to work with the ecosystem here. But the other
way to do it, and this is frankly not really on the menu for most organizations,
but it is for a few.
If organizations know the phone numbers that belong to all their employees, maybe they are a telco.
We actually do have a really, really good way to stop smishing.
But that goes through kind of telco gateways.
So you're right.
It's not a fully solvable problem, but there is still that bottleneck of the browser.
It doesn't matter
whether it's WhatsApp or iMessage or all these end-to-end encrypted apps that an attacker can
use to get a URL in front of the user. It's going to bottleneck at the browser. Yeah. Yeah. A hundred
percent. I mean, I would think too, that there's probably some room for the IDPs to do something
there, which is if this is a mobile login, you can get your email, but you can't get this, this, this, or this, you know, surely there's a way for you to do
granular control there. It is absolutely one of those controls that you probably wish you had
when you're in the middle of an incident response about, you know, millions of dollars of gift
cards getting stolen. But yes, you're absolutely right. There are a variety of ways to solve the
problem. I do think though, that being between the user and the attack, which, again, is being
in the web browser, being an email, wherever you can, is a really robust and scalable way
to do the same sort of things that we've focused on on the email channel for a long time.
Yeah, yeah.
Well, I think this has been an interesting conversation, Ryan, because I think where we've landed is that probably a lot of this identity and browser-based stuff is where you're going to
have the best opportunity to better and more meaningfully connect with users, uplift them,
query them to provide better information to security teams, just get that communication
going a bit better. Because I think also identities, logins, things like that, they're things that are a little bit better understood
by the average user as well, as opposed to, you know, attempted executions or weird network
traffic. Absolutely. And I'll just add one more thing. This is also the future of security
awareness. If you are doing security awareness in 15 second little blurbs that happen in context that line up with
something a user just did, you have a good chance at actually changing behavior. You have a good
chance of taking the user that you talked about earlier that knows what they're doing is risky,
but does it anyway, because it's the water flowing downhill way to accomplish a particular task, that will change. And that is a
huge opportunity to rethink the episodic, you know, boring training plus phishing simulation
as a way to understand the human side of risk, but everything else absolutely spot on.
All right, Ryan Kalember, thank you so much for joining me for another Soapbox edition. It's
always great to chat to you, my friend.
And onwards and onwards we go, hopefully to a future where I guess our security stack doesn't patronize users,
doesn't confuse users, maybe tries to treat them as an asset, you know, treat them as people instead of cattle.
It reminds me of, you know, the term in aviation for passengers is they call them
self-loading cargo. And I think this is a bit of an attitude that we've replicated in security
that's not entirely healthy, but great conversations always, mate. Great to see you.
I'll talk to you again soon. Likewise. A pleasure, Pat.