The Infra Pod - Will AI agents automate security in the enterprise? Chat with Mukund (Senior Director of Product Security at Chime)
Episode Date: November 4, 2024Ian and Tim sat down with Mukund (Senior Director of Product Security at Chime) to talk about his history to building security teams and projects, and how he views the industry when it comes to buildi...ng a product security culture, evaluating buy vs build, and how AI is already changing the security practice.
Transcript
Discussion (0)
Welcome to the pod. who is currently the Senior Director, I believe, of Security Engineering at Chime. Tell us a little about yourself.
Yeah, Mukund here. I run Product Security and Security Engineering for Chime.
So all things AppSec, CloudSec, Data Security, InfraSec, all this kind of stuff.
I kind of think of myself as a generalist.
I can pretty much pick up what needs to be done and then go do that.
I've been here at Chime for about three and a half years, close to four years,
was at Career Karma for five years prior to that, doing similar stuff there too.
Yeah, pretty much been in the space of building developer tools and security tools my entire
career. So I really hate the fact that we have a lot of security tools meant for security engineers
where the crowd is not really for the security teams to go do the work.
It's really for the developer teams to go learn what can go wrong and go fix things.
So I don't like using the word security tools. I prefer using developer tools.
So that's my take there.
Amazing. So I'm just curious, how did you get into security?
I think everyone always has this sort of windy path. I particularly have a windy path into dealing with security.
It was never my design. It was sort of like windy paths. I particularly have a windy path into dealing with security. I never, like it was never my design.
It was sort of like I wanted to build developer tools
and it just happened that like security
is like this huge challenge in building software.
And, you know, and I just ended up there.
So I'm curious, like how did you get
into security engineering?
Yeah, I was a software engineer that moved into security.
Kind of similar thing.
What I was building was for crypto stuff.
So not
cryptocurrency or any of that stuff, just pure key sharing mechanisms of how we do cryptography.
I mean, how do we get confidentiality done right? So started doing that, did a lot of math for that.
Thought it was a very narrow domain, really liked the space of generally building application
security stuff. And I was like, all right, let's go do that for a while.
And that's kind of dragged me into the space.
But not sorry, the software developer did a bunch of consulting to learn the space quickly,
then didn't like the breadth of going into things.
I really wanted to go deep down into a problem.
So I was like, all right, let me just go join a traditional product company and then help
them build this.
Well, so that was my background.
Amazing.
For those uninformed, because like, you know what, three or four years ago, this concept of security engineering,
much like, to be honest with you, platform engineering, which is kind of like better, slightly defined,
sort of popped up and we started talking about like security engineers and sort of this change in the way that some companies are
approaching security.
So can you help our audience, and specifically Tim and I, better understand what is security
engineering?
What is this concept?
And how does it compare to what people would think of in terms of the traditional AppSec
or CloudSec or network security team?
Yeah, for sure.
I think you ask this to different domains and different people, you'll get different
answers.
So I think it really points out to what your company is doing and what it matters for that
perspective i would say but i think the shift came because more and more people are like you
security folks don't know what you're talking about this is not how we do things this is not
what it is you've never written this stuff you don't really know how it works you are smoking
up something which it's not uh and
blabbering on our calls or whatever it is so i was like all right i mean if the real way to learn
how and to give relevant feedback to people is to have developers on the team or different types of
engineers like data engineers or info engineers or sr engineers be on the team and help us give
relevant feedback and build relevant stuff then that's the way we go.
So it's kind of a new paradigm where we would be like,
hire software engineers and then get them to build security stuff
or get them to build developer tools for security stuff.
So that's kind of the paradigm we've had.
How is it different from AppSec and CloudSec and stuff?
So I think AppSec is typically pinpointed towards
finding issues in your application.
That's kind of what their job has primarily been.
I think it's a little more nuanced than that, actually,
because this is things you can find and things you can fix.
I think more and more AppSec teams are also pushed
for secure frameworks and stuff.
You don't think about SQL injection,
you don't think about process scripting, C-SERV anymore
because of the things that modern frameworks have had that for.
But I think the job is to find issues
and, one, try to see if you can prevent them altogether.
If you cannot, then how do you detect that?
And if you can't do either, then how do you go educate people
so that they know what the issue of this is?
So that is how the typical AppSec team looks like.
More typically, they are heavy consumers of tools
that are being built and sold out in the industry
just because they are pinpointed focus towards finding issues.
That's all they really care about
and manually going looking for issues.
CloudSec is like, okay, how do you,
or infrastructure is how do you go get your infrastructure
that is running your product or your application done securely so that no one couldn't leave something misconfigured and someone else on the internet is scrolling through your ecosystem or whatever it is.
So just looking from a perspective of how do you do this right from an infrastructure perspective.
And was it a third part? I don't remember.
But yeah, I think that's like general senses of how people tend to approach these.
So it sounds like what I'm hearing from you is like,
you know, the traditional security teams
have always been like these engineering adjacent activities.
It's like engineering is a big black box.
We're going to watch and observe what engineering is doing.
And we're going to like hold them accountable to some KPIs
or some like remediation or SLO timeframes.
And we're going to like try and help educate it. So it's really a set of wraparound services
to engineering. And it sounds more like security engineering is more focused on
how we provide a set of services that are core to our workflows and our
SCLC, that are deeply integrated with engineering. So things are cured by design.
Is that a right way to think about the paradigm shift?
Yep, for sure. And maybe adding a little more meat to it, maybe is thinking of it
as more and more people are in the microservices ecosystem where they have hundreds of services talking to each
other in the ecosystem, depending on how it is, right? All of these services will have to do a few
things consistently, regardless of what the context of the service is, right? If you have
callers, you'll have to authenticate who they are, authorize who they are. You'll
have to do your observability right, your logging right, all that kind of stuff.
And so we'll see more and more teams
building frameworks and libraries
for their engineers to go to these things.
So the security engineering team over here
is writing security controls into those libraries
and providing services to support
the rest of the ecosystem to do these things right.
So one good example we have built
is like the thing about services authentication authorization. All our engineers ecosystem to do these things right. So one good example we have built is like, think about services authentication authorization.
All our engineers need to do is define one JSON config
and they get that for free.
They don't really have to worry about how it's done
and things in that sense.
We've built the rest of the ecosystem around that.
So similarly, audit logging,
you probably must have had some point in your life
as an area where you had something sensitive
logged in your accidentally
and you spent time go cleaning it up and redacting and all that
stuff. But the framework that we provide identifies the sensitivity of your
data element and then redacts it before it's even logged. So that
way it's handled from a security fault perspective. So those are the kind of things that
the security engineering team usually works on. And more on the
infrahabby side, I would say, think of it as, I don't think anyone,
I've ever met anyone who would say, tell me that,
hey, I'm not going to upgrade that library
because I don't want to be secure.
I think it's traditionally not that people
don't want to do the right thing from a security perspective.
It's just that there's no good way of identifying
when that change has introduced a bug or has broken
the ecosystem, right?
So if you think of it as a fundamentally a deployment problem, not a vulnerability management
problem.
So if you do have a really mature deployment pipeline that can actually look at, okay,
you have Bluegreen or Canaries or whatever that is, and then your newer version of it
that has a patching, is that behaving the same way as your previous version of the app
or similar way from a traffic pattern perspective,
then it's a healthy enough deployment, then you can just go to that.
So that's kind of how we focus on vulnerability management.
We don't do a concept of patch Tuesdays or any of that stuff
because it's kind of automated.
Most people don't even realize that they're getting vulnerability management
because it's kind of automated for Most people don't even realize that they're getting vulnerability management because it's kind of automated for them.
Yeah, that's incredible. So what is it in the industry that's happened that kind of led to this
revolution? I'm sure you have opinions on it, but this is new, right? New in the last couple of
years. What was it about the other model we discussed that has led to some companies,
not all companies, but some companies choosing this approach.
You folks are engineers or have been engineers.
How many times have you had security come and tell you things
that made absolutely no sense?
All the time.
I think that answers the question in many ways.
Basically, even the security teams were tired of being in a position
where we didn't have enough context, we didn't know what we're talking about.
And we just wanted to be sure we are smart enough
and equally capable of doing the things right in the room.
Most of my team is actually not even security engineers.
They're actually software engineers and infra engineers.
They're software engineers that I've kind of
lured them into the security space.
I'm like, you know all those times that people reach out to you
and they yell at you for not doing things?
Like, you want to do this right and make other people's lives easier?
And they're like, yeah.
And that's kind of how we got people to
join the team so
I think it's working well also the
concept of like whole shift left strategy and
like a lot of companies
out there have been trying to do that for a while
and I think it's
important to call out that it is a very
expensive endeavor and
even this DevOps model is very
expensive in its own way,
I would say, but SecDevOps on top of it is even more expensive.
So it's not something that is going to overnight happen.
It's going to be a couple of years or until you actually reach that maturity
to get to that place of actual value for the investment.
There's lots of these different types of notions of engineering.
You brought up data engineering.
One of the ones that I think about a lot is platform engineering.
Can you help us understand, in your view, what's the differentiation between platform
engineering and security engineering, and what's the relationship?
In your organization, is everything platform and you're a part of it?
You're a subset of the platform team.
What's the relationship, and how have you seen that in some of your friends' or colleagues'
organizations?
How has this come to be, and how's the relationship? And how have you seen that in some of your friends or colleagues organizations? How has this sort of come to boat? And how has this relationship worked?
Yeah, I think we are kind of a sister team to a platform team, I would say. I think we are in the
same realm, but only again, purely maybe also looking at it from a security lens perspective,
not trying to solve for every single aspect of the problem space that a platform team would
try to solve for, but only from a pure security lens perspective.
And I think this is common for the other teams and companies
I've also spoken to is, what security services are you providing?
Okay, that is handling your secrets
or handling your authentication authorization,
handling your data encryption, your tokenization part.
All of these are components of the platform that
most services that are dealing with something sensitive will need or even integrating with
another service will need. And these are purely like security solutions or functions. And I think
what security engineering teams focus is like really more on building these things out and
rolling these out for the rest of the ecosystem, but also kind of relying fundamentally
on what the platform team is doing
for the rest of the services, I would say.
So we kind of piggyback a lot
on what the rest of platform engineering is doing.
And we want to make sure
wherever they are building things,
we bake in security controls and defaults in there.
I've also seen teams where the security engineering
is part of the platform engineering team, actually.
It's not really part of security also.
So that's also another thing.
I'm watching your presentations from your team
and also looking at your background from Credit Karma.
One thing I definitely noticed is that
you guys have been building a lot of the products
for you guys that you're needing for security postures.
And I think this is kind of the mandate for a security engineering team, right?
You're not just here to buy things.
You're part of actually building and figuring out what to build.
And there's just too many things to probably cover.
So I'm just very curious, how do you choose what to build?
Because there's just infinite things to build.
And why build it
from scratch, I guess. Cause I noticed you do use OPA, 7grep,
all these open source projects. You're not complete from scratch,
but you definitely are taking OPA to do decision processing,
which is not what it intended to do in the first place.
So it's almost like taking existing open source tools and then put it together
for your needs. Right.
I feel that's the pattern I've
been seeing a lot from what you've
been doing. So can you talk about how you
choose what to build and why
you decide to build it in this
particular way? Yeah, I think
it really boils down to what the risk is.
I don't mean to
drop in the word risk and not
give more clarity in there. I think it really
defines what is it that we really want to solve for?
Are we sure there are no good vendors
that have already solved for this?
Like, am I going to try solving for being a CSPM
or a CNAP vendor?
Like, there are really good vendors like Wiz, et cetera,
out there that have solved this,
have realized how to do this.
Similarly, identifying sensitive data in your
databases or your unstructured data or any of that stuff these are like solved problems i'm not really
going to reinvent the wheel all the time and if it makes sense to just throw money at the problem
then we throw money at the problem but it comes down to like what the risk of that is like anytime
you introduce a new vendor to you're kind of introducing more points of failure and introducing more security concerns into the ecosystem so i
think it really boils down to where like one solid example is i think we were trying to build out this
in time access part of how people access infrastructure and data and stuff in the
ecosystem and yeah we could you go use one of these publicly available SaaS providers or
PaaS providers and then do it that way. But most of them, when we were doing this, required like
an Okta super admin token. What that means is if they were compromised and our entire identity and
everything around that was compromised, and that was a pretty significant risk that I was not
willing to accept just because it was not a good way of provisioning
a lesser privileged token at that point.
So we just went and built that in-house.
And actually, it kind of works out cheaper
because I think even if you go buy a tool or you get a vendor,
it takes one to two resources to go integrate that
and get that deployed in your ecosystem
because you're trying to make that tool work for your ecosystem
versus when you build something.
It might still take that one to two resources,
but at least pinpointed building towards what you need to build
or what you need versus all the fluff around it.
So be pragmatic, understand what you're doing,
and then just go with that as you take.
That's kind of leading to a discussion I want to have with you.
How do you choose between buy versus build?
Because you already lead it to, I need to make sure that the security tool itself doesn't become my security vulnerability itself.
You kind of have the risk within that risk tool itself.
And also the maintenance part of it.
And so you talk about CSPM and stuff like that. What are generally, what are security products and tools
you would definitely prefer buying
and where things are not preferred buying?
Is there like a way you think about this as a team?
There's both ends, right?
There's trade-off between like getting the upside
of using a platform or tool versus we doing ourselves.
And it's not often really that simple.
It's just configuring and resources alone, right?
Yeah, for sure.
I think you hit on a couple of things,
which was like, yeah, definitely,
what's the cost of maintenance?
What's the cost of building it?
Those are definitely on top of the mind,
the risk, which I spoke about.
I also think what's the user experience of it
and who is the audience of that particular controllers,
as I would say.
So one of the tools we built was something called Overwatch, which we spoke about in
several conferences.
But basically what it is, it's an orchestration platform that basically gives just-in-time
feedback to developers while they're writing code.
So there are ASPM, SaaS tools out there, et cetera, et cetera.
But the reason why we never went with any of those is because we don't want the developer
to log into yet another tool and get the feedback from there we wanted to meet where they are and
they are on github they are on their ids that's where they are so we decided to build this using
open source stuff like some web open source and breakman and all these other like things are there
and using gith GitHub's annotations feature
to just give them real-time feedback
while they're pulling an opening pull request
or things in that sense.
We have seen that it actually really helps
get things resolved
even before they are introduced into the code.
Because I've seen this in many other places.
Even if you find all those before even they are introduced,
just the need for someone to like go log into
the new tool look it up correlate their findings and then come back and then try to understand
where it is in their context and then like no one's going to take the time it's just going to
be backlogged and you'll never get to go fix it so just even like the developer experience or the
user experience of how this feedback is given to them how noisy it can get all of those are things
that i i'm a little conscious to
before I did make that decision.
I've spent a lot of time at Snyk.
And one of the things that Snyk sells,
it sells a lot of toolings.
But I think the unique thing it sells
is this vulnerability database.
And so I'm interested,
you mentioned about building this internal platform
using a bunch of open source tools,
like Semgrep open source.
How do you think about things like vulnerability database management?
Is that something you want to build and manage yourself internally?
Because oftentimes when I talk to security engineering leaders,
they're like, the reason I pay for act-bloppy bloops isn't because of the software,
it's actually because they have this curated data feed.
And curing that data feed is a massive amount of work.
So I'm kind of curious, where's the line here in terms of how you think about even things like that?
Yeah, and I think I'm in the same thing.
I think that's a solved problem.
People do this as a business, they are doing it.
And I would rather pay for that data and that subscription.
But again, I think the problem here is not about the data part,
it's about how you introduce that to a developer
and how do you introduce that to your stakeholders and how do you represent that and give them that context and feedback.
So I think that part no one can solve unless and until you take the time to go integrate that well
into the developer work stream. And I think company to company, team to team, the developer
ecosystem is changing. Like the tools that my backend engineers would be using
are not the same tools that our data engineers would be using.
So giving the right context from the right tool
at the right point is something that you'll have
to like orchestrate yourself to get that right.
But what we are doing is just the orchestration part of there.
Like it's not really the finding the new data
or the source of like the bug or something.
That is all we're getting it from telemetry from different sources,
and then we're just feeding the right context and feedback to the people.
You really care about the integration, the reliability integration,
and the customizability of the integration, most of all,
when you think about what components you bring into your stack.
Yep.
I'm very curious, are there areas where this is an area
where it's very clear to us
that we're going to be buyers
and this is a very clear area
where this will be builders?
Or how do you think through that capacity?
I'm pretty open to reconsidering
every decision we've made.
And I think like at all previously,
there are sometimes decisions we've made
in the past that made sense
that no longer make sense at this point.
So we can go undo it or go buy a vendor or go build it in-house depending on what
it is and i think with a whole new wave of tooling that's coming now i think a lot more of those
decisions are going to be made by not just me but i think many many companies it's like do we really
need another sas vendor over here we just can we just automate this using agents or whatever
one thing i was talking about is like the CNAP or the CSPM tool,
like it just doesn't make sense.
Like we are a cloud user.
It doesn't make sense for us to go reinvent the wheel there.
We're not going to keep up with all the new products and services
that they're going to come up with and write misconfiguration rules
or whatever it is.
Like it just makes sense to just throw money at this problem.
There are things that you also have to be careful of
from a regulation perspective.
And especially if you're in a regulated environment,
actually sometimes regulation asks you to say,
you cannot do this in-house,
you'll have to get a third party to do it.
Just because there's a different person
that's not you who's evaluating your own posture.
So there are a lot of these considerations
that go into place.
But I, for one, I'm pretty open to re-evaluating all the decisions
we've made at one point just because
it may not be the right decision.
So if the right vendor came along with the right
value and that
was directionally the right
thing for the next phase
of the business or whatever's going on,
you're like, look, I'm very interested, but you
really have to meet, it sounds like your
bar is quite high because you obviously have a very high, great, good engineering team. And so it's like, it has to'm very interested, but you really have to meet... It sounds like your bar is quite high,
because you obviously have a very high, great, good engineering team.
And so it has to be better than what we build internally.
And it has to be something that we're going to spend a lot of time maintaining.
And so instead, I'd rather basically move that from a headcount cost,
if you will, to a services cost or an expense.
Okay, that makes a lot of sense.
Which is not different, really, to be honest, than platform engineering.
So for a vendor's perspective,
it's the trick for dealing with you
is come to me with like
something that's transformational
is a core component I can trust.
Now, like, what does that look like?
I know, you know,
Chime is a financial services company, right?
So in your regular industry,
incredibly high bar
to what type of things you buy,
which is probably why you have,
you index much more on buy's build, is a business.
That makes a lot of sense. If you were
to adopt a vendor, are there things that just
have to absolutely be correct for us to
consider it? An example could be, we have
to be able to run on-prem. That might be one thing.
There's no option that has to run on-prem.
I'm curious to hear how you
navigate those
different aspects of thinking through this.
I, for one i'm not
a huge fan of the whole what we have done in the whole industry together from a dprm perspective
the whole third-party risk management concept i think it's a bunch of form-filling and understanding
and lying to each other and all of us are accepting that we're lying to each other. But I think a more pragmatic approach of what it is, I think I would say is like, okay, if you are a SaaS vendor
that integrates with something to do with the data or a platform or infrastructure, et cetera,
then basic things like you've got to have a security program. You've got to prove that
on paper that you've been able to remediate your findings in your own defined SLAs.
And I'm not looking for the cleanest pen test report or whatever. I'm just like, okay,
everyone finds issues, everyone fixes it. Do you have the discipline to go fix it in your
own defined SLA? That's kind of what I'm looking for. And things like, okay, you don't sell our
member data. You have SSO for how employees authenticate to you that we don't have to worry about off
boarding and managing all that stuff. So there is like five to six very practical stuff that I
like to look for in that evaluation process. And yeah, I think it's been like a maturity over time.
Like now I think we are at a point where I can also say like, yeah, I'm going to integrate with
that vendor because they don't even care. Clearly, they don't even care about doing the right things from a security perspective.
So and other security tools, I mean, this is for all of the tools that we integrate
as a company.
So this is the same lens of what we have.
So I want to transition us to our favorite section of our podcast, what we call the spicy
future.
Spicy futures. section of our podcast, what we call the Spicy Futures.
So, sir, we want to hear what is your spicy hot take here about security overall?
What is something you believe that you don't think most people believe yet?
I think more and more security teams are doing themselves a disfavor when they keep growing their team
and taking up a lot more things under their umbrella.
I think it's really about how to keep people accountable
without necessarily taking up the accountability on yourself.
So I think in many ways,
security teams are going to get smaller in time
and really a lot more of the traditional analysts, low-level engineers kind
of things are going to be automated out and the security team size are going to get smaller
in many ways. I think there is a push, at least I've been trying to push more and more people
to think about how just because you care enough to give someone feedback or review in something
doesn't mean you are accountable for it. It still someone building their product you're just reviewing it from a security lens and telling
them what's wrong it's not doesn't mean that now i am responsible for that security like i think
we have to like distinguish that and i think there are good examples of this where it has worked out
well like if you think of legal teams just because a legal team is giving feedback on how a particular
product needs to be built and shipped out.
It's not necessarily their responsibility that something went wrong in there if the product team doesn't decide to go that route.
To answer your question, Tim, I would say I think that's more specific.
Because while the engineering functions may grow, I think the concept of a traditional security team will get smaller with time. Just because I think at one point we've always been trying to match out
a ratio to say for every 100 engineers, for every 200 engineers, there's one
security engineer. But I don't think that's going to remain the case
in a few years from now. Actually, I'm very curious because security is not going to go
smaller and smaller on the surface, right? I think actually the world of security is changing
rapidly because our whole world is changing rapidly.
Like AI, for example,
is going to make everybody go crazy soon, right?
So we're going to talk about this later,
but I'm very curious how that thing you just mentioned,
where security teams get smaller,
security engineering teams get smaller,
where more people get responsibility,
it doesn't seem to just happen naturally.
There's like a culture plus tool, but something that needs to happen
where, because I've been a developer
most of my time, I've been in a security
engineering team. Security was like a necessity,
but not truly our
responsibility fully.
It doesn't seem like the adults
doesn't let us trust us anyways.
So security teams are like,
okay, you guys are just like good enough,
but you know, not that good.
So we'll take over.
That's a feeling, right?
But then how do you enable everybody else
to take full responsibility?
Because we had to be reliability.
Now we're also doing security.
It might be too much, right?
Like how do you actually see the world
get to that point?
Yeah, that's a good question.
I think the problem spaces are going to change,
but we still have things to solve for.
I think there are a lot of things that will be automated out.
There'll be a lot of things that will be taken over by AI agents
or whatever it is out there, and that's fine,
but we will still need a security to go look for the newest set of issues out there.
So I think a lot more of the resources are going to be,
and this is true for regardless of what way of technology is coming in,
I think a lot more of the resources are going to be automated out
while being replaced with issues that you cannot yet solve for.
And I think a lot more of that, especially in the new way of things,
is not deterministic in nature.
Like if they got some of the issues with AI agents and things in that sense And I think a lot more of that, especially in the new way of things, is non-deterministic in nature.
Like if they've got some of the issues with AI agents and things in that sense, they're purely non-deterministic in many ways.
It's not like a SQL injection or a crisis scripting where you have a defined set of control flow or data flow.
It is pretty non-deterministic.
You can ask the AI the same thing twice and it might give you two different answers regardless of what your weights are and what your temperature is
and et cetera, et cetera.
So that is by itself
is a non-deterministic nature.
So now you're trying to solve
for issues that are no longer
looking for typical cross-site scripting
or SQL injection
or misconfigurations.
Like those things are going
to be automated out
very easily, very quickly
with a lot more context.
You can actually do that
much more better
than a human can do it.
But now we are going to have these kind of things.
So I think to answer your question,
how are we going to get smaller?
I think I'm not saying it's going to get smaller.
I think it's going to get smaller
because of the rest of the function is going to grow.
I think more and more security teams
are going to just remain stacked
into what the current size is in many ways.
Or we are going to do a lot more with the current size is kind of what I think
is going to be what the new feature is.
And so in order to make that, I mean, look,
if you're going to have the same remit,
same area of expanding scope of control,
but headcount is static at whatever it is,
20, 10, some percentage of total engineering,
how do you scale this? How do you scale? I mean, obvious is the answer here. How do you scale this?
Obviously, the answer here is you want amazing security vendors you can buy.
Is that the answer? How do you think about that?
I think making use of what everyone else is using is one way to start off with.
I think if you're saying,
all right, we are going to remove the cost from humans in the team
and replacing it with X number of vendors.
Now you're like not necessarily reducing the perspective that security is a cost center.
You're still going to add to the costs.
I think you've got to really imagine this to be how is security enabling the business.
And some of the things that we have tried, which have actually really worked for us, is also focusing on helping other projects
that actually help with things that the business cares about.
So for instance, we are in a consumer-facing business,
so we look at the NPS score,
or how do people even really like our products
and things in that sense.
And at one point, there was a huge downfall of like,
okay, people perceived that their account security
was not good enough.
And then we were like, okay, we partnered with
our product teams and built out a security center
in our app and got that rolling out.
And that improved the NPS part of the app
because people stopped complaining about that
and now enabled the business.
So I think more and more security teams are trying now
to move into the world of like,
okay, we're not just a cost center.
We're not just going to do a side enterprise security function.
We're also going to try to see how we can be
an enablement for the business.
So I think, yeah, back to your answer,
I think we're going to use the same tools
what many other people are doing.
If you're using AI agents to write code,
we are going to use AI agents to write security controls.
I think we will have to figure out
what the whole industry is figuring out if we're going to do it that way write security controls. I think we will have to figure out what the whole industry is
figuring out. We'll figure it out that way.
I think that's a great answer
and a good segue to my next question.
You know, we've spent
as an industry over the last few years
talking a lot about co-pilots, right?
And I have an inkling
and we've kind of seen this to a certain extent with some
things like Cognition AI, sort of this
next step being agents, right?
Or some form of agency in the sense that like a computer is off going to do something and it may or may not come back to the human, right?
And it's kind of like this now the semi-autonomous may be on its path to full autonomy, right?
And that might like from what we've seen with self-driving, that path is probably pretty ridiculously long.
I'm really curious to hear from your perspective, even if we just think about agents in terms of how you enable your team to bring AI into your workflow and potentially bring agents in.
What do you think about the security considerations that you have in making those decisions?
I think co-pilots are one thing that's basically just a significantly better autocomplete. And so the security model is really not that much of a concern.
I mean, to a certain extent there is,
but if it's sending the data telemetry out of your stack,
then that's obviously a concern.
But ignoring that problem for a second,
broadly speaking, the security ramifications are pretty simple to understand.
I'm really curious about the agent workflow,
which seems to be the 5, the next 10x step
in terms of developer productivity if copilot is for a 50% increase.
So I'm curious, how do you think about what are the security ramifications?
What do you think needs to be in place for that to actually be adopted?
And do you think we get there?
Yeah, I think the one thing I'll call out is human in the loop.
Everything that you spoke about right now has a human in the loop.
And I think that's the only thing
that keeps me go to bed
and not be all the time awake
and scared shit about what's going to happen.
So I think that is a control
that people are going to rely on
because like I said,
it's a very non-deterministic space
where we don't even know
how to go look for some of these issues
and programmatically go fix them. Sometimes you got to have agents solving looking for what the
agent is doing i think supervisor agents are pretty common now but you're trying to like
there's one process that's looking at everything that the agent is trying to do and trying to see
if it's trying to do something that's malicious or bad intent or whatever it is it's a nice space
i would say because again the intent the context all of those matter and what you're trying to do also.
Because even with like chat GPT, when it came out,
people are like, oh, you should get this something out there
that's always looking at people's chat.
And I'm like, what do you want us to look for?
Like, do you want to look for if someone is exposing secrets?
Then, okay, we can look for that.
But I think what really boils down to is who's the user
and what context are they using it for, right?
If they're using it for a social media specialist that's trying to come up with funny tweets, I'm not worried about secrets in there.
I'm worried about, is there any slur?
Is there any, like, can this tweet be misinterpreted?
Like, are you looking for, like, there's a whole different class of risks that are not related to purely what security engineers are looking for right now, I would say.
So depending on the context and who is using it and what is using it, some of those can be harder to look for.
The approach we have taken for the most part is we're trying to like operate a lot of the back office operations where there's not necessarily human always in the source of the prompt or being a way
to manipulate what an AI could do. Simple things like automating basic jobs, data gathering,
data enhancement, transcribing, document clearing, all those kind of operations are what we're using
a lot of the AI for because you can hard code the prompt, you know what the exact data input is, things in that sense.
So I think a lot of people are starting from there.
But I think the biggest control we have right now is the human in the loop.
But I think anyone who's trying to automate the entire workflow end-to-end,
that is where one would have to be really, really careful about
to see how do you actually add delimiters and point-in-time checks
during the workflow
to say that something that's happening
is not something that could go take down the ecosystem.
Yeah, there's obviously a very different bar
between you brought this up,
between the agent helping someone do social media posts
and the agent that potentially is inside your SDLC
helping you get better log compression or something.
I think a lot of the developers,
I mean, our employee productivity tools
are pretty okay for us.
I think the bar is pretty low.
You can go play around, do things
because it doesn't necessarily have a direct impact
to the product or the app directly
because it still goes through a code review.
For instance, if you're doing a co-pilot,
you're using something to generate PRDs or TDDs
or whatever it is,
then you have your other peers reviewing it.
One thing we've realized is a lot of these agents
are better at critiquing than at generating stuff
because it tends to list how to snick less
and tends to give more meaningful stuff.
So the way we have been using it is
how do we get these agents to actually go critique
someone's code or their design or
their Figma designs, et cetera, before they actually open it up for their peers to go review.
And also feed in all the requirements right there is like, this is what you would be expected if
you were doing this particular operation. So are you thinking about that? Those kind of things.
So the bar is pretty low. I mean,
in terms of like, for any one that's using it for a developer productivity, but I would say if you're
trying to like, have a new Gen AI chatbot in your application that members can talk to, the bar is
going to be really, really high, I would say. So that is where we would be a lot more involved,
a lot more working with them to see what controls.
And more often than not,
people are now just trying to contain the issue.
They really don't know how to 100% solve for it
because, again, it's not a dependency problem.
So the more people are thinking right now
is just like, how do we contain it?
Awesome.
Well, I think that's all the time we can have
to squeeze all the security questions we
could but uh awesome to have you on do you have anywhere you know listeners like to
let's hear more from you whatever where can place us to find you uh social media whatever
yeah i'm on i'm on twitter that's the only social media i really use um or x uh that's what they're
calling it now um i go with sposh dataata. I can leave that to you, Tim,
and you can put that in the notes.
And yeah, I'm also on LinkedIn
for reasons that we all need a job
and we got to be on LinkedIn.
Yeah, be quite gently.
We're still there.
Yes.
Well, thanks a lot.
So thanks for being on the pod
and hope you enjoyed it.
Thank you.
It was great talking to you.
Bye-bye.