The Infra Pod - How is AI changing the security in modern fintech? Chat with Branden (Head of Infosec) from Mercury
Episode Date: June 16, 2025
In this episode of The Infra Pod, Tim (Essence VC) and Ian (Keycard) are joined by Branden Wagner, Head of Information Security at Mercury. They delve into the nuances of information security, discuss the interplay between human and technical controls, and explore the evolving role of AI in security. Wagner shares insights on fostering a collaborative culture and building secure systems, providing valuable advice for both engineers and security professionals.
00:00 Introduction and Guest Welcome
01:07 The Importance of Security Culture
04:40 Building Collaborative Security Programs
10:54 Integrating Security into Business Practices
21:30 The Role of AI in Information Security
32:22 Spicy Future: The Impact and Limitations of AI
Transcript
Welcome back to The Infra Pod.
Tim from Essence and Ian, let's go.
Awesome. I'm so excited. It's been a hot minute, at least from my point of view.
Tim, today we are joined by the head of information security at Mercury Bank, Branden Wagner.
How are you doing today? We couldn't be happier to have you here. Excited to be here. It's always fun talking to people about security, tech, infrastructure, but I got to make a
small disclosure, right?
Mercury is not a bank; Mercury is a financial technology company. But, you know, got to do those things.
Very fair.
Very fair.
Hopefully I didn't violate some law in a moment there for you.
So, you know, Branden and I met a while ago.
I worked in information security,
financial technology, hyper trust-oriented environment.
We had a really amazing conversation about security engineering,
about just an approach to security, to building secure companies and building secure systems. I think you have a pretty unique and interesting viewpoint on this
that everyone would love to learn.
I mean, I wouldn't say that it's unique.
I would just say that maybe it's in the minority there,
but there's quite a few of us, right?
In fact, I would have to say that a lot of my ideas
have come from mentorship from all of those that came before me, right?
If you're familiar with Rick Howard and his book, Cybersecurity First Principles, if not, I highly recommend it.
But that's where a lot of this stuff comes from,
is really like thinking about the right ways
to approach these things.
And the first one I would say is actually culture.
Everybody's always interested in starting with tech.
And while I agree that tech is like a huge thing,
tech without culture is like
a plan without strategy.
You're just doing things, right?
And so that's where it really stems from is like the
transparency and security.
And it's hard.
One of my biggest complaints about the security industry is that all of these
security tools are made for me or SOC analysts or security administrators, right?
And it is my personal belief that
that is the wrong way to look at security. These tools should be built for
everybody else. The typical Bob and Alice, right? Like they're the ones that are
actually involved in these security incidents or not incidents if we get the
tools in their hands in the first place, right? And so we've done a lot of things
like putting corporate policy or all of our InfoSec policies in a GitHub repo where anyone in the company can contribute to
them because like it's one thing for me as the head of InfoSec to say like, yeah,
we're going to use this encryption library.
It's another thing for the engineers to be like, yeah, no, we're not actually
doing that.
Here's what we're actually doing.
And like, you said to use AES-256, but like, no, we wanted to say that we're going to use this library over here.
And so like putting it in their hands, putting it in Git where literally anybody
in the company can contribute.
Sure.
It still falls on me to approve it, but like, I don't know everything, nor do I
want to, I mean, I do actually want to, but like I can't, right?
So putting that kind of stuff out there and in the open where anybody can do it,
makes it so much more transparent.
It makes them have more buy-in.
It makes them do things that like, otherwise people are just like,
yeah, security's domain over there, let them manage it.
We shouldn't be managing it.
We should be building the guardrails and letting them operate on the road.
And that's really what we try to do with our entire security program.
Okta is terraformed, right?
So that, you know, rules on how we authenticate,
your session expiry, all those kinds of things,
can be manipulated by other engineers.
It doesn't mean that they're going to, or that it's going to get approved,
but when they want to complain about it, it's really easy to be like,
here's the code, go write the PR.
And sometimes that's enough for them to be like, oh,
I now understand the complexity of the problem,
and I'm no longer complaining.
And that's great.
Other times, like, yeah, you're dumb, and we just fixed your PR,
which is also great.
So it's like a win-win situation.
So not only is it just infrastructure as code,
it's everything as code.
If you don't Terraform Okta, you're stuck behind this whole, like, messy UI where you have to do these stupid permissions, where, like, a security administrator does these things and there's no visibility.
I don't know why they made the change.
There's no context there.
They just went in there and changed somebody's random permissions.
Or I put it all in a PR and I've got full context and the ability
to roll back.
So all of that traditional stuff you get with software development lifecycle, we try to
do with security development lifecycle.
So I can use the same acronym, right?
It's always the SDLC.
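To make that "everything as code" idea concrete, here is a minimal sketch of the kind of CI guardrail that could gate such a PR, assuming the plan was exported with `terraform show -json plan.out`; the Okta resource matching and the `max_session_idle_minutes` attribute are hypothetical stand-ins, not the real provider schema or Mercury's actual configuration:

```python
# Hypothetical CI guardrail: fail the PR if an Okta session policy in the
# Terraform plan exceeds a maximum allowed session lifetime.
# Assumes `terraform show -json plan.out > plan.json` ran in an earlier step.
import json
import sys

MAX_SESSION_IDLE_MINUTES = 120  # illustrative security baseline

def check_plan(path: str) -> list[str]:
    with open(path) as f:
        plan = json.load(f)
    violations = []
    for change in plan.get("resource_changes", []):
        # Only look at Okta resources (the match is deliberately loose here).
        if "okta" not in change.get("type", ""):
            continue
        after = (change.get("change") or {}).get("after") or {}
        idle = after.get("max_session_idle_minutes")  # hypothetical attribute
        if idle is not None and idle > MAX_SESSION_IDLE_MINUTES:
            violations.append(
                f"{change['address']}: session idle {idle}m exceeds "
                f"{MAX_SESSION_IDLE_MINUTES}m baseline"
            )
    return violations

if __name__ == "__main__":
    problems = check_plan("plan.json")
    for p in problems:
        print(f"GUARDRAIL VIOLATION: {p}")
    sys.exit(1 if problems else 0)
```

The point of a check like this is exactly the workflow described above: an engineer can propose any session setting in a PR, and the guardrail, not a hidden admin console, is what pushes back.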
I could talk all day.
Where do we want to go with this conversation?
I mean, I think that's great. I mean, we want to learn more. I'd love to understand, you know, I think you've probably laid out sort of the philosophy behind how you approach what you do as a business. And I'd love to understand, okay, cool.
So you kind of have built this,
you believe very deeply in enabling this collaboration
and this handshake.
Now, what does that look like when it comes to
actually building security programs?
Kind of this base layer, but how does it play out when you go to think about things like technology choices
you make and what types of rules and processes
you put in place and the way that you approach sort of like
what requires sign off?
Are you proactive, reactive? You know, are you sitting on the outside, or are you deep inside, like, the design meetings?
Like how does this all come to fruition?
'Cause that's something so many people kind of get, I would say, broadly wrong;
it becomes territorial and adversarial.
Like how do you prevent that from happening?
How do you actually have a great relationship between these two
very important constituents inside of business?
Yeah, no, that's a fantastic question, right?
So there's a couple of different ways to look at it.
The way I did it when I first joined Mercury is I sat there in silence in the infrastructure team's, like, weekly meetings, for probably about three
months, right? Just like learning who they are, how they talk, what their priorities are,
and slowly just kind of built up my reputation. And like, I actually didn't know Terraform prior
to coming to Mercury. Like it was a thing that I'd always like known existed, but I'd come from other
environments like Ansible and Chef and different
types of things, right? Red Hat, Kickstart files, things like that. But Terraform was
new to me. My level of AWS knowledge was woefully inadequate. And so it's really hard to be
that guy to be like, well, you're doing it wrong. Shut up. You don't know AWS or Terraform.
How are you telling me I'm doing it wrong? I have made that mistake in my career many times, being that security guy.
Like, well, are you doing things securely, when you can't even explain what securely means?
So it really comes down to just kind of shut up and listen, right?
Build your relationship with these teams. Let them understand what it is that you care about.
And while security does not equal compliance, compliance is a fantastic
place to start. So you tell your story. Why do you want to encrypt, I don't know, HTTPS by default?
Whatever it is. Why do you want all of your buckets to be non-public by default? Whatever it is. You
tell that compliance story and start from there. And there's always one-offs, right?
Like we don't have access logging on our logo bucket
because like that's downloaded hundreds of thousands
of times a day.
Probably don't care if people are downloading
the Mercury logo, right?
But your access logs, right?
For all of these things,
you definitely want controls there on those buckets
or transaction logs, right?
You want to have access and monitoring on all those places.
So you can't just say like, oh, go encrypt everything or go log everything
because everything is a lot and everything is dumb.
If we literally logged everything, our S3 bills would probably be, I don't know,
hundreds of thousands of dollars a month, which like is not good for the
business and it's definitely not good for me trying to explain that to management.
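A rough sketch of what that "defaults with documented one-offs" posture can look like as an audit script, using boto3; the bucket names and the exception list are invented for illustration:

```python
# Sketch: audit S3 buckets for access logging, with an explicit,
# documented allowlist for one-offs like a public logo bucket.
import boto3

# Hypothetical one-offs, each with a reason a reviewer can see.
LOGGING_EXCEPTIONS = {
    "acme-public-logos": "high-volume public assets; logging cost > value",
}

def audit_bucket_logging() -> None:
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        if name in LOGGING_EXCEPTIONS:
            print(f"SKIP {name}: {LOGGING_EXCEPTIONS[name]}")
            continue
        logging_conf = s3.get_bucket_logging(Bucket=name)
        if "LoggingEnabled" not in logging_conf:
            print(f"FAIL {name}: access logging disabled and not excepted")

if __name__ == "__main__":
    audit_bucket_logging()
```

Keeping the exceptions in code, with the reason next to the name, is what lets "we don't log the logo bucket" survive review instead of living in someone's head.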
So I guess to circle back on that, like just shut up and listen, build your
rapport with the teams, figure out what it is they care about, and then weave
security into their story.
Don't just come in like a hammer and be like nailing security in all
of these different places.
It doesn't work.
And so that's where you start with that culture.
What are the things that they care about with infrastructure?
You know, they care about security, but they may not necessarily
think about it like security does.
So you got to tell them the right story.
Why do we want, you know, object versioning on all of these things?
Well, if something bad happens, whether that's a bad actor or,
I don't know, we accidentally
overwrote something, how do we roll it back?
And then you can get into like your backup strategy or like, how do you roll back Terraform?
Whatever it might be, right?
Those are security things, but those are also infrastructure things.
You tell that story.
And then you talk about data team, right?
Like who should have access to that data?
Oftentimes the answer I get is everyone.
And then you have to, you know, tell the story like, okay, so Alice in marketing
wants to send some sort of notification to all of our customers about some new
feature.
So she just accidentally dumped the social security numbers of all of our
customers because she doesn't write the SQL well. Like, whoa, well, she shouldn't have access to that.
Two minutes ago, you just said everyone should have access to everything so they can write
their queries.
So which is it, right?
And you have to, you know, work with them, understand the use case, understand the tools.
And I think that's where a lot of security people like fail.
We can't know all of the tools.
It's not possible.
So you have to build that culture and that relationship with the people who do know the tools, to tell you that it can't be done or that it can be done, and what the pros and cons are.
I came from an engineering background. I never really worked in security roles before, but obviously I have to work with security in most situations, especially in the larger contexts.
And I think, given the financial institutions that you work at, there's a lot more scrutiny of things that need to be done.
Right. I guess the question here is, for folks that maybe don't even work in security, looking at what you do, they may not really fully understand the scope of what you do. Because when you think about security, there are so many things you can secure, all the way from your devices to your data to everything, and then your code specifically. In your role, is there a particular scope, like, okay, this is what I have to handle, or am even able to handle? Is it all of these things, or does it include infrastructure plus app or something? Do you have, like, a certain scope, I guess, that's maybe easier to understand? Or is it like, hey, I'm going to be the last gate of hell that does all the defenses here, or not?
At Mercury, we actually have, I don't know, an interesting structure. We've got two different security teams, right? So we've got security engineering, which focuses a lot on the product security and the security
features of the product.
And then we've got InfoSec, which is my team.
And we work really, really closely together, pretty much on a daily basis, but we kind
of have distinct carve-outs.
My team doesn't do much with the application security itself other than like
pay the bills for the bug bounty program because the security engineering team is actually the ones
that are, you know, finding these bugs and fixing them. But where we do collaborate, right, is really defining risk. And so if you think about it, like what you asked, what is the scope, right? To answer that, you kind of have to break down terms or define them. Like, what is information security? And my definition
of information security is actually stolen from Evan Francen, who's the CEO of FRSecure, among a few other things, but it's managing risk to the confidentiality, integrity, and availability of information through administrative, physical, and technical controls.
Which is like a super accurate and broad definition all at the same time.
So to your question, where is my scope?
Wherever there's information.
So that's like super amorphous.
The things that people don't think about when it comes to InfoSec is that it's not all technical controls.
There are human controls,
right? Again, going back to culture, 90% of what I do all day is talk to people. I have probably
written, myself, in the last two years, less than 10 lines of code. In fact, sometimes I've been told not to write it, to let those people do their jobs, and they just tell me to go away and let them write. And that's okay. I talk to people, I understand what their challenges
are, what data they're using, and really just play like a sounding board, right? What happens
when, you know, Timothy gets access to this data? What's he going to do with it? Now he
just shared it with his spouse, partner, friend, dog. Does that have any impact on what you're trying to do?
Uh, now both Timothy and Ian have access to that data.
Now what?
Right.
Is that a problem?
Maybe it is.
Maybe it's not, but if I don't have that conversation, I don't help them
develop their own threat models inside their own head.
If you can help everybody build the threat models appropriately, then like
you've eliminated a lot of those risks already.
And being transparent and collaborative about those things
helps other people then chime in.
So a lot of this comes down to
have these conversations in public.
And while it wasn't my idea,
one of the greatest things that I think Mercury has done
is we have this pre-shipped channel
where anytime somebody wants to release
some sort of new feature or product, it goes into that channel.
And it's got like the project requirements doc.
It's got the link to where they're actually discussing these things.
So that whether you're on security or engineering or privacy or compliance, even marketing can
chime in and be like, hey, you actually don't want to do that for these reasons, or like, did you think about X, Y, and Z? Maybe you did, but it's not in your doc, right? And having all of those conversations out in the open is super awesome. I mean,
I've got all my keywords set up in Slack for all the things that I want to know. And so,
you know, like, I mean, dark mode is cool, but it has no tangible impact on security. But some new feature here about like displaying widgets with sensitive data.
Maybe I do care.
Right.
And so having those conversations, helping people build their own internal
threat models, being public and transparent about those things works.
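That keyword-watching trick could be approximated with something as small as this sketch; the watch terms and the example posts are invented for illustration:

```python
# Sketch: flag pre-ship announcements that mention security-relevant terms,
# so the right people get pulled into the conversation early.
import re

# Invented watchlist; a real one would come from the security team.
WATCH_TERMS = ["ssn", "social security", "passport", "pii", "export"]
PATTERN = re.compile("|".join(re.escape(t) for t in WATCH_TERMS), re.IGNORECASE)

def needs_security_eyes(announcement: str) -> bool:
    """Return True if a pre-ship post mentions anything on the watchlist."""
    return PATTERN.search(announcement) is not None

# Dark mode is cool but not flagged; sensitive-data widgets are.
assert not needs_security_eyes("Shipping dark mode next week!")
assert needs_security_eyes("New widget displaying SSN data on dashboards")
```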
Did that answer your question?
It totally did.
I mean, Tim's here shaking his head with the audience at home.
I think one of the key questions I have is how is this changing?
We've seen this change, at least on the AppSac or product security side, where we've moved
to a world of much more embedded security engineering, more embedded in the platform,
more embedded in the platform engineering organization tends to have a security engineering
team now, which is that new and interesting.
I'm really curious to understand how has the traditional domain of information security and the platform engineering organization tends to have a security engineering team now,
which is net new and interesting.
I'm really curious to understand how is the traditional domain
of information security changed, and how is it changing?
What do you think it looks like to build information security teams
five, 10 years from now?
Are they still completely off, just partnered with,
but off to the side and around on an enterprise focus?
Is there some amalgamation that's going to happen?
So if you look at some of these traditional companies, you have an IT department and a security department. We don't. We have an InfoSec department that also does IT. Because if
you think about it, a lot of those challenges are kind of interwoven, right? Access control is often
done by security, but also by IT. And so, you know, if you look at some of these companies,
and I'll try not to name names, but a lot of their
breaches have been human failures or their data leaks have been human failures because
IT didn't understand the security controls or IT was outsourced and had misaligned incentives
because IT's job was to close the ticket as fast as possible, not necessarily thinking
about the security.
So by bringing IT under security, you kind of fix those two things.
In fact, I only have like one person whose job is IT at Mercury.
For a company of over 850 employees, we have one full-time IT person.
The rest of that is, you know, security.
And so, you know, thinking about that, that person focuses a lot on the IT automations under the purview of security. And all of
that really comes down to like, what is the business trying to accomplish? It is super
easy to secure things. Really, really easy. Security is the easiest thing ever. You just
turn it off, you bury it underground, and put a guard there. It is 100% secure, always. Completely
useless, but 100% secure. And so again, it comes down to telling that story. Why does
the business need X? Whatever it is, why do we need to share this data? Why does marketing
need the birth dates of an individual? And then you craft your controls around there. Marketing might want to send a birthday card or a founder
anniversary card, whatever.
They don't necessarily need the year, per se.
Maybe they just need the month and the date.
So you think about your tools, you
redact or mask the data to give them what they actually need.
Maybe the data team is trying to do some sort of analysis on how many customers use, I don't know, Gmail versus Outlook. Well, you can mask the data
so they don't get the usernames, but they just get the domain name of the email. Again,
you have to understand the use case to craft the controls. Too many times security just
tries to like blanket apply controls. But like the way data uses things and the way engineering uses things is
different than the way marketing or compliance use data.
So everybody's got a different job in the business and those controls have to
be designed for each of those people.
It sucks sometimes because it's a lot of work, but that's where that culture comes in, that's where that transparency comes in. That's where building those relationships to talk to people comes in.
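As a toy example of those purpose-built controls, masking can be as simple as handing each team only the shape of data it actually needs; the function names here are made up, but the fields mirror the examples above:

```python
# Sketch: give each team only the shape of data it actually needs.
from datetime import date

def mask_email_to_domain(email: str) -> str:
    """Data team wants Gmail-vs-Outlook analysis, not usernames."""
    return email.split("@", 1)[1].lower()

def mask_birthdate_to_month_day(birthdate: date) -> str:
    """Marketing wants to send birthday cards; the year is not needed."""
    return birthdate.strftime("%m-%d")

print(mask_email_to_domain("alice.smith@gmail.com"))   # gmail.com
print(mask_birthdate_to_month_day(date(1987, 6, 16)))  # 06-16
```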
And you need to take it a step beyond that.
So one of the things that we've done is,
our catchphrase is personal security
is the foundation of corporate security.
Which means like, if you don't know
how to use a password manager, that's on me, right?
Like I can't expect you to keep our secrets secure if I haven't given
you the tools to learn how to use a password manager. You want to talk about working from
Starbucks? So we give you training of like where and how and why that makes sense and when it
doesn't. If you are reviewing people's passports and social security numbers, you probably shouldn't
be doing that at Starbucks. Or if you are, think about like where you're positioning yourself, what window you're sitting against. Can somebody walking by see
what you're doing? Okay, so that's one control. We've now covered the physical.
Now let's talk about the technical and the administrative, right? Like what are
you printing out? Hopefully the answer is nothing. In fact, it usually is nothing
at Mercury. But, you know, the technical controls, right?
Like, are you on the VPN?
How are you managing that connection?
Who's managing that connection?
Do you trust Starbucks Wi-Fi over somebody else's Wi-Fi?
I don't know.
But thinking about all of those different things
and giving you the right tools and the right training
so that you can do those things.
Kind of tangential to that is also thinking
about like patching, right?
A lot of enterprises want to force patches on end users.
And, uh, if you think about that, that actually creates an interesting kind of
like dichotomy in the security culture.
It puts the patch rate and the execution in the hands of security and IT, but it
takes away the responsibility and the awareness
from the end user. So if the end user is no longer thinking about patch management,
they're no longer thinking about keeping their software up to date and the vulnerabilities
as a result of it. So if you shift your thinking and just provide them the right nudges or the
right tools, they'll be like, hey, your Chrome browser is out of date. Here's why that matters to you.
95% of the time, they'll do it themselves.
For that remaining 5%, you craft guardrails
with an expiry time.
You didn't update your Chrome browser within 14 days.
Now we've locked you out of the system.
So you have to give them the awareness.
You have to give them the tools.
But security is really just there
to kind of build those guardrails
and help everybody understand the context of their own things.
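A compressed sketch of that nudge-then-guardrail flow; the 14-day window comes from the example above, while the function and its wiring are hypothetical:

```python
# Sketch: nudge users to patch themselves first; enforce only after the
# grace period the user was warned about expires.
from datetime import datetime, timedelta

PATCH_GRACE_PERIOD = timedelta(days=14)  # the 14-day window from the example

def handle_outdated_browser(user: str, update_released: datetime) -> str:
    """Return the action to take for a user running an outdated browser."""
    overdue = datetime.now() - update_released
    if overdue < PATCH_GRACE_PERIOD:
        days_left = (PATCH_GRACE_PERIOD - overdue).days
        # Awareness first: explain why it matters and let them act on it.
        return (f"nudge {user}: Chrome is out of date; please update "
                f"({days_left} days until lockout)")
    # Guardrail second: the expiry they were told about up front.
    return f"lockout {user}: update not applied within 14 days"

# Example with made-up data:
print(handle_outdated_browser("alice", datetime.now() - timedelta(days=3)))
```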
We have to be part of the business, but we have to make ourselves part of the business
because most people, most companies still see security as the department of no, which
is like unfortunate old cliche.
But really, my team's response should never be no. It should be, yes, and here's how.
Of course, there's still some nos.
You want to put a bunch of social security numbers
on Pastebin, the answer is no.
But most of the time, there's some sort of business reason.
What are you trying to do?
Oh, you need to get those over for some subpoena, or to some partner for contractual reasons, regulatory reasons.
Makes sense.
Let's talk about how we can accomplish that
rather than just saying the way you proposed was wrong.
It's really fascinating to hear that
it's really about the human communication
and then kind of the rest sort of follows.
And given this is such a big surface area, and all these sorts of things that can go right or wrong: we're talking about, like, how AI is changing a lot of things, and, you know, from Ian's and my viewpoints, it seems like AI is trying to kill everything or change everything. I wonder, in your world, what does that impact look like right now? You know, we've seen a lot of people using AI to build more tools, to do one-off utilities, but I think it's actually getting deeper and deeper at this point. So, curious how you're seeing this now. What are the things where you see this immediate impact for you? You could use it as a tool, but it could also be another attack vector for you as well. So, very curious how you're seeing this or getting prepared to use it.
Oh, I mean, I hope, in the perfect world, right,
AI puts me out of a job.
That would be fantastic.
That means security has been solved by AI.
I don't see that happening anytime soon, but if that happened, that'd be fantastic.
But to say, like, AI is both an enhancement and a security threat is equally as true as saying the same thing about the internet, right?
If we go back, you know, a long time, dial up, or even before the internet, right?
Like, information wasn't available.
So that made it a whole lot harder to steal information.
That also made it harder to learn information.
The same thing is true of AI.
Yes, it is easier to steal things.
Yes, it is easier to get them. How you use the tool is really what matters. I use AI
almost daily, just like drafting documents or arguing with myself, right? Like, here's
what I want to do. Tell me why I'm wrong. And it's a good sounding board. Or like, I
need words to help me finish this thought,
and I'll type out like half a thought and let AI finish doing it.
AI is not great at coding yet,
but it does help get your point across.
As someone like me that has a lot of good ideas
that my team tells me to stop having,
I use AI to draft that code sometimes, to get the point across.
Right?
Like, this is what I want it to do.
Here's what I think it should look like.
It's a great accelerator.
Or I need a new policy to talk about, I don't know, security training and awareness.
Great at that.
I'm actually working right now on a security awareness campaign using ChatGPT and Slack to, like, do a "this day in security history" kind of thing.
AI is great at those kinds of things.
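As an illustration, that kind of campaign could be a small scheduled script along these lines; this is a hedged sketch with a made-up Slack webhook URL, assuming the `openai` and `requests` Python packages and an OPENAI_API_KEY in the environment:

```python
# Sketch: post a daily "this day in security history" blurb to Slack.
from datetime import date

import requests
from openai import OpenAI

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # hypothetical

def post_daily_history() -> None:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    today = date.today().strftime("%B %d")
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"In 3 sentences, describe a notable information "
                       f"security event that happened on {today} in history.",
        }],
    )
    blurb = completion.choices[0].message.content
    requests.post(SLACK_WEBHOOK_URL, json={"text": f":shield: {today}: {blurb}"})

if __name__ == "__main__":
    post_daily_history()
```

In practice you'd still want a human to glance at the output before it posts, since models are happy to invent history.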
So, like, using the tools correctly is really what it comes down to, and every tool has its risks.
I mean, we said that the internet was going to destroy jobs and it did, but
it also created a whole bunch more.
Is AI going to destroy jobs?
Absolutely. But until AI is
actually intelligent, I don't think it's going to take anybody's real jobs. It'll just create new
ones. Again, if it did take my job, we've solved security. That's fantastic for the world. Can't
wait till we get there. I'm curious, how do you think about AI and information security?
Like what parts of InfoSec and security broadly do you think are most ripe to be automated
or ripe for AI to come in and finally upend one way or the other?
At the end of the day, my understanding is that a lot of security, unfortunately, is a lot of toil. It's a lot of toil tasks for how awesome it can be. So I'm kind of curious to understand from your perspective, where do you think these new LLMs specifically, not that AI is brand new, but these new LLMs specifically and the features that they have, start to become very interesting in applications in security?
Finding needles in haystacks, right? Finding data that is different, finding attack paths.
And so I think one of the places that I'm seeing the most like tangible benefit with
AI today is, like, SOC analysts, right? But it's not replacing SOC analysts. I mean, I guess depending on your scale and maturity, maybe it is replacing your tier-one SOC analyst,
but it can definitely be
an accelerator there.
Putting all of that data together, telling the story, looking at tons and tons of data
quickly and putting that stuff together.
There's other ways that we've been doing it with anomaly detection and things like that,
that aren't technically AI. But if you combine that anomaly detection with AI, you're going
to find those things a whole lot faster. Be able to tell a story better, be able to go
enrich that data better, right? You see that IP address, it's been that same IP address
for 50 days, and now all of a sudden it's changed. But then you go and do some intelligence
on it and find out that it's the same location from the same provider. So, you know, they just rotated their IP.
Or you see that, you know, now it's some new location, 3000 miles across the world.
That might be, you know, some sort of issue.
But what if that, you know, 3000 mile difference was San Francisco and New York,
and you correlated it with a calendar to see that that person was traveling?
You've just found the answer there in seconds. Sure, you can write a bunch of API calls to enrich all that data, but if you give AI that data, it's going to find it much faster and require fewer humans to connect those dots.
And that's where I'm seeing some of that
in a lot of the SOC analyst type stuff.
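The enrichment logic being described might look like this sketch; `GeoInfo`, the triage verdicts, and the travel flag are all hypothetical stand-ins for real threat-intel and calendar lookups:

```python
# Sketch: triage a changed login IP by correlating geolocation with travel.
from dataclasses import dataclass

@dataclass
class GeoInfo:          # stand-in for a real IP-intelligence lookup
    city: str
    provider: str

def triage_ip_change(user: str, old: GeoInfo, new: GeoInfo,
                     was_traveling: bool) -> str:
    """Return a verdict with the reasoning attached, not a bare flag."""
    if old.provider == new.provider and old.city == new.city:
        return "benign: same provider and location; likely IP rotation"
    if was_traveling:
        return f"benign: calendar shows {user} traveling to {new.city}"
    return f"investigate: {user} moved {old.city} -> {new.city}, no known travel"

# Example with made-up data; was_traveling would come from a calendar lookup.
print(triage_ip_change("tim",
                       GeoInfo("San Francisco", "Comcast"),
                       GeoInfo("New York", "Verizon"),
                       was_traveling=True))
```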
And I'm curious, on the flip side, what are the things that you think make it more
difficult?
Like what are the blockers?
What types of issues are in place?
Or what types of things won't you allow or let this technology be applied to?
Where we're like, hey, the risk is too high.
We don't trust it enough.
Or it's missing these features.
There's this huge promise, broadly, that it's going to upend ecosystems and change everything.
And then at the same time, it's like, we go look at the microcosm, ignoring the top-level stuff, and in the microcosm, it's like there's a distance here.
So I'm kind of curious to understand, from your experience,
what's the gap?
What are the gaps we have to close for us to realize some of this stuff and really apply it into real production workloads?
I'd say that some of that can already be there.
People just need to do a better job at defining what a production workload is.
Automated decision making, probably not a great place.
However, automated decision making with explanations, maybe that is the right answer. So, you know, we're working on one that isn't necessarily AI, but it kind of can be in the
future where we're taking about 10 different inputs on a user's activity, on an employee's
activity, right?
And auto approving decisions based off those.
And right now it's mostly API calls and binary decisions.
But as we enrich that with more and more data points, maybe instead of 10, I want 8 out of 10. And based off these factors, you know, weight each one of them, that could
be done with AI, or it could just be done via an algorithm, right? And so sometimes
you have to think about that, like, how complex is the problem domain? And how can you solve
it? Right? I would say that the big thing there is,
do you want a machine making automated decisions?
And what is the outcome that you're expecting?
So, like, I want an output for every automated decision
that says, what was the criteria for approving that?
Not just, you know, yes or no. Yes because, no because. Give me those details. And if you can do that, that's a great place for, you know, AI to be.
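A minimal sketch of that "decision plus reasons" shape; the signals, weights, and threshold are invented, and in a real system each check would be fed by an actual data source:

```python
# Sketch: auto-approve with an explanation, never a bare yes/no.
SIGNALS = {  # invented example weights
    "device_managed": 2.0,
    "mfa_passed": 2.0,
    "known_location": 1.0,
    "working_hours": 1.0,
}
APPROVAL_THRESHOLD = 4.0  # roughly "most weighted signals must pass"

def decide(checks: dict[str, bool]) -> tuple[bool, list[str]]:
    score = sum(w for name, w in SIGNALS.items() if checks.get(name))
    reasons = [f"{'PASS' if checks.get(n) else 'FAIL'}: {n} (weight {w})"
               for n, w in SIGNALS.items()]
    reasons.append(f"score {score} vs threshold {APPROVAL_THRESHOLD}")
    return score >= APPROVAL_THRESHOLD, reasons

approved, why = decide({"device_managed": True, "mfa_passed": True,
                        "known_location": False, "working_hours": True})
print("yes because" if approved else "no because", "; ".join(why))
```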
But you also have to be careful there because, you know,
it's easy when we're talking about security and hard data points.
But a lot of the things that we have to do in life aren't hard data points.
That's where it gets tricky, right?
Do I block that connection that I'm seeing from Utah when this user
is normally in California?
If I say yes to that, what is the outcome?
Now somebody can't do their job.
What if I kick somebody out of a Zoom meeting?
Are they able to get back in?
And if they weren't able to get back in, can they still do their job?
Maybe now everybody has to go do another meeting.
I don't know, right?
There's tangible impact there that we have to think about.
And AI is not there yet.
So you need it to include that decision-making process
in whatever outcome it gives you.
I think you already mentioned that AI, at least in your view, isn't able to generate code and just write code fully for you, right?
But I think with the power of AI, like you said, many things can be automated, and it often doesn't even really require huge teams to produce these automations anymore.
So I'm very curious on your point of view.
There's always this like question like, should we buy or should we build?
And how much should we consider our resources to do certain things here?
Do you see yourself building more?
And what are some of the thought processes you have here now when it comes to deciding?
So we're doing both, right?
And it kind of depends on the domain and the type of knowledge that is needed.
So like IT, right? We're training a bot on all of our help documentation so that when somebody
posts a ticket in Slack, because we're huge in Slack ops here, right?
Hey, I can't do this thing in DocuSign.
Well, we've got a whole article on like how to do that thing.
You know, AI is great at doing that.
If we can train it on those documents, on those help articles that are specific to our
use cases, specific to our procedures, that's great.
You know, that one is mostly just interconnecting things, like a Slackbot to ChatGPT, and building it on our knowledge base.
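That Slackbot-on-the-knowledge-base pattern is essentially retrieval-augmented generation. A stripped-down sketch, with two toy articles standing in for the real help docs; the model names are current OpenAI ones, and everything else is invented:

```python
# Sketch: answer an IT ticket by retrieving the closest help article
# and letting the model answer from that article only.
import numpy as np
from openai import OpenAI

HELP_ARTICLES = [  # toy stand-ins for the real knowledge base
    "DocuSign: to resend an envelope, open Manage > Resend.",
    "VPN: install the client from the self-service portal, then sign in with SSO.",
]

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def answer_ticket(question: str) -> str:
    doc_vecs = embed(HELP_ARTICLES)
    q_vec = embed([question])[0]
    best = HELP_ARTICLES[int(np.argmax(doc_vecs @ q_vec))]  # cosine-style match
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Answer using only this help article: {best}"},
            {"role": "user", "content": question},
        ],
    )
    return completion.choices[0].message.content

print(answer_ticket("I can't resend a document in DocuSign"))
```

Grounding the model in retrieved articles is also what keeps the answers specific to your own procedures rather than generic internet advice.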
But then we've got other things that security hasn't done, but engineering here has, where they've built their own AI
chatbot that is trained on our code base to help answer questions, point people in the
right direction.
You know, I'm looking for a function to do X. Have you looked at this on line seven in
this file?
AI is great at that.
Again, context, right?
It has all of that kind of stuff, but it needs context. And that's part of the problem, where some of this code that it's been trained on, talking about the broader AI ecosystem, is bad because it's trained on bad code. Insecure code, code lacking context. Because just giving it a JavaScript
function that doesn't have context on what you're trying to make it do, or why
that's important to the business,
AI can't do that well.
And it's not that it can't, it's that it wasn't designed to do those things.
You didn't give it the context.
You just said, write a function that does X.
And then you expected the function to come out and meet your business case.
So, we're going to jump into our favorite section here,
which is called the spicy future.
Spicy future.
Coming from your background and what you're doing,
what is your spicy hot take that most people don't believe in yet?
Oh, I don't know if most people don't believe in it,
but I think AI is mostly hype.
I think that too many people are using AI as a buzzword
and it's just going to kind of go away.
There's a lot of times where a simple if-then statement
is better than an AI algorithm.
And in fact, people keep using the word AI algorithm
and not knowing what that means.
So like AI is overhyped and most of it's going to go away.
And I don't know, I'd probably say the next year.
There's definitely some like real AI and real use cases,
right, but looking at most of these things out there
that are just slapping AI on, it's not really AI.
Or if it is, it shouldn't be.
Where do you think this is going to stick around?
Like when the water recedes and the hype fades back,
where are the actual use cases
where this will stick around in organizations?
Image generation, do we really need that? Do we really need AI doing our art for us?
There's a meme going around, right?
Like, I wanted AI to do my laundry and dishes, not my art.
Now it's doing all of our art and music,
and I'm still stuck over here doing the laundry and dishes.
So like, we're gonna see some of that creativity stuff
go away because we don't actually want it as a society.
We're taking away what it means to be human.
Of course, that's my personal opinion, right?
But I think that context analysis is a great thing.
Going back to that SOC analyst thing, taking in a huge amount of data, summarize it into
a story that makes sense for me.
My SOC analyst could do that.
Might take them a couple of hours, and AI did it in five minutes.
That's a great place.
Image recognition, right?
Real time streaming video recognition
is a great place to see it.
Super interested to see how some of this works with deepfake technology, right?
It's getting way easier to make deepfakes.
And it's also getting easier to detect them.
So we keep playing this cat and mouse game,
but that's been security for its entire evolution, right? Even pre-technology, if you think about it,
right? So I don't have a clear answer there, but I think that we'll see it go away on the
creative aspects because nobody actually wants that. And we'll see it increased in like the
context analysis summary kind of things, drawing parallels to information that
maybe we couldn't do or didn't see as humans.
And I was very curious too, because it's really interesting.
Like, you mentioned context analysis, looking at data, summarizing it; machines and AI are able to do that much faster, and obviously accuracy, and what's on the line, still really, really matter. But you also mentioned a lot of your job is talking to humans to collect that context. Do you think there will also be this sort of, like, human data that's in their heads, where those conversations can even go through AI, maybe a chatbot, maybe actual real AI talking to you? Like, do you think that will be something that will come very, very soon? Instead of talking to you directly, there's going to be AI talking to you, like, oh, I saw your PR, I saw what you're doing, tell me more, even those questions that you were just talking about. Can AI help to do a bit more of that? Because I'm very curious, because none of the examples mentioned that side yet.
Do you see any proof of that?
There's actually a few companies out there already doing that.
And we're not there yet.
Context windows aren't big enough.
So I do think that we will get to a point where we can do that.
Right?
Like have an AI agent, right?
With every person, right?
There's been a few out there, but a lot of people are still concerned with
privacy and the misuse of data and things like that.
So there's some, like, skepticism around how the data is used, who controls that data, how much they're watching me.
But I mean, if you think about it in the like perfect, secure world, you'd have like that agent that sits right there as your, you know, AI bodyguard, if you will, watching every digital move, explaining
to you like, hey, you shouldn't click that email, that's phishing. That'd be great. But
you also have to think about the context. Who has that data? Who can see all of those
things? What happens when they are able to put the wrong inputs in there and misuse that
bot or agent? But we are getting to a point where you know the AI agent is more readily available to help people.
And there's a couple out there that have talked about you know that security agent that sits in your
corner there and helps you with your daily tasks. So there is room there, but we're not there yet.
And I'm curious, like, is there anything you're looking at that you need to implement to truly realize some of this?
Like, is there, hey, it's like, this is great, but we got to have some controls or guardrails
that give me some way to manage this thing.
Like, what would help you gain confidence trying to apply this technology to new areas
or take action or do something?
Oh, I mean, we're always doing that, right?
Like I regularly tell my team to find startups that are doing these things, play with them, figure out if it actually solves problems or creates more noise,
and provide that feedback to them.
Right?
So I've been devoting a couple of hours every Wednesday to meeting with startups
just to do that kind of stuff so that like, I want to see what they're doing,
how they're changing things, you know, what value they actually provide.
And sometimes it's really interesting value
and I want to explore more, play more, be a design partner.
Other times I try to be brutally honest, but I don't really want to tell somebody to their face that their company will be gone in three months because it is nothing but a gimmick.
But the only way to really answer that question
is to play with the tools and play those thought exercises.
Like, what is your problem and how can we solve it?
And so I dedicate time to meet with startups just to do those kinds of things.
And I try to encourage my team to do the same.
Amazing.
And so all these startups out there, Branden Wagner at Mercury is looking to talk to you.
Branden, this has been such a pleasure. I really
appreciate you taking the time to join us today. Happy to be here. Thanks for the invite. Love chatting.