CyberWire Daily - The Three-Layer Strategy for Autonomous Agent Governance with Joe Hladik [Data Security Decoded] and Amit Malik
Episode Date: April 28, 2026

The race for AI dominance has created a dangerous imbalance between business velocity and cyber resilience. In this episode, host Caleb Tolin is joined by Joe Hladik, Head of Rubrik Zero Labs, and Staff Security Researcher Amit Malik to break down the findings of their latest report on agentic adoption. The discussion centers on the Agentic Paradox: the technical reality that tools designed to automate high-level tasks are inherently built to find the most efficient path around obstacles, including existing security policies. A primary focus is implementing a three-layer framework for AI operations. This model targets the Tool Layer, where agents interact with databases; the Cognitive Layer, which serves as the LLM brain; and the critical Identity Layer. The conversation explores stories in which agents, without malicious intent, have caused catastrophic data loss simply by following an optimized logic path. These instances prove that agents need not be sentient to be destructive when they lack proper human-in-the-loop checkpoints. Technical hurdles of identity resilience are also addressed, specifically the explosion of non-human identities that spin up and down like elastic cloud infrastructure. The episode examines the fear index regarding job security, noting that 92% of leaders fear for their roles post-breach. Joe and Amit join Caleb to explore the evolution of personal liability for CISOs and the urgent need to move from basic visibility to deep observability. This is a forward-looking briefing for leaders who recognize that, in an era of autonomous routines, the human must remain the ultimate command-and-control center.

What You'll Learn
- Define the agentic paradox to understand why AI efficiency naturally compromises traditional security guardrails.
- Implement a three-layer framework to secure the tool, cognitive, and identity components of AI.
- Transition from basic visibility to deep observability to track autonomous decision-making in real time.
- Mitigate prompt injection risks by auditing the input and output flows of the cognitive layer.
- Utilize ephemeral containers to sandbox agentic tools and prevent unauthorized database alterations.
- Manage the elasticity of non-human identities to maintain control over rapidly spinning AI agents.
- Anchor AI operations with human-in-the-loop checkpoints to ensure integrity during high-stakes executions.

Episode Highlights
- Defining the Agentic Identity and Autonomous Routines
- Revenue vs. Resilience: The Drivers of AI Urgency
- The Three-Layer Framework for Agentic Defense
- Shadow AI and the Rise of Invisible Insider Threats
- The Context Gap: Why Rolling Back AI Actions is Hard
- The CISO Fear Index and Personal Liability Post-Breach
- Visibility vs. Observability in Elastic Identity Environments
Transcript
You're listening to the CyberWire Network, powered by N2K.
The ones who haven't faced a major crisis, especially startup companies, for instance,
that are building their companies from the beginning around an agentic rollout,
I think we're going to see a lot of vulnerabilities coming in.
The same old problem hasn't changed.
It comes down to maturity.
How mature is your business?
There's always going to be organizations that, you know, invest in that,
and then there's always going to be the ones that don't.
Hello and welcome to another episode of Data Security Decoded.
I'm your host, Caleb Tolin, and if this is your first time joining us,
welcome to the show.
Make sure you hit that subscribe button so you're notified when new episodes go live.
And if you're a returning subscriber, thanks for spending some more time with us.
Give us a rating, drop a comment below.
Let us know what you think about the episode.
This is the best way to support the show and it helps me understand what you want to hear more about.
Today, Joe Hladik and Amit Malik from Rubrik Zero Labs return to expose the Agentic Paradox
found in their latest report, The State of the Agent:
Understanding Adoption, Risk, and Mitigation.
We discussed why the majority of security and IT leaders
expect AI to outrun the security guardrails
and why even more now are starting to fear for their job security.
Stay until the end to learn what the heck organizations can do
to address the agentic paradox.
Let's get into it.
Well, thank you again for both of you joining the podcast.
It's great to have you both again this time.
And so I'm really excited to dive into some of the findings
from the report that you recently put out
And what I want to start with is the report notes that 86% of leaders expected AI agents to outpace their security guardrails within the next year.
So my first question, Joe, I want to direct this towards you.
Looking at a high level, are we sprinting towards a cliff?
Why is there such a high urgency to adopt this technology across the enterprise?
That's a big question.
Well, I think, generally speaking, before we get into security and all that,
businesses are seeing a large benefit in what agent capabilities are.
So if you get rid of the buzzwords, what is an agent?
It's an identity.
Think of like a bot that you can task.
And that bot will then have access to LLMs.
So it's a bot that can query LLMs, get an answer from it,
and then execute the task based on that answer.
So just like you using an LLM, you get the response,
and then you can take action based on what the LLM provides you.
An agent is effectively that.
It's not a replacement for a human,
and it's not necessarily, you know, sentient, acting on its own entirely.
It's still an autonomous routine; that's more like a bot.
But it does use LLMs to get more information for the task it executes.
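To make that concrete, here is a minimal sketch of the loop Joe is describing: a bot that asks an LLM which tool to use and then executes the answer. The `query_llm` helper and `TOOLS` table are hypothetical placeholders, not any vendor's API.

```python
# Minimal sketch of the agent pattern Joe describes: a bot that queries an
# LLM, gets an answer back, and executes a task based on that answer.
# `query_llm` and `TOOLS` are illustrative stand-ins, not a real API.

TOOLS = {
    "read_database": lambda query: f"rows matching {query!r}",
    "send_report": lambda body: f"report sent: {body}",
}

def query_llm(prompt: str) -> dict:
    """Stand-in for a real LLM call; assume it returns a tool choice."""
    # A real implementation would call a model API here.
    return {"tool": "read_database", "args": {"query": "overdue invoices"}}

def run_agent(task: str) -> str:
    """One turn of the loop: ask the LLM, then act on its answer."""
    decision = query_llm(f"Task: {task}. Which tool should I use?")
    tool = TOOLS[decision["tool"]]    # the agent's 'hands'
    return tool(**decision["args"])   # autonomous execution of the answer

print(run_agent("summarize overdue invoices"))
```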
So when you think about it from a business point of view,
that's a very powerful tool, because what you can do is you can automate literally everything.
The problem comes into play when people try to automate everything.
Because you need some level of control, or I should say some level of command and control from a human perspective,
over what you're actually building or executing upon.
So if you have a whole hive of agents performing all these different tasks,
for one, you want to know what they're doing,
and two, if they're doing the right thing.
So there needs to be some level of integrity, right?
So if we think about the CIA triad that everybody knows, confidentiality, integrity, and availability,
those three aspects are very lacking right now for any sort of agentic AI implementation.
For one, the confidentiality aspect is that the agent has to have the right access privileges
in order to do the tasks that you've set out for it to do,
but is it even authorized to do those tasks?
And the problem, I think, is that a lot of employees,
even the lowest-level employees,
have the power of agents at their disposal.
And this is a paradigm shift,
because when any sort of automation in the past was implemented,
it usually took leadership and leadership decision-making to occur,
to enable engineering to automate tasks, right?
Now everybody, even at the lowest levels,
may have access to agents,
with all this autonomous infrastructure backing them up,
and they may not have the authorization or the privileges
granted to them to actually perform these tasks.
So that governance is a problem.
Integrity as well:
how do you know what the agents are interacting with?
That's a big thing,
as well as availability.
So availability to me, which we'll probably talk more about later,
is more akin to what we're calling observability.
So one, having visibility into what the agents are doing,
but being able to observe all the actions that they're taking.
That's another critical gap right now.
And a question for you, for the practitioner:
how are you able to ensure that guardrails are in place
when agents are designed inherently
to find the most efficient path around obstacles?
What are those policies? What does that governance framework necessarily look like for a business?
Yeah, very interesting question, Caleb. I think with the things that Joe talked about and the
issues that we have in the agentic AI space, especially when it comes to security,
it's relatively new stuff. You know, most of the traditional things that
the security community used to do might not really be applicable because of
the inherently probabilistic nature of the LLMs, right?
So some of the guardrails will not really be there.
But one interesting thing that we are talking about in our report is
providing a very simple framework to, sort of, the practitioner community:
when they see the agentic AI as a system, they should see it as a three-layer approach.
One is the tool layer, which is basically the actual hands of the agentic system
that actually carry out the stuff.
Let's say you have to write code, or you have to read a database, or you have to do your task, right?
So these are the tools that actually carry out those things.
And then the cognitive layer is basically the brain, which is the LLM, in a nutshell, that actually
makes these decisions, like which tool to invoke, and what really needs to be there, what task
needs to be there; some reasoning has to be there.
And then another very important layer that we are stressing in the report
is the identity layer, because most of the time,
when people are looking at the agentic AI system,
they are mostly talking about the tool and cognitive layers.
There is not much focus on the identity piece.
But we believe that it's equally important
that we consider identity as a part of this framework.
And from a guardrail point of view, as I described,
if we divide the agentic system into these three layers,
then each of these layers has its own issues,
and they provide a different set of challenges.
When it comes to the tool layer,
it really depends on what type of tool it is.
If it is a database read-and-write tool,
then we have to make sure that it's least privilege,
that it does not really alter anything in the database, and stuff like that.
And if it is the cognitive layer, then we have to make sure that
the cognitive-layer-related attacks, like prompt injections
and indirect prompt injections, are mitigated.
But from a guardrail point of view, these things are evolving right now, right?
So what we recommend, or the proven methods that we have provided in our report
by means of auditing the mainstream agentic applications like ChatGPT and Gemini:
for the majority of the test cases that we have done,
the major providers obviously have much stronger
controls and guardrails, and these are not a material risk for them. But if an
organization is taking an agentic AI application and deploying it,
they should make sure that sandboxing, LLM guardrails, and
firewalls are in place, and that the input and output of
each layer is properly audited. And another
aspect of this is that a human in the loop should also be
there when we are doing these types of things.
So, from a holistic point of view, I feel that this framework should provide
working operational models to practitioners that they can leverage in
their threat modeling and in deploying the guardrails for the agentic system.
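To make the three layers concrete, here is a minimal, hypothetical sketch of what per-layer guardrail checks could look like before an agent action runs. The allowlists, the string-matching injection heuristic, and all names are illustrative assumptions, not Rubrik's framework as implemented.

```python
# Hypothetical sketch of the three-layer view Amit describes: check the
# identity layer (is this agent authorized?), the cognitive layer (does the
# prompt look like an injection?), and the tool layer (least privilege)
# before any action runs. All names and rules here are toy assumptions.
ALLOWED_AGENTS = {"report-bot"}                    # identity layer allowlist
READ_ONLY_TOOLS = {"read_database"}                # tool layer least privilege
INJECTION_MARKERS = ("ignore previous", "system prompt")  # naive heuristic

def check_identity(agent_id: str) -> None:
    if agent_id not in ALLOWED_AGENTS:
        raise PermissionError(f"unknown agent identity: {agent_id}")

def check_cognitive(prompt: str) -> None:
    # A real guardrail would use a classifier; string matching is a stand-in.
    if any(marker in prompt.lower() for marker in INJECTION_MARKERS):
        raise ValueError("possible prompt injection detected")

def check_tool(tool_name: str, writes: bool) -> None:
    if writes and tool_name in READ_ONLY_TOOLS:
        raise PermissionError(f"{tool_name} may not alter the database")

def guarded_call(agent_id: str, prompt: str, tool_name: str, writes: bool) -> str:
    check_identity(agent_id)
    check_cognitive(prompt)
    check_tool(tool_name, writes)
    return f"{agent_id} may invoke {tool_name}"

print(guarded_call("report-bot", "list overdue invoices",
                   "read_database", writes=False))
```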
Absolutely.
And another stat that stood out to me in this report was that a little bit less than half,
44% of respondents, said that their biggest fear was compromised agent misuse or shadow AI.
Honestly, I'm surprised that it was 44% and not a little bit higher.
But also, only 23% of leaders claimed that they had complete oversight of the agents in their
environment.
So my question for you, Amit, on this is: what does that look like on the ground?
How does a defender find an invisible agent that has been hijacked and is being used as somewhat of
an insider threat now?
Correct.
I think this is a much, much bigger problem.
And we have seen that with the evidence of, well,
OpenClaw, or Moltbot, that people are talking about, which became very popular in terms of the commoditization of AI agents, right?
So people were installing it, and then there were lots of risks that came out of it, like credentials getting exposed, or supply chain risks when people tried to deploy the plugins for those things.
So it is a very big risk from an organization's point of view, because right now the agent is basically nothing but an integration of an LLM and tools
to carry out the kind of stuff or the tasks that you do.
Now, in terms of observability, there definitely have to be solutions that can really
identify this type of tooling in the environment.
It's easier to identify the tools that are on, let's say, the cloud service providers,
because those are more managed;
let's say if you do something on GCP or AWS
and stuff like that.
But if you are running something on,
I would say, your laptop,
then it's much more difficult to identify those things.
Then it comes down mostly to how,
based on the technology,
we are really going to identify those things.
Wonderful, wonderful.
So Joe, I do want to go back to you on this one.
So the report outlines that 88% of leaders
said that they wish they had an undo button
to roll back agentic actions,
but really none of them had the capabilities to do that.
From a technical perspective, why is this so hard for organizations to roll back, whether it's a compromised agent, whether it's an AI agent going rogue?
What's the guardrail there?
Well, one, I would probably say the problem is context.
Back to what I said earlier with confidentiality, integrity, and availability.
Part of the problem, I think, is the latter two in this regard: integrity and availability.
One, we don't have much availability in terms of telemetry.
So tracking what all the agents are doing, for one, is going to increase the volume of logs immensely.
So even if you were to log everything every agent is doing, especially in a large enterprise, you're going to have a massive amount of logs generated.
So for one, there needs to be an approach where you somehow aggregate all of that activity without reducing the efficacy of the context provided in the logs themselves.
So for one, that's going to take some work from security professionals and others to figure out,
okay, well, what types of attributes, what types of activities do you want to track?
And how are you going to forensically trace it back to things like an agent, a specific agent or identity?
So things like agent ID, timestamps, all the classic logging metadata is going to be necessary.
But it's the aggregation that's going to be the challenge in terms of figuring out how to manage the volume.
That's the first problem. Because without that, you don't have any
context. And without context, you can't make any informed decisions on how to act upon an agent,
whether the agent is being misused, whether it's acting maliciously, or whatever the case is.
In order to understand what it's doing, for one, you need to understand which agent it is that's
doing it. And then two, you have to understand the context of how it reached its decision to perform
the action.
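A rough illustration of the telemetry Joe is calling for might look like the record below: agent ID, timestamp, the classic logging metadata, plus the reasoning context needed to forensically trace a decision back to a specific agent. The field names here are illustrative assumptions, not a defined schema.

```python
# A sketch of the per-action telemetry Joe describes: agent ID, timestamp,
# the classic logging metadata, plus the reasoning context needed to trace
# a decision back later. The field names are illustrative assumptions.
import json
import time
import uuid

def log_agent_action(agent_id: str, tool: str, action: str, reasoning: str) -> str:
    record = {
        "event_id": str(uuid.uuid4()),
        "agent_id": agent_id,      # which agent did it
        "timestamp": time.time(),  # when it happened
        "tool": tool,              # which 'hands' were used
        "action": action,          # what it actually did
        "reasoning": reasoning,    # how it reached the decision
    }
    return json.dumps(record)      # ship to your aggregation pipeline

print(log_agent_action("report-bot", "read_database",
                       "SELECT * FROM invoices",
                       "task asked for overdue invoices"))
```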
So, like with ChatGPT or any AI, you can open it up and see how
the LLM is reasoning through its sort of thought process, quote unquote, to figure out what
response it's going to give you.
We almost need access to and understanding of that at the agent level, to actually see how agents
reached the decision that they did.
That would be another context point, to then say, okay, this agent acted without malicious intent
but accidentally deleted an entire email database.
I'm calling out an actual case that happened a few months ago, right?
Where it was technically an insider threat,
but it wasn't acting maliciously.
It was just doing what it thought was the right thing to do, or the most efficient thing to do.
Context is important in those types of situations,
especially when we start getting into the realm of more sophisticated, nation-state-level types of attacks
that are going to be leveraging agents to do these types of things.
I still don't think espionage is going to be the top use case for agents.
Because subtlety, stealth, all of those things are incredibly important. And if you have a large
agent swarm acting autonomously, that might be a little more difficult to keep that sort of stealth
in place. But everything else, I think is open, whether it's like ransomware, destructive
attacks, critical infrastructure attack, anything like that, completely open. Because
subtlety, stealth is not necessarily
key for those types of operations.
And we've seen them already sort of starting to
occur. So what I had just said with telemetry,
that is what's going to inform us on what happened.
But how do you stop it while it's happening?
That's a different problem as well.
So it will build on top of the telemetry.
You need that context awareness.
And then you also need to develop
new detection measures
that use that sort of telemetry to then inform you that an attack is happening.
I think we're going to see an evolution of that as well,
where it'll be a combination of maybe both,
where you'll have AI's generating signatures based on artifacts that we're recognizing,
and then also recognizing new behaviors that are tied to specific types of agents
or threat actors or shadow AIs or whatever we want to call them.
and I just see this as an escalation.
It's much harder for the defender to identify every hole in the environment
and then fill those holes to prevent the attacker from exploiting it.
So the task, that's why the task is monumental.
These defenders just have more to deal with.
And it's a more complicated process to get an AI to solve that for you.
Even though AIs make it a lot simpler,
a lot of the underlying context that I just talked about needs to exist
for the AI to act in a way where it understands:
okay, now we need to defend this specific thing, the identity space,
or different things like the network, for instance.
I'm certain humans are not going to be able to defend it
in the same way that AI would.
So I think that's where we're going to get to:
we're going to see more AIs performing detection engineering
types of things,
quickly rolling out new signatures, new anomaly detections,
new types of things in one package,
rather than separately.
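As a toy illustration of the behavior-based detection Joe anticipates, the sketch below baselines which tools each agent normally uses and flags anything outside that baseline. A real detection engine would be far richer; both the function names and the logic are assumptions.

```python
# Toy illustration of behavioral detection over agent telemetry: baseline
# which tools each agent normally uses, then flag actions that fall outside
# that baseline. A real system would be far richer; this is only a sketch.
from collections import defaultdict

baseline: dict[str, set[str]] = defaultdict(set)

def observe(agent_id: str, tool: str) -> None:
    """Record normal behavior during a learning window."""
    baseline[agent_id].add(tool)

def is_anomalous(agent_id: str, tool: str) -> bool:
    """Flag a tool the agent has never used before."""
    return tool not in baseline[agent_id]

observe("report-bot", "read_database")
print(is_anomalous("report-bot", "read_database"))  # False: known behavior
print(is_anomalous("report-bot", "delete_table"))   # True: new and suspicious
```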
Right, right.
Very interesting insights there.
But yeah, I do want to shift gears a little bit
and talk about another element in the report.
And Joe, I'll direct this one to you.
But 92% of IT and security leaders
feared for their job security
if their company suffers from an agent-driven breach.
Now, I know Rubrik Zero Labs has tracked
this idea of job security for quite a while now.
And I want to kind of note like the change
over time. So is the real fear driven by the complexity of AI, or is it really more these
increasing legal and personal ramifications for CISOs post-breach? How has this fear index changed
over the past several years as Rubrik Zero Labs has tracked it? I don't think it's actually changed.
I think it's actually just evolved. I think for some CISOs, and especially the ones I've talked to,
there are a lot of new things that are going to be in place to protect CISOs as well,
especially when it comes to insurance policies.
I've seen a lot of opportunities now where CISOs will also be covered just like other C-level officers.
So I think that's a positive change and one that's absolutely needed.
Because historically, I think one of the problems that's been faced is that CISOs didn't have this type of protection,
and they can get personally sued, right?
So if you're a security officer leading this organization,
and then all of a sudden a breach occurs,
and then you're personally liable for it,
that's a big implication.
And also a major deterrent for a lot of talent
to want to take on that type of position, right?
Because if you can be personally liable for something,
why would you want to sign up for that?
So I think that is a major challenge that we're overcoming right now, where a lot of
damage has been done, I think, to many. And I think that the ones who, I don't want to say
take on the most risk, but take a lot of it, will be protected in some ways, and maybe that will
incentivize more talent to take on these roles. So that's a positive. I think we're also
seeing the transition to that right now. So that could also be a reason why there's
a lot of fear: CISOs may not even know this type of stuff is available to them yet.
So there's that aspect too.
So beyond the technical, right, it really comes down to like how personally liable am I to my job, right?
Because most of us have the benefit and the privilege to separate ourselves.
So one, I think that's a major part of it.
Two, AI is certainly on the mind, right?
because it opens up every potential sci-fi nightmare that I know I grew up on
as almost being brought into reality.
And like how do you fight against that?
I think the one thing is to, one, understand what is actually possible versus the science fiction.
There's a lot of science fiction that has become truth, but there's still a lot that's still science fiction.
So actually understand what AI is truly capable of:
it's non-sentient, no matter what viral news you see out there.
Like, sure, it might exist in some dark hole of the universe.
I don't know.
But as far as we know, it doesn't exist.
Right, we're not at Age of Ultron quite yet.
Right, right, which is a good thing, I think.
That's why I've been really making a point to say they are LLMs.
They are extremely advanced machines that do pattern recognition,
and they produce a pattern in return.
They are not thinking like we think,
even though we do have a part of that in our own brain.
It's not the same thing.
We have to understand the technology
and really understand what its capabilities are.
And there are a lot of unknowns,
but don't lose sleep over those.
Like, that's the thing.
Take control of what you have
and know what your constraints are.
And your constraints are usually going to be on your budget. At the end of the day, that's all we can do. Well, as we head into the final
stretch here, Amit, I'm going to direct this one to you. 82% of the leaders in the report
said that all of the advice they're seeing across the market in terms of agentic readiness
and security is too theoretical. So let's get a little bit more practical. What are the three
actions that defenders can take right now to improve their resilience and their AI readiness?
Yeah, definitely. I think the technology is evolving right now.
That's why the frameworks are still getting developed and people are just starting to deploy all these things.
So it's not as mature as it should be,
and that's why people are facing the challenge.
But from a practicality point of view, I would definitely suggest seeing the agentic system not as one consolidated system, but in terms of its parts. In our framework we are saying three layers, but it depends; you could also look at attack patterns or, let's say,
a wider framework that is mostly driven by the types of attacks that are out there.
But see the problem from a different angle: what are the components that are there?
I would rather say stick to the architecture, just like in our case, where we say
the tool layer, the cognitive layer, and the identity layer.
Then decide and look at each layer, because each layer has a
different set of challenges,
and they have the ability to do a different level of damage at each layer.
So do the threat modeling of those things by looking at each part,
see where the challenges are,
and then try to fix those.
For example, the tool layer is very, very serious,
because that is actually the one that is carrying out the activity, right?
It's interacting with your database;
it is interacting with the internal stuff that you are trying to do.
So make sure that it is running inside ephemeral containers,
that you have network segregation,
that you have firewalls in place,
and that you are doing input and output validation of these things, so that the tool does not really do anything wrong in the environment.
Right.
Though we have described the recommendations in our report in more detail,
we have also given some insight into how the mainstream platforms have actually implemented these controls to some degree.
So I do feel that workspace isolation and sandboxing, this type of isolation, is very, very essential in deploying these systems.
Yeah. Right. And Joe, I'll go back to you for this one.
what are two inconvenient truths
that every security leader is ignoring
right now that they shouldn't be about
AI and human oversight?
Well, one, I would say probably the most inconvenient truth is the explosion of identities.
Right now, I think we're just trying to get a hold of what number it actually is.
What's the ratio of human to non-human identities?
I know in past reports we've given numbers; we've quoted numbers from our partners as well.
But I think we're at a point now where it's like, well, everybody has a different number.
I think it's become a subjective thing, and you can thank AI for that, especially with agents.
So that is an inconvenient truth, and I think it's obvious why.
Identity management in itself becomes an insurmountable task.
Well, I shouldn't say insurmountable.
Everything is possible, right?
What I think is happening is it's moving so quickly that it's making it harder to catch up and manage.
That's kind of what I'm getting at.
The reason is that agent identity is just like, as I've said in the past, elastic infrastructure in the cloud.
You can have VMs spin up and spin down in a matter of seconds, and it's the same with identities.
We're in this elasticity with identities right now.
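One way to picture that elasticity: treat each non-human identity like a short-lived lease, as in the hypothetical sketch below, where identities expire automatically unless renewed. The registry and TTL values are illustrative assumptions only.

```python
# A sketch of treating agent identities like elastic infrastructure, as Joe
# describes: each non-human identity is registered with a short TTL and
# expires automatically, so stale identities don't accumulate unmanaged.
import time

registry: dict[str, float] = {}  # agent_id -> expiry timestamp

def register_agent(agent_id: str, ttl_seconds: float = 300.0) -> None:
    """Spin up: grant the identity for a bounded lifetime."""
    registry[agent_id] = time.time() + ttl_seconds

def is_active(agent_id: str) -> bool:
    """Spin down happens implicitly when the TTL lapses."""
    expiry = registry.get(agent_id)
    return expiry is not None and time.time() < expiry

register_agent("etl-agent-42", ttl_seconds=60)
print(is_active("etl-agent-42"))  # True while the lease is live
print(is_active("etl-agent-99"))  # False: never registered
```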
For the second inconvenient truth, I'll go back to the telemetry piece.
There's going to be a lot of hard work here; I posed a few of the technical challenges, and there are a lot more.
And it's something that my team, Amit, myself, and others are currently trying to figure out:
how do we actually come up with actionable telemetry for agentic AI
that can handle the elasticity of different agents being created and destroyed, and all that stuff?
That's another inconvenient truth, because no matter what anybody says right now,
like, oh, I have complete visibility into my environment: okay, you have visibility,
but we're talking about observability, right? And the reason is there's a key difference.
You may see all of the agents, but are you actually observing what they're doing?
There's a big difference, and that's what I'm getting at. That's the second
inconvenient truth: from visibility, get to observability.
Right. To your point earlier, the difference between visibility and observability is really
the context. Yeah, you can see that they're there and that they're doing things, but are they
doing the things that you want them to? Are they doing things that you don't want them to? That context-rich
insight is really the difference between the two terms, for sure.
Well, as we close out the conversation here, I'll ask both of you the same question.
I'll start with you, Amit.
What is the single most important message you want to leave with our listeners today?
I think my message would be that, you know, AI is coming, definitely.
Be responsible and deploy it with responsibility.
That's what I would say.
Otherwise, the consequences could be very, very dangerous.
Right.
Joe, what is your single most important message that you want to leave with everyone today?
We live in a scary world.
I think we need to take a deep breath and really reflect on all the decisions that are being made.
And I'm not talking about the greater, there's a lot of things happening.
I want to scope down to just the AI problem.
But I think it can relate to everything else as well.
Reflect on the decisions you're making.
There are a lot of doomsayers as well,
as there always have been with pretty much anything. That's unhelpful. When you're
constantly talking about how things are going to be destroyed, how jobs are going to be lost,
how the world's going to end, that's never helpful. Just remember that and reflect on it
and realize how we can leverage AI in a positive beneficial way. I want the benefits to be
realized from AI for the world. But I think security has a major position to live in within this,
because without us protecting critical infrastructure, hospitals, and things like that,
you still need humans to do this type of stuff. Because we're building these things.
We should be in the loop on these things, right?
It's like, I'll end it on this, right?
It's like organizing a party and no one inviting you to the party you just organized. Okay?
You're being left out of the loop, right?
No, you want to go to the party you organized,
right? It's the same thing with AIs. If we're building all these things, you should be very smart
in terms of looping in the human at different checkpoints to make important decisions. And that's the
message I want to leave on. I'm tired of the doomsayers. I'm tired of these apocalyptic
scenarios. There are always things going wrong, and there always will be. But there are also a lot of
positives at the end of it. So I'll leave it there. Right. Well, what a wonderful sentiment to leave us on.
Amit, Joe, thank you both for your time today.
It was really great having a conversation about this report.
For those listening in, we'll link to the report in the show notes.
And if you want to check out anything more about Rubrik Zero Labs, you can check out their website.
And thank you again for joining us, both Joe and Amit.
And until next time.
All right, thanks, Caleb.
Thank you, Caleb.
That's a wrap on today's episode of Data Security Decoded.
If you like what you heard today, please subscribe wherever you listen and leave us a review on Apple Podcasts or Spotify.
Your feedback really helps me understand what you want to hear more about. And if you want to reach out to me directly about the show, send me an email at
data-security-decoded@n2k.com. Thank you to Rubrik for sponsoring this podcast. The team at
N2K includes producer Liz Stokes and executive producer Jennifer Eiben, content strategy by Maya and
Plout, sound design by Elliott Peltzman, audio mixing by Elliott Peltzman and Tré Hester,
and video production support by Bridget Kreeky Wild and Sorrell Joppy. Until next time, stay resilient.
