CyberWire Daily - AI in the GRC: What's real, what's risky and what's next. [Special Edition]
Episode Date: November 30, 2025. Join us for a timely and insightful live discussion on the evolving role of artificial intelligence in governance, risk, and compliance. Host Dave Bittner from N2K | CyberWire is joined by Kayne McGladrey from Hyperproof, Matthew Cassidy, PMP, CISA from Grant Thornton (US), and Alam Ali from Hyperproof to explore the current state of artificial intelligence in governance, risk, and compliance. The panel will discuss what AI is truly doing well today, the risks and challenges organizations need to watch for, and how AI is poised to influence the future of GRC. They will also share practical insights and real-world guidance for teams looking to adopt AI responsibly and effectively. Don’t miss this timely conversation as our experts break down what’s real, what’s risky, and what’s next in AI for GRC. Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to the Cyberwire Network, powered by N2K.
Ever wished you could rebuild your network from scratch to make it more secure, scalable, and simple?
Meet Meter, the company reimagining enterprise networking from the ground up.
Meter builds full-stack, zero-trust networks, including hardware, firmware, and software,
all designed to work seamlessly together.
The result, fast, reliable, and secure connectivity
without the constant patching, vendor juggling, or hidden costs.
From wired and wireless to routing, switching, firewalls, DNS security, and VPN,
every layer is integrated and continuously protected in one unified platform.
And since it's delivered as one predictable monthly service,
you skip the heavy capital costs and endless upgrade cycles.
Meter even buys back your old infrastructure to make switching effortless.
Transform complexity into simplicity and give your team time to focus on what really matters,
helping your business and customers thrive.
Learn more and book your demo at meter.com slash cyberwire.
That's M-E-T-E-R dot com slash cyberwire.
Hey, everybody, Dave here.
Today, we're bringing you a conversation from a recent webinar where I sat down with
Kayne McGladrey and Alam Ali from Hyperproof, along with Matthew Cassidy from Grant Thornton.
We talked about how agentic artificial intelligence is starting to reshape governance, risk, and compliance.
Organizations are beginning to experiment with AI-driven agents, and there's a lot to navigate.
There are promising new opportunities and emerging risks.
We want to bring this discussion to the podcast so you can hear firsthand how these experts are thinking about the future of GRC in an AI-powered world.
Let's take a listen.
Well, welcome, everyone, and thank you for joining us here today for our webinar.
We are excited to share our information with you today.
My name is Dave Bittner, and I am the host of the CyberWire Daily podcast, and it's my distinct honor and pleasure to be the moderator of today's panel.
I want to introduce our panelists here today, but before I do, just a quick reminder of our topic.
We're talking about agentic AI and GRC, governance, risk, and compliance, and this kind of brave new world that we find ourselves in when it comes to the blending of those two things.
Let me introduce our panel here today, beginning with Matthew Cassidy.
Matt is a partner for Risk Advisory with Grant Thornton Advisors.
We've got, I'm sorry, we've got Kayne.
Easy for me to say, right?
I kept wanting to say Kanye, but I knew that wasn't right.
It's Kayne McGladrey.
Yeah, no, I bet you get that a lot.
Kayne McGladrey, who's CISO in Residence at Hyperproof,
and then Alam Ali, who's Senior Vice President for Product Management at Hyperproof.
Before we dig in here, can I just go around the table and get each of you to do a little
brief introduction of yourselves for our audience?
Give us an idea of how you got your start in this.
business and what led you to where you are today. Matt, let me start with you. Yeah,
thanks, Dave. Really appreciate you hosting us today. And so, you know, I've been a consultant
my entire career. I really got into it from an IT audit standpoint. I was an MIS major in college.
So that kind of naturally led into some of the IT audit. And really what we're seeing, you know,
from IT audit to AI is that there's so many risks that are sitting within AI, you know, data
risks, business risks, all sorts of things. And really the best place to do that is, you know, with
the IT audit role. So I'm leading the charge for the firm around the governance aspect. We've got
other teams that do, you know, it's more of the programming things, but I'm really focused on
the governance aspect. All right. Kayne, how about you? Fantastic. Thanks, Dave. So I was a theater
kid who transitioned into cybersecurity because that is a natural transition to get into government
consulting as your first real adult job. Since then, I have done executive
advisory work on three continents. I work with the IEEE as a spokesperson on cybersecurity and as
the CISO in Residence at Hyperproof. The reason I came over was a lot of the clients that I was
advising had a real problem of measuring, like, are our controls actually reducing risk? Is the budget
we're spending as a CISO good value for money? And when I was a CISO at a defense industrial
based company that was audited, oh shoot, sometimes weekly, multiple times a week, super fun. We
had to figure out how do we effectively manage this so that we can communicate the business value
of our security stance. And when my buddy Matt said, hey, we're going to make a company that
will make it easier for companies to represent their security stance and to be able to better
understand it themselves internally and communicate that to their board, that company is hyperproof,
still here after three years, still absolutely loving it. And super excited about today's
conversation about AI. I just talked about this in Nashville at ISC2 last week. A lot
of interest there in doing this responsibly and effectively.
All right.
Terrific.
And, Alam, you're bringing up the rear here.
Last but not least, tell us a bit about yourself.
So, Alam Ali, I've been in sort of software, large-scale SaaS software development for a long
time.
I spent a long time at Microsoft, you know, building everything from large-scale database systems
to SaaS products at Microsoft, and then moved
to other companies in the industry
on various types of business workflow-focused software.
So in the GRC space,
I'm trying to bring to bear all of the things that I've learned.
I've been in machine learning and artificial intelligence
since '97 at Microsoft or so.
Bring to bear, how do we really improve,
as Kane was mentioning,
the automation, the life, the time, the money,
the toil spent in the GRC process overall end-to-end.
I think with the latest tech,
we have like significant opportunity
to make real leapfrogs
in how we save time, money, and toil
across the product.
That's why I'm super excited about this.
All right.
Well, we've got a lot to talk about,
and for the folks in our audience here today,
we are going to set aside some time for your questions.
If you look in the BrightTALK interface here,
over on the right-hand side, you'll see there's a little tab button there for your questions to go in.
So if you have a question that comes up along the way, please don't hesitate to fill that in there.
And then we may even have a poll or two as we make our way through today's presentation as well.
All right, well, let's dive in here. Matt, let me start with you.
I think it's fair to say that we are seeing AI spread across the GRC landscape.
What sort of things are you seeing with your colleagues at Grant Thornton?
Where's it being used?
So, yeah, we're trying to really use it everywhere, Dave,
but really taking kind of a methodical approach.
At this point in time,
we're really seeing it in a couple places being really, really effective.
So, you know, the first really being as kind of an assistant, right?
So, you know, in consulting, we've got all different levels,
all different experience,
and trying to take a lot of that experience
from around the firm, centralize it so that, you know, we can build like a small language model
with our own proprietary data and really put that in the hands of everybody so that they can
access it. No more sending RCMs via email, you know, dropping it into a chat so that somebody
can share it, knowing who has the right level of data, things like that. So we're really trying to
see that. There's a bunch of other different use cases as an assistant. That's probably one of the
biggest things that we're using it for at the moment. We're also, you know, oh, go ahead, Dave.
Oh, I was just going to say what, what are some of the challenges that you've faced along the
way here? Have there been any roadblocks or speed bumps? You know, I think that in that regard,
it's managing some expectations on, you know, what, what AI is, what it is not. And ensuring that,
you know, we don't set the expectation that it's going to do everybody's job, 100%
accurately. It's really around, you know, what can we use it for? What have we tested it for?
What have we proven out? You know, that's kind of the main challenge. You know, there's other
security risks out there, right? Access to data. But a lot of those security risks, we've already
seen, right, when we were looking at things like, you know, bots, when bots became a big thing
in business, it was, you know, what access does it have, right? Bot governance came up. So we're
really seeing, you know, lessons learned from that type of technology and trying to apply
it to AI, because we know if we give it access to data, it's going to use it even more than
the bots did. So we're really trying to manage some of that expectation. Yeah, that's a really
interesting insight. Kane, I'm curious for your comments on that, particularly about sort of the
pre-existing infrastructure or lessons learned for security professionals and how you choose
what previous knowledge you can apply to these AI challenges?
Yeah, I think one of the most interesting conversations I had,
I was down in Nashville last week for ISC2 Congress.
And it was an individual who was concerned that,
look, if we open up our logging of all of our stuff,
of all of our control operation to AI,
aren't we going to get more exceptions
and aren't we going to potentially get more findings?
Because in theory, instead of doing
sampling or just choosing a subset of the information to act on, you could say, well, let's just have continuous control observation or continuous monitoring, which I think is this Nirvana state that a lot of, you know, GRC professionals have tried to get towards.
But I think ultimately, if you're able to proactively, before you are audited, have a view of all of how your controls have been working over a duration of time, if you start to see control deficiencies, like maybe, you know,
access management, that's a popular thing lately, if you were to have a continuous failure observed in your access management, and that was surfaced by an AI, it'd be a lot better to know about that sooner rather than the old way of how we did auditing, which is you waited 365 days, panicked, realized you had to put together the evidence to satisfy the requirements of the audit, and then you find out that you have a problem. And so I think the advantage of being able to use, at least from an internal
perspective, to have continuous observability of how well your controls are reducing
risk gives you not only confidence when you're getting ready to talk to an auditor,
which, I mean, I'm not going to say they're scary people.
They are nice people, but sometimes people get a little anxious before they talk to their
auditor about things, but also it gives you leverage with your vendors, because if you find
out that, hey, your access control management system in this example is failing on a periodic
basis. If I'm a CISO, I'm going to go have a conversation with them and say,
look, either you can give us some professional services, or when your renewal comes up for
maintenance, we'll go find a different vendor. We will rip and replace the solution.
And it's that ability to have facts rather than opinions. And previously when we've tried to do
this, the challenge is staff time more than anything else. Every time we've had to go ask
like the SEC ops team, hey, can you show me the evidence that the thing you're doing for a living
is doing the thing that it's supposed to be doing?
I mean, it's, first of all, kind of a bit of an off-putting question
from second line of defense to first line of defense.
But also, SECOPS teams just don't have the time.
And so anything we can do to automate that evidence collection,
which we've done before we had agentic AI,
before we had AI, and it's often just an API call.
But the ability that we can have now,
when we're preparing for audits,
is take all of those pieces of evidence from disparate sources
and put them together in a way,
that's fairly friendly for auditors to be able to consume and actually allows them to focus on
strategic matters rather than asking, hey, can I have another screenshot, which I don't think
Matt ever gets excited about chasing people for screenshots, right?
Never.
Yeah.
I mean, like, wasting people's time is something that we hate as well.
But, you know, we need to get that, right?
Because we can't forget about our basic audit principles around, you know, objectivity: you know, the
source that we got it from, is it objective? Did we get the right thing? Is it complete
and accurate, right? Did we gather the whole population from whatever the report is that's coming
out of the system. And then also when we're using the AI, making sure that's auditable as well, right?
It seems a little recursive to a degree, but keeping those principles in mind, especially around
time-saving, you know, I think there's a good, there's a good mesh there.
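To make that concrete, here is a minimal sketch of the kind of automated evidence collection Kayne describes, an API call wrapped with the metadata Matt's audit principles call for. The endpoint, token, and control ID are hypothetical placeholders, not any specific vendor's API.

```python
# Minimal sketch of automated evidence collection. The endpoint, field
# names, and control ID below are hypothetical placeholders, not any
# real vendor's API.
import hashlib
import json
from datetime import datetime, timezone

import requests


def collect_evidence(api_base: str, token: str, control_id: str) -> dict:
    """Pull a system report (e.g., a current user list) and wrap it with
    the metadata auditors care about: source, time of collection, record
    count for completeness, and a hash as tamper evidence."""
    resp = requests.get(
        f"{api_base}/users",  # hypothetical endpoint for a user-access report
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    payload = resp.json()

    return {
        "control_id": control_id,
        "source": f"{api_base}/users",
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "record_count": len(payload),  # supports completeness checks
        "sha256": hashlib.sha256(resp.content).hexdigest(),  # tamper evidence
        "data": payload,
    }


if __name__ == "__main__":
    evidence = collect_evidence("https://idp.example.com/api", "TOKEN", "AC-02")
    print(json.dumps({k: v for k, v in evidence.items() if k != "data"}, indent=2))
```

The record count and hash are what let a reviewer answer Matt's questions, is it complete and accurate, and did it come from an objective source, without chasing screenshots.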
Alam, let me bring you into the conversation here. What are your insights when it comes to
these adoptions into people's workflows?
Any particular places where you're seeing success and challenges?
Well, the area that we are seeing that companies really want to adopt AI is around control
automation, right?
This is sort of the largest area of time and money that folks are putting into trying to automate it.
And what really folks are wanting is the end-to-end automation, right?
To automate a control, there are probably about 15 to 20 different steps that are required.
So almost every single customer that I talk with today, like every single day,
is asking, how can you help me automate my control end-to-end?
And they're really evaluating which pieces of that automation flow are we able to actually automate.
And it's like a very serious business question that teams are asking of us today.
All right.
Well, I tell you what, I think this is a good time to go to a poll question here.
For our listeners, you'll see there's a little tab there on the right-hand side of your interface that's labeled polls.
Let me see here.
Yeah, and this just gets us a better read of the room here, Dave, more than anything else,
to understand who's here and how far they've gone down this particular rabbit hole.
Because what I've heard, and I don't mean to put my thumb on the scale,
as people start voting for how far they are in evaluating this,
what I'm seeing is that companies in highly regulated or fairly sensitive industries
are both wanting to and also less likely to have tried anything to do with AI and GRC.
And the concern there is ultimately the risk that the system produces an erroneous output that then produces a regulatory or a litigation or an otherwise unwanted outcome, like, hey, cool, the AI made a mistake and now everybody's at fault, and panic, panic, panic, fines.
Nobody wants that.
And I think that's what's really driving a lot of conversations right now.
If you're in a less regulated industry, it's probably very easy to adopt it.
But you also don't see the same value out of that as you would if you had
the ability to reduce the amount of staff time and effort, people's time that they're just doing toil to prepare for audits.
All right. Well, we've got some good responses here on our poll question. I'm going to leave it open for just about another minute or so while these registrants continue to vote.
Interesting to see the distribution here. Again, the question is, have you tried an AI-powered GRC solution yet?
I'll tell you what, let me go ahead and end the poll here.
And what do you guys make of these results?
I mean, it's sort of definitely a bump in the middle there.
People are piloting.
So let me read the numbers out.
So 30% say they're currently piloting or testing AI-powered solutions.
30% say no, but they're evaluating solutions.
And 36% say no, but they're planning to explore in the next 12 months.
So nobody said yes, we're all in, and nobody said no, never. Is this about what you
would expect from this sort of poll, Kayne? I think so. I think this mirrors where the security
community and the audit community really are, and we realize, hey, that's a technology that could
solve some problems, but it also has the risk of creating some problems. And so this absolutely
matches what my expectations are,
and it really comes to a question of, like,
how are the GRC vendors, including Hyperproof,
how well are we meeting the needs of the market
and the expectations of the market
as companies move into GRC?
And we definitely were not the,
we were intentionally not the first mover
because we didn't want to be like,
hey, this looks like something everybody else is doing,
so let's just do it poorly.
We had to take a more thoughtful approach,
knowing, again, that the concern is that if an AI makes a mistake,
the people who sign off on that mistake, your CEO, your CFO, and so forth,
they're going to be the people who are going to carry liability for it,
and we owe it to them to not make those mistakes.
Matt, let me touch on that point with you.
I mean, I think everybody who works with AI regularly knows the potential for hallucinations,
and we've seen cases with lawyers being taken to task for going in front of a judge
with made-up references and previous case law.
Where do we stand with the ability to integrate AI into GRC,
but having that sitting on your shoulder,
that potential peril?
Yeah, and, you know, there's a lot of different aspects in that, Dave,
and, you know, kind of leveraging the poll.
I would say that probably a lot of those hold-ups
are the governance process within the organization.
Right? Understanding the risks at an organizational level and then, you know, looking at the tools and applying some of those risks, but making it a process that's kind of, you know, streamlined and within risk appetite. But, you know, I think also within the industry, right, there have to be certain things that are being done, right? So not only are you aligning things to risk appetite, but you're making sure that it's referenceable, right? There has to be a reference. It can't just say,
that, you know, I pulled this and this is the answer.
There has to be references on that.
That's probably the most important.
I think a lot of, like, these large language models,
a lot of their data comes from sources like Reddit,
which, love Reddit, but there's way too much junk on there.
That could cause a hallucination.
And then, you know, I would say probably, you know,
the last thing is that humans, right?
We still need some of that human intervention in it
to look at some of the concepts.
And going back to the theme of a little bit of an assistant now, right, there still needs to be that critical analysis for things that, you know, may not be native to machines, like, you know, ethics, and, you know, making decisions that are not necessarily, you know, on either end of the spectrum, but again, somewhere in the middle.
So there's definitely that human aspect that needs to be involved, too.
Yeah, I've found it helpful just for myself to think of some of these LLMs as being a tireless intern, in that I can set them on
any task and they will do it quickly and with as much depth as I request, but I'm also not going
to bet the company on an intern. That's how I've come at it. Does that make sense to you?
That's a really good way to look at it. There's always the trust but verify, right? We got that
a long time ago. And that's absolutely the case where it has to be referenceable, right? It can't just
come out of thin air. It's got to be referenced. Well, then you've got to check that reference. That
person needs to check that reference.
Again, just because you publish a book, you publish a website, you know, there still needs
to be that critical eye on it, especially this day and age when there's so much information
out there.
Yeah.
Go ahead.
Yeah, something else I was going to say is I have audited companies before.
And one of the biggest challenges that I'd found, and this was pre-AI, was you'd ask
a company, hey, show us evidence that you've got a policy document, your written information
security policy, for example,
and that you've got controls that are operating on that.
And depending on the maturity of the client
and depending on how far along in their cybersecurity journey
they were, you'd get answers immediately.
Or you'd get answers a week later going,
we're not sure what you mean or we can't find it.
And so something I think that is a potential force accelerator
is that if you think about it,
a policy document could be on maybe a SharePoint site.
It could be on a Confluence site.
It could be on your intranet.
Who really knows where these things are stored
unless you've got good document management?
So if you can imagine that you had an AI
that was able to say,
cool, so here's all of our possible document repositories
where stuff is going to live.
Go figure out what this question means
and go find us something that actually matters.
That's going to make it faster to respond to an audit,
but then if you can tie it to a specific control phrase
or control language requirement inside of that policy document,
that's going to make it easier for the auditor
to have something consumable,
but it's also easier for a human to consume and go,
hey, this actually is what the auditor had been requesting.
That's just a simple thing.
Like, hey, do you have a policy that says, whatever,
maybe people have to use multi-factor authentication?
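Before the harder example, here's a toy sketch of that repository sweep: search every known document store for policy text matching a control requirement. The repository contents and keyword matching are stand-ins; a real system would use each platform's search API and likely semantic matching rather than plain keywords.

```python
# Toy sketch of sweeping known document repositories for policy text that
# matches a control requirement. Repository contents and keyword matching
# are illustrative stand-ins for real platform search APIs.
REPOSITORIES = {
    "sharepoint": ["Acceptable Use Policy: staff must use MFA for remote access."],
    "confluence": ["Onboarding guide: badge photos, parking, coffee machine."],
    "intranet":   ["Information Security Policy v3: MFA is required for all admins."],
}


def find_policy_evidence(requirement_keywords: list[str]) -> list[tuple[str, str]]:
    """Return (repository, matching passage) pairs for an audit request."""
    hits = []
    for repo, documents in REPOSITORIES.items():
        for doc in documents:
            if all(k.lower() in doc.lower() for k in requirement_keywords):
                hits.append((repo, doc))
    return hits


if __name__ == "__main__":
    for repo, passage in find_policy_evidence(["MFA"]):
        print(f"{repo}: {passage}")  # passages an auditor could consume directly
```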
If you get into a harder example,
that one I'm sure you probably see a lot of is change management, for example.
That is non-trivial, right?
You have to bin together a whole bunch of different data sources.
You have to start at your change management system.
You've probably got some sort of ticketing
system that says, hey, you know, we're making a change and you have to, that change has to be
approved. And then it has to go into some kind of source code repository. You have to actually
verify, did the change happen? Was the change approved? And then how did it get into your DevOps
pipeline? Was there a tool like Jenkins that you could go interrogate and say, did it actually
make that change? How do we correlate that all effectively? And right now, the state of play is,
we have people doing that. And their job is just as fun as that sounds, which is to say it's not
fun. They have to correlate all of that information and send it to auditors in a format that
they can understand and consume, or maybe they have to as part of an insurance package,
as part of their insurance renewal. They say, hey, we want to be able to prove that we do this
so we get a more favorable rate. Or if you're dealing with incident response and you're dealing
with a breach and your insurance investigator has come and asked. So how are we all doing that
beforehand, before the thing happened? Again, there's this whole bunch of manual toil.
And if nothing else, AI is really good at that thoughtless, connect-these-things-together work, so that people can then start to focus on more strategic questions like: the way we're doing change management, does that make sense? Is there any way we could actually improve that? Are there any gaps? I'm sure later when we talk about some of Hyperproof's capabilities, we'll talk about that ability to do additional gap analysis to determine, are our controls being reasonably implemented, and are they actually understandable?
Are they producing meaningful value, or are there additional things?
But right now, these are very hard day-to-day challenges that folks in GRC work with.
And I think, ultimately, the ability to change from that manual process to something more automated,
where it's the manual tasks, the tasks that folks just dread doing, I think that's going to actually allow internal teams to be able to focus more on strategic matters rather than on, well,
I have to do the work, and the work sucks, and I've got to do it anyway, because reasons,
and this is the fifth time I've produced this evidentiary package for an auditor this week,
which we've heard about.
I think we can move away from that so folks can actually do what they are good at,
which is thinking about things in strategic terms where maybe an AI doesn't have context,
or maybe there's historical context, or maybe, you know, what fits in your head as an analyst
is way more than the token limit for any of the modern AI solutions right now.
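As a concrete rendering of the change-management example above, here's a minimal sketch that joins a change ticket, the commit that implemented it, and the CI build that shipped it, then flags the exceptions an auditor would care about. All three data sources are stubbed with plain dictionaries; in practice each would come from a ticketing, source-control, and CI API.

```python
# Illustrative sketch of correlating change-management evidence across a
# ticketing system, a source repository, and a CI pipeline. All data is
# stubbed; real systems would be queried through their APIs.
from dataclasses import dataclass


@dataclass
class ChangeRecord:
    ticket_id: str
    approved: bool
    commit_sha: str | None
    build_passed: bool | None

    def exceptions(self) -> list[str]:
        """Return the control failures an auditor would flag."""
        issues = []
        if not self.approved:
            issues.append("change deployed without approval")
        if self.commit_sha is None:
            issues.append("no commit linked to ticket")
        if self.build_passed is False:
            issues.append("deployed from a failing build")
        return issues


def correlate(tickets, commits, builds) -> list[ChangeRecord]:
    """Join the three evidence sources on ticket ID and commit SHA."""
    records = []
    for t in tickets:
        sha = commits.get(t["id"])  # ticket ID -> commit SHA
        records.append(ChangeRecord(
            ticket_id=t["id"],
            approved=t["approved"],
            commit_sha=sha,
            build_passed=builds.get(sha) if sha else None,
        ))
    return records


if __name__ == "__main__":
    tickets = [{"id": "CHG-101", "approved": True}, {"id": "CHG-102", "approved": False}]
    commits = {"CHG-101": "a1b2c3"}  # stub: ticket -> commit
    builds = {"a1b2c3": True}        # stub: commit -> CI result
    for rec in correlate(tickets, commits, builds):
        print(rec.ticket_id, rec.exceptions() or "clean")
```

Today, as Kayne says, a person does this join by hand; the point of automating it is that exceptions surface continuously instead of at audit time.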
We will be right back after this quick word from our sponsor.
Matt, I'm curious about people properly setting their expectations here.
I mean, as Kayne points out, you know, you can set the AI on the drudge work, but
I think a lot of folks make the mistake of thinking that, with that initial first round of output
that they get from their LLM, let's say, we're done here.
But as we've talked about, that is not the case.
So the notion of how much time and energy you may save, you have to properly calibrate that, right?
A hundred percent.
You know, you really have to understand, right, the use case, right?
We look at use cases, whether we're creating an application or we're training a new process
or using a new tool, right?
It's the use case, right?
And testing that use case and making sure that it works and the information is reliable
and things like that, right?
We're not just going to load all our problems, you know, into the system and it's going to solve it.
I think that, you know, when Kayne was talking about, you know, like,
you know, ADO and Git and all the change management stuff, that stuff is such a
knot that you could spend hours and hours and hours on it.
And sometimes auditors and developers, they're saying different things while they're
actually saying the same thing.
And being able to use the AI to take all the information and connect it, that's a really
good use case.
But again, there's some other use cases that would need to be evaluated, you know, to figure
it out because, you know, it can do a lot of high-level things, it can do a lot of low-level
things, but really what's in the middle that's a good use case that's been tested and
approved and referenceable and auditable and all those fun concepts that we have to go through.
Yeah, let's continue down that path of use cases.
I mean, what are the things in your experience, those of you who have been working with this,
what are the places where it is most effective?
Alam, you looked like you had something to say.
You were nodding enthusiastically there.
At my previous comment about setting your expectations.
Any additional insights?
Yeah, 100%.
So, I'll talk about, I was nodding vehemently because this is how we think about it when I'm designing product: I'm trying to understand that deep use case of a person trying to accomplish a task, right?
So, for example, I'm trying to do a gap analysis between my controls that I have today and what proofs are missing.
That's like a real concrete task and a use case.
So then I'm going to look at, okay, how long does it typically take me to do that task?
Now, it might take me several days to sometimes several weeks, depending upon the complexity
and the number of controls.
So I think about, hey, I'm not just going to go to an LLM and say, hey, magically tell me
where all my gaps are, and then it's going to spit out one answer and it's like, ah,
ta-da, right, I have all my gaps.
That's not how I think about it.
I'm thinking about how do I enable that person, that human in the middle, with tools that
make suggestions, that they can have a conversation with, that decrease those many days,
like, say it takes 10 days, can I decrease that to five days or maybe one day?
I'm not trying to eliminate it to a minute, right?
Because if I go to my customers, I say, look, I'm going to give you tools.
You know, you've got a hammer;
I'm going to give you an air hammer
that's going to allow you to increase the throughput.
So I always encourage
when we're building software, it's not magic.
I'm trying to enable those professionals
to save that time in a very specific,
like Matt said, a use case-focused way.
Well, I mean, let's dig into some of the specifics here
at Hyperproof.
Alam, what sort of things are you all doing today
that enable your
users to use AI for GRC?
Yeah, so I'll continue on this thinking of like how we're thinking about infusing AI through the product, right?
So as we're, first of all, when we're thinking about it, I'll reinforce that we do not believe that we should just simply go to an LLM and say, hey, what are all the controls and the proofs that I need
today. And ta-da, magically, it's going to figure it out. Hallucination problems, all kinds of
things. We're never doing that. We're never going to rely on a generic foundation model to solve that
problem. What we are going to do is take those foundation models. We're going to constrain how we
think about the tasks to be done. Like, oh, I have a gap analysis task to be done today.
Oh, I need to create a test for this specific control. Help me figure out what the tests are.
Like, help me, the GRC professional, create the test.
Don't automatically create it for me and run it and magic happens.
So we are thinking about that.
Just connection to go get data is a huge problem, right?
I have to do this integration phase.
How do I connect to all these different data sources?
The first problem I have is what question do I ask to whom to go get access to that data source?
That's like the first basic question.
Which LLM can help me do that today?
Zero LLMs can help me do that today.
However, how we're thinking about it at Hyperproof is that, well, I have some LLMs
that can help me construct an email or understand and analyze who should I talk to in my
IT department that could answer this question.
And by the way, how do I frame the question to that person, right, to give me the right
answer? I'm in the GRC world. I've got super technical people in the IT admin world. We have a
language barrier. I don't even know the questions to ask them. Using a constrained LLM can help me
achieve those tasks. So that's the philosophy we're looking at. And then where we're looking
is like every part of the GRC process, right? We're looking at how can we improve that from like
the discovery phase, right, of, like, how do I understand
what's happening in my world?
What are all the controls I have?
What are the frameworks?
What are the status of all of those pieces, right?
To the gap analysis that I mentioned, which is like,
how can you advise me on where pieces are going wrong
that then now I can drill in with the AI tool sets
and go do specific tasks all the way to like,
okay, now can you automate that task?
Then I'll say, okay, so I've had good conversations with my assistant, right?
And I say, okay, you know, I have created a test, right, for this proof as an example.
And now I'd like to automate running of that test, right, on a regular basis, and then alert me when something happens.
And I can have a conversation around that.
That's an example of sort of automation throughout the process.
So I'll just pause there.
That's how we're really thinking about infusing AI.
It's not like, oh, ask a foundation model how to solve all my problems.
It's never, ever that.
Can I add on to Alam with one other point here?
Yeah.
Okay, so I think the other thing we've done that I'm so proud of and also is so difficult is we hear this, this phrase, human in the loop.
And that's very popular in AI, and it's meant to decrease the idea of, oh, you know, the LLM said this was the right answer.
Cool.
So we should put rocks on pizza.
I think, or was it glue?
I can't remember.
It was probably, yeah.
Sounds delicious.
So I think when we're working with the GRC,
in any of those use cases that Alam has just mentioned,
like, hey, can you build me a test and, hey, can you automate that test?
And, hey, can you show me the outcome of that test as I prepare for a better audit
so that I can have a better interaction with my auditor?
When we're talking to AIs, something we thought about is,
well, the standard thing that everybody does in AI
is you log all the inputs and you log all the outputs, right?
Cool.
So now if you follow any of the new AI regulatory changes and laws that are coming in place,
like the EU AI Act, Colorado, or things that are happening in California, we said, wait a second.
So if we are going to produce a log of everything that the AI did, it's going to make it really hard; now we've actually added work for the GRC folks who are trying to make their lives better, because now they have to go review the audit logs of what the AI actually did, which doesn't make the world a better place.
Actually, that just creates more work, and it's kind of an anti-pattern that we wanted to avoid.
And so when we were building this, we were very intentional and said, look, like, let's have a human approve everything.
Let's have a human say, yeah, actually go do that.
So the AI suggests a test.
Cool.
It's suggested the test.
It waits for permission.
It's not going to go 17 steps down deep into automating something as a black box, as magic, and then say, look, I'm done because
that's not inspectable.
That's not something we have a high degree of confidence in happening.
And certainly we wouldn't expect folks to say, yeah, it seems great.
So as we work with the AI, folks have to approve and say, yeah, go do that, build that test,
collect that evidence, show me how that works so that it's more of a partnership.
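A minimal sketch of that approval gate, assuming a hypothetical Suggestion record; the one rule it encodes is that nothing executes without a named human sign-off.

```python
# Minimal sketch of a human-in-the-loop approval gate: the AI may only
# suggest an action, and nothing runs until a named human approves it.
# The Suggestion shape is hypothetical, not Hyperproof's implementation.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Suggestion:
    description: str
    approved_by: str | None = None
    approved_at: str | None = None

    def approve(self, user: str) -> None:
        self.approved_by = user
        self.approved_at = datetime.now(timezone.utc).isoformat()


def execute(suggestion: Suggestion) -> None:
    # Hard gate: refuse to act on anything a human hasn't signed off on.
    if suggestion.approved_by is None:
        raise PermissionError("no human approval on record; refusing to run")
    print(f"Running '{suggestion.description}' (approved by {suggestion.approved_by})")


if __name__ == "__main__":
    s = Suggestion("collect MFA evidence from the identity provider nightly")
    # Calling execute(s) here would raise PermissionError: no approval yet.
    s.approve("alice@example.com")
    execute(s)  # now permitted, with an approval trail attached
```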
But the other thing there is that that's allowing the test, which could take an individual person
hours to set up, to go figure out.
That's an immediate force accelerator for them where they're not having to spend that
time to go learn how do I collect this evidence? How do I collect this test? How do I validate
that this is accurate? Is that about a fair way of phrasing how we're thinking about it? It's
certainly not the easiest way to do software development. No, it's exactly right. I also want to
add on that, you know, in this conversation, we're talking about the AI, right? To me, it's like
saying, oh, we're going to ask the drop down to go do something, right? To me, these AI,
they're just a set of tools, right? We employ the tools to accomplish a task. So the way that we
have to design, and whenever I look at these solutions in the market, the first thing I think to
myself is like, are you trying to create magic for me? Are you trying to sell me magic? Oh, our solution
is not going to hallucinate because of X, Y, Z,
but I'm going to try to automate all these things for you, right?
I'm like, okay, I'm going to probably stay away from that.
And I'm going to more focus on, like,
how are the tool sets going to help my teams,
the GRC professional team, to do the job better?
So, for example, there's a question around,
oh, I want to have an audit log of what the AI is doing.
Okay, and I thought about
that. And I'm like, well, let's see. The audit log of everything that the AI is doing is actually
what the human has told the AI to go do. So, like, in our design, we have an audit log, right?
And it's like, okay, then you log: which is the user that did that thing? I'm finding
that in most of those logs, there's a human, with a name, that said, yes, this human approved that thing
to go happen. And by the way, that human told this process, that happens to be an LLM-based
process, to go do its thing. And, by the way, here was the output. So the source,
the authorizing entity is most of the time the human. There are some few exceptions that I'm
finding. It's like, oh, it makes sense to start to automate these pieces. And we'll log those,
but the source of it is always the human in the audit logs. Matt, I'm curious about your take on that.
I mean, is it ultimately the human who signs off on these things who bears responsibility?
A hundred percent.
It has to be the human on that.
You know, they've got the experience, you know, the kind of ethical and moral guides on some of those things.
And they can really look at it from, you know, again, keep it in mind some of the audit things, right?
The objectivity of it, where it came from, you know.
We can use it a lot for completeness and accuracy, talking about populations,
but again, there's still some variants there.
So, yeah, it still has to be the human in the loop on that.
And having the audit log that says that is even better, Alam.
I love that feature.
That's great.
Yeah.
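For readers who want to picture that log, here is a sketch of the audit-record shape Alam describes, where the human approver is always recorded as the source. Field names are illustrative, not Hyperproof's schema.

```python
# Sketch of an audit-log entry for an AI-assisted action. The human who
# authorized the action is always recorded as the source; the LLM-based
# process is just the actor. Field names are illustrative only.
import json
from datetime import datetime, timezone


def log_ai_action(authorized_by: str, instruction: str, action: str, output: str) -> str:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "authorized_by": authorized_by,  # the human is the authorizing entity
        "instruction": instruction,      # what the human told the process to do
        "actor": "llm-process",          # the LLM-based process that carried it out
        "action": action,
        "output_summary": output,
    }
    return json.dumps(entry)


if __name__ == "__main__":
    print(log_ai_action(
        authorized_by="alice@example.com",
        instruction="build a recurring test for the MFA policy control",
        action="created scheduled evidence-collection task",
        output="task created; runs nightly",
    ))
```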
All right, I tell you what, let's do another poll here.
This one is about your biggest concern with using AI in your GRC processes.
So we've got the poll up there live now.
So take a look, read those possible answers, and please weigh in.
Let us know what you think here, and we can discuss that.
While those answers are coming in, I'm curious, there are different types of AI, right?
Machine learning, LLMs, agentic AI.
What is the process?
What's the thought process of choosing which one is
best for any particular task? How do we go about deciding what we're going to turn loose on our
data and to what degree we give them autonomy? Is that for me? Why don't you kick us off there?
Yeah. So the first thing that I think about is we don't give them autonomy. There is always a human
in the middle.
But what we have found,
we started investigating with all kinds of these,
but we've been working in, like, machine learning-based models, right?
And using machine learning in various parts of our crosswalks
that we've had, our jumpstart features
that we've had in the product for a couple of years now.
So there's, like, machine-learned models.
Two, like, we've experimented with the LLMs
and applying, like, RAG models on top of that,
whether we're doing vector databases for search, right, all the way to these agents, right?
They're all different types of hammers, right, to do certain things.
What we've found lately, right, is the agentic model.
And an agentic model has essentially three components to it.
It's like the context or the prompt that you give it.
It's like, hey, here's a thing that I want you to go do.
Here's some context of how to go do it.
And then you give it like a set of tools.
It's like, here's a set of tools: databases that you can access,
There's some MCP servers you can go get data from.
You might be able to take some action.
Here are some APIs to go take actions on.
Those are the tools.
And then the brain is like, which LLM do you want to use?
Which foundation model do you want to use, right?
So those are the three components of what an agent is.
So we found that in the agentic model, it actually helps us move faster
and satisfy more business requirements in the GRC space.
For example, customers have this one thing that says, hey, don't train models with my data, basic thing, right?
Well, in the architecture that I just described of what a basic agent is, there's no training involved, right?
I give it context.
It gives me back output.
It's not using anything to train the models.
It's an input-output type of thing, right?
And it can go do certain things.
There are other technologies, RAG technologies, right, or fine-tuned models, that are using that.
Those are actually more expensive.
For me, as a software engineer and developer, they're very expensive.
They actually don't help me get to my customer need faster.
So at this moment, right, we're finding that the agentic technologies actually are faster.
They satisfy my business requirements and what my users want in terms of data protection.
So we're moving sort of very strongly in that route.
But to be able to feed the context.
Remember, I'm never using a world model to say, hey, do magic.
I have to scope that LLM.
What must it do?
So I have a lot of machine-learned models to scope:
what should that prompt be?
That prompt or context engineering is a lot of machine-learned models
that help me constrain and understand the context.
For example, when I'm doing gap analysis,
what should I be doing gap analysis on?
What am I telling the LLM?
How am I constraining what it should be going and looking at?
What's the world, right?
How do I test that?
So there's a lot of machine learning models
that are also testing the output of my agents.
So that's sort of the basic technologies
that we're looking at
that we think are the state of the art
of where things are at.
It's going to evolve, but that's where we're at today.
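A toy rendering of the three components Alam lists, a context, a set of tools, and a foundation-model "brain". The model callable here is a stub standing in for a real LLM client; the whole thing is a hedged illustration of the structure, not Hyperproof's architecture.

```python
# Toy sketch of the three agent components described above: a scoped
# context/prompt, a set of tools, and a foundation model as the "brain".
# The stub model stands in for a real LLM call; nothing here is trained.
from typing import Callable


class Agent:
    def __init__(self, context: str, tools: dict[str, Callable[[str], str]],
                 model: Callable[[str], str]):
        self.context = context  # component 1: scoped task description
        self.tools = tools      # component 2: data sources / APIs it may use
        self.model = model      # component 3: the foundation model ("brain")

    def run(self, task: str) -> str:
        # Ask the model which tool fits the task, constrained by the context.
        choice = self.model(f"{self.context}\nTask: {task}\nTools: {list(self.tools)}")
        tool = self.tools.get(choice.strip())
        if tool is None:
            return f"model proposed unknown tool '{choice}'; flag for human review"
        return tool(task)  # input in, output back; no training on customer data


if __name__ == "__main__":
    stub_model = lambda prompt: "fetch_evidence"  # stand-in for an LLM client
    agent = Agent(
        context="You do gap analysis on access-control evidence only.",
        tools={"fetch_evidence": lambda t: f"evidence fetched for: {t}"},
        model=stub_model,
    )
    print(agent.run("check MFA coverage for admins"))
```

Note how the context and the fixed tool list are what constrain the model, which is the scoping Alam emphasizes over asking a raw model to do magic.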
Yeah.
All right, well, I tell you what,
let's take a look at the results of this poll,
And I think it's pretty clear that the dominant thought here from the folks in our audience is they're concerned with data privacy and security.
Does that strike any of you as being not in alignment with where you thought this would go?
No, that 100% makes sense.
And I want to rotate on that and maybe look at Matt and see if he's got an analogous story.
But one of the things we've had to be fairly thoughtful about doing, again, I used to audit companies.
And I remember I was auditing a company that built their own GRC platform.
And they had an engineer build their own GRC capabilities to do their management reporting and so on and so forth.
And they had a variety of controls.
And this one control always crushed it, just killed it absolutely operating perfectly.
Some of the other ones, maybe a little not so good.
Some of the other ones, who knows?
Well, when we got into the audit, we found out that the engineer who wrote it, their bonus was tied to the control that was always killing it,
crushing it, and it wasn't actually being accurately reported.
And that's really influenced some of our thinking about when you have an AI taking a system
output, maybe it's getting a list of users, right?
We have to be able to make sure that that AI hasn't done something untoward or
hallucinated or made an interpretation that is not correct, because then that imperils folks
like Matt, who have to vouch that any outputs from a system actually have not been tampered with.
Because, again, that defeats the purpose of being audited.
So, you know, we've had to be very careful and intentional, as Alam was saying, about making sure that we've got this in as transparent way as possible and that we're choosing the right hammer for the job.
Instead of introducing that data risk or that privacy risk or that security risk of, well, the AI is just making stuff up, because then you're going to actually get even worse audit outcomes, right?
Yeah.
And I love that story.
And, you know, what I was thinking about when I see this, I think that if you peeled back the layers
of data privacy and security concerns, it would also probably be they don't want to make
headline news, right?
You don't want to be the first mover on it to be a case study in one of these types of
presentations to say, this is a bad idea, don't do this.
So there's a little bit of hesitation, you know, especially social media, everything that's
out there.
they don't want to do that.
So I think that, you know, data privacy and possibly security is maybe the blanket on top of the issue.
But it's really forcing, you know, people to not make a mistake in the public eye.
That's why we always, when we're talking to clients about, you know, are you going to use this with customer interaction?
Are we going to use this in the background on maybe, you know, saving costs, just like, you know, Hyperproof?
It's always, well, right now our risk tolerance is, let's use this internally.
Let's see what we can get out of it.
Let's see how we can reduce costs with the actual ROI on this,
as opposed to we're going to push this out to all our customers and see what happens.
Yeah, and I'll just double down on this.
I'm actually really happy to see the data privacy security as such a high percentage.
Because, like I was mentioning, the architecture that we chose, right,
this is the leading concern, right, to make sure that with the basic architecture,
we can preserve those data privacy and security concerns,
like, what do you do with my data also, right?
We can foundationally address those.
What do I mean?
From a software engineering perspective, I'm like, well,
if the tech can't do the thing, right?
Like it doesn't expose data privacy or security concerns.
That's the best way to address that.
So I'm going to start with that.
It's like, what's the tech
that just doesn't even expose
that, right? And then can I build, can I satisfy my business requirements from there? So that's why
the agentic route, right? There are still concerns, but of all the options that we've seen so far,
like starting with that, building from that is foundational. You have to build data privacy and
security, again, from software architecture. You can't put it in later. You have to start from
ground up, right? Every single element and every single layer of the architecture has to have this
from the start. I also want to come back to something that Matt raised, and it's one of the polling
responses as well, which is about where AI adds value in the context of ROI.
Dave, you've probably seen some of the studies that came out over the summer where, you know,
Doomers are saying AI doesn't add any value and all these projects fail, and it's a terrible
idea. And I think that in, at least in the GRC space, it depends on what you're measuring.
Because if you're thinking about it in, like, hey, we've got this agentic AI, what's it
do for us? What it's doing is it's reducing friction. If you have a SecOps team that has to go ask
your dev team, your DevOps team, your operations team, if you have auditors that are always
following up with them for evidence or saying, hey, show me how you log into a system, that's
taking their time away, those product engineers, those product designers. They don't get up in the
morning and get excited about sending folks evidence of control operation, that's not the way
their brains are wired.
And so if we have the capability to automatically collect that, automatically inspect it, do it
continuously, align it to the risks that the business is facing, that's time that those individuals
who honestly aren't that excited about auditing anyway, that's time they're getting back in their
day that they can spend doing the things that they love, which would be things like building better
products, building better user experiences. And if the SecOps team is able to collect
this at scale without necessarily adding headcount or without necessarily making their jobs
even more tedious, depending, then you can take all of that evidence and work with your auditors
to get attestations like an ISO attestation, for example, that show that you actually are doing this
better than your competitors in a similar market space. And I'm saying an ISO cert or
attestation, as opposed to a SOC 2 Type 2, because most folks have got one of those already,
and it becomes a competitive advantage now.
So not only do you have better proof that your company is meeting your regulatory or your
legal requirements, your contractual requirements of your suppliers and your customers,
but also you're reducing the internal friction spent proving that.
And now when we want to move into a new market, whether it's a new market vertical,
or whether it's a new geographical market, we can leverage
all those controls, all those learnings, all that continuous inspection,
and be able to move a lot faster and more intentionally than going,
well, after the fact, finding out from management,
hey, we're going to open a new office in some new region
that's got some cybersecurity requirements that we didn't think we had to deal with
and that weren't on your budget plan.
Good luck.
We get ahead of that a lot faster.
And I think that's going to be where we see the ROI.
It's not going to be, hey, we adopted agentic AI.
And suddenly, like, magic happened out of the GRC program.
It's a more holistic view of how a company is operating, and there are a lot of those opportunities for inspection of value.
Well, I want to get to some of the questions from the folks in our audience here today while we've still got some time.
Someone writes in and says, at the speed at which AI processes or does tasks, how efficient or possible is it to have a human in the loop?
Yeah, I can take that.
I was referring to this a little bit earlier, where if somebody promises you to do things
without the human in the loop, I'm wary of that.
So it's about the design of what are you trying to accomplish?
Like Matt was saying, it's like, what is the task, right?
What is the scope of thing to be done?
So if there is somebody out there that's saying, I'm going to do this magic for you,
and, like, whatever, however way they designed it, it doesn't involve the human,
that's problematic for me.
So we are not ever designing that.
So the basic thing, like I was saying, is the process of accomplishing a task,
gap analysis of controls to proofs.
It's not magic.
The human, we're assisting the person, right, to do those tasks.
There's a process we know.
It's well defined how to go do that task.
We're making every step of that task faster.
Right. But, yeah, if you encounter a solution, I think, you know, for the person that asked this,
where it's like, yeah, they're just going to magically do something, and you don't have the ability
to really have a say in that, and you don't feel comfortable in that, that's probably problematic.
That's a great insight. Right, what about accountability amongst teams? Right, let's say you
have a dozen people on my team, and everybody's using
some version of an AI to assist them to help them run more efficiently.
But then there's Bob.
And everything Bob does
takes him a minute to do.
He runs it through the LLM.
Bob's done, you know, wipes his hands and Bob spends the afternoon on the golf course.
Like, how do we ensure that there is accountability across the organization and a view into
individuals' processes to make sure that they're taking the proper
steps to do things right?
So I'll take a swing at that one first, but then also, I'm sure Matt and Alam might also
have some feelings. I'm not sure that's as true in GRC as in other areas of the business.
And the reason why, and my reading list is absolute fire, if you read the DOJ's
guidance, Department of Justice guidance on evaluation of corporate compliance programs,
they're assuming somebody, a person, not a robot, but a person actually
looked at this and they actually signed off on it. And then if you go to the DOJ's
sentencing guidelines for corporate crimes and, you know, if you are found to not
be in compliance, they assume that there was a person who looked at this stuff. And if they
didn't, that's actually considered in the sentencing guidelines if you have ignorance of how
your company is operating your compliance program. So I think in GRC, we are a little more
isolated from that. And often what we see is the inverse where an organization has not implemented
enough organizational change management, OCM. They've said, hey, we're rolling out this AI thing, it's
going to be great and everything's going to be so cool, and they just assume people get it. And then you find
Bob is actually the slow one. He's doing things the old way because he didn't know what the value was to
him as a professional. He might have said, well, I don't understand why I'm doing this, because the old way of,
like, let's see how a user logs in: well, I'm going to take my stopwatch out, and I'm going to hit the timer button
and see how long does it take for them to log in and how long does it take for the screensaver to lock?
And they just do those manual tests that don't make any sense in today's world.
I think that's the real risk that we have is that when companies choose to adopt these tools
without communicating the value and the advantage of how it'll allow GRC professionals to not do
lousy grunt work that nobody gets excited about.
And instead, they can spend their time thinking about like bigger, harder issues that agentic AI just can't solve.
Matt, what do you think on that one?
Yeah, yeah.
I think, you know, Bob's golf game might get a little bit better,
but he's not going to have a job for very long.
Because, you know, in kind of our view, there's always time for human review.
Now, again, being a consultant, right, we're, you know, we're a people business, right?
Dealing with people.
And, you know, to Kayne's point around, you know, OCM, right?
You can have a policy that says there has to be a human to review certain things, right?
We have that policy.
I'm sure a lot of people have that policy. But, you know, what are the mechanisms to ensure that
nothing, you know, hits a client desk or goes to, you know, a CEO or a CFO that's completely
fictitious, right? What are the controls that you actually have in place? And then again, also,
who's responsible for that? Is it, is it Bob who's going to be responsible for it? Is it,
you know, Matt, the partner who's taking the bullet for Bob, right? There's got to be, you know,
time, effort, controls, processes, all those fun things
for, you know, deliverables that are not necessarily.
But yeah, GRC is a little bit more black and white on some of these things.
But again, there still has to be, you know, that validity, that transparency, the resiliency, all built in.
All right.
Well, let's go to another question here from the audience.
Someone writes in and asks, do you find that many organizations get stuck at the process phase,
as many people don't have good documentation of the process?
I can't see how you automate a task if you haven't documented a clear
process, given that so many processes are in my head. Does this resonate?
Looks like you're all nodding your heads. That sounds very familiar. I'm not saying that we
should have runbooks, but I think that we should have at least some process documentation
before you start looking at it. And the good thing is, in compliance, there often is that level
once somebody's been through it. Like, you can get through one audit manually if you do it once a year. Once you
start getting into two, three, four, six, 12 multiple audits a year, suddenly you want to have
process documentation that makes sense. And the reason why is not because process documentation
is fun, it's because people would like to take holiday. And they don't want to be unable to
take holiday because it's audit season and because nobody wrote it down and it's stuck in their
head. So I'd say the thing to do there is, you know, as organizations mature, they start writing
that process documentation. And then as we start looking for those
optimizations, you go, cool. If we have a consistent way of doing this, this feels like something
where we could have an AI help us build a test. Let's have the AI help us build that test. And now the
process is a lot simpler. It's based on what we used to do, but an AI and automation is conducting
that for us. And we can focus on outcomes, which is what we were supposed to be focusing on,
instead of manual toil, which is what a lot of companies, especially those where it's all
in their heads and just, you know, maybe it's yellow stickies. That's the world
they get stuck in.
All right.
Well, quickly, before we have to wrap up here, Matt or Alam, any additional comments
on that one?
Kayne covered it.
All right.
Fair enough.
Fair enough.
I may add, just quickly, onto what Kayne was saying: yes, 100%, you have to
have the process.
The AI tool sets can
and should, by the way, help you clarify, refine,
and then establish your processes in the tool sets.
So, like, one of the key things that we're looking at, at Hyperproof,
is the advisor or the co-pilot that is looking at advising,
oh, here are the workflows you may need, right?
And then you can have a conversation around that.
You have the process in your mind.
How do you get that from your mind to the software?
So we're like, okay, the advisor is understanding that.
And then, like, trying to then create a workflow, right?
A workflow editor, how do you create that as a workflow in the software?
Now your process is documented and you can see it in the software.
That's how I think about when I'm looking at software design to help that very real ground truth problem.
Yeah, and I'd also say if any of this seems, you know, a little difficult, you want to see it actually running live.
And you're like, cool, that sounds neat, but what does it actually mean in practice?
Hyperproof.io, just go there.
We have early access now for our AI models available.
So if you want to get a demo and see what we're doing, just hyperproof.io.
I think it's right on the homepage about getting the demo of our AI capabilities.
All right.
Well, we are coming up to the top of the hour, which means the end of our time together here.
I want to thank our speakers here for joining us and sharing their information.
Kayne McGladrey, Matt Cassidy, and Alam Ali. Thank
you so much for taking the time. Interesting conversation. We could have gone another hour here
easily, but I appreciate you sharing your expertise and insights. A big thanks to everybody at
Hyperproof for making this possible and my colleagues over at N2K Cyberwire for their part as
well. Please do reach out if you have questions and thank you for joining us here today. We do
value your time and we appreciate you joining us. Do take care and we hope to see you back here
again soon. So long.
That was my conversation with Kayne McGladrey and Alam Ali from Hyperproof and Matthew Cassidy from Grant Thornton.
We covered how agentic AI is creating new possibilities for governance, risk, and compliance,
and the challenges that come with it as organizations begin to explore these tools.
We appreciate them taking the time to share their insights, and we thank you for joining us.
