LPRC - LPRC CrimScience Podcast Special – Mahesh the Geek Episode 2: Safer retail experiences through AI with Dr. Read Hayes
Episode Date: June 30, 2025
In the second episode of Mahesh the Geek, Mahesh is joined by Dr. Read Hayes, executive director of the Loss Prevention Research Council (LPRC), as they explore the evolution of loss prevention, emphasizing the importance of prevention over response in public safety. They discuss the integration of technology, such as AI and body-worn cameras, in enhancing crime detection and prevention. The dialogue also highlights the significance of collaboration between retailers and law enforcement, the challenges of data sharing, and the behavioral cues that can indicate potential criminal activity. Mahesh and Dr. Hayes also discuss insights into future trends in crime prevention and the role of technology in shaping these developments. Read Hayes, PhD, is a Research Scientist and Criminologist at the University of Florida, and Director of the LPRC. The LPRC includes 100 major retail corporations, multiple law enforcement agencies, trade associations and more than 170 protective solution/tech partners working together year-round in the field, in VR and in simulation labs with scientists and practitioners to increase people and place safety by reducing theft, fraud and violence. Dr. Hayes has authored four books and more than 320 journal and trade articles.
Transcript
Welcome back to episode two of Mahesh the Geek.
Today we'll be talking about crime prevention and retail loss.
Our guest is someone whose work has profoundly impacted how businesses and communities approach safety and security.
He's the director of the Loss Prevention Research Council,
a renowned organization dedicated to developing and disseminating effective solutions for retail crime.
With decades of experience, he's not just a researcher, but a pragmatic innovator,
constantly bridging the gap between academic theory and real world application.
His insights are sought by companies both big and small, and
his contributions have helped shape industry best practices globally.
He's published countless articles, led groundbreaking studies, and
is a true thought leader in understanding the motivations behind crime, and more importantly,
how to stop it. We're incredibly fortunate to have him here today to share his wealth
of knowledge. Please join me in giving a warm welcome to the one and only Dr. Read Hayes.
Dr. Hayes, maybe we can start off with just your background. How did you get into the loss prevention space?
What got you excited about it?
So I started out as a freshman in college,
saw a notice on the board for a job at $2.85 an hour.
Okay.
And being a store detective at a department store,
a big department store in the Orlando area. I couldn't stand it, I had to go do it, and I've been hooked ever since.
So, you know, we had fun learning what people do that's not so good and what we
could maybe do about it.
So I learned a lot of life lessons there, saw some conflict. I've still got some scars, literally, from the rodeo-like atmosphere.
Sometimes it happens when you're trying to detain somebody.
But I went to the Orlando police academy,
though I wasn't fully hired.
I was a reserve right then.
So it gave me flexibility to go through amazing training
and get to do a lot of amazing things with law enforcement.
I was undercover for a couple of years in a high school
as a deputy sheriff and law enforcement officer
in a task force.
I was in uniform a little bit, but I knew I wanted to go back.
I had my two year degree, now I wanted to finish up,
went to the University of Florida in Gainesville.
And so while I was there,
I got to do two super cool internships
because I had been a sworn officer.
I was certified by the state.
One was with the Florida Department of Law Enforcement,
FDLE.
I got to travel the state, work with the agents,
with US Customs flying in helicopters at night off the coast of Miami, work Bike Week in Daytona
and just see some amazing things and learn a lot of things.
And then came back to Gainesville and did another internship
at the Sheriff's Office, Alachua County.
So those were all, I think, life-changing things for me.
Then I went back into retail loss prevention at Ross stores.
I had the eastern United States. As time went by, I had an opportunity to be the VP there, but I wanted to go into research.
I got the bug.
My dad and grandfather were physicians.
I saw what evidence-based practice was supposed to be.
We didn't have any of that in our field.
We just did stuff and benchmarked with each other, and everyone was
benchmarking on things they weren't sure worked. So long answer to your very short question,
but that's how I got it. No, that's a great answer. So how did you end up at
University of Florida and what got you started there with the LPRC?
Well, you know, I had gone to UF. I graduated, my grandfather and father did, my brothers
and son and cousins and aunts, a long tradition at the University of Florida.
But there was a professor that was hired by UF as an eminent scholar.
I approached this new eminent scholar and he's like, come on to Gainesville.
I was down in central Florida then, brought me up and he and I put this study together
and it's still in existence today called the National Retail Security Survey.
So that got me into the research bug even stronger.
I went and got my PhD.
He was on my committee.
So I was then hired part-time by UF 24 years ago, but 25 years ago we started the LPRC.
Ten retailers came and asked me to start it and they were the big ones.
It was Target, it was Walmart, it was Gap, it was the Home Depot and so forth.
And so those things started coming together.
So here we are 25 years later, 24 with UF.
I later went full-time.
I never went on the tenure track.
So a research science track, a scientist track is you're still full faculty, but
you've got a research mission, some teaching, and so I love it.
I'm over in the Wertheim College of Engineering, a very well-ranked engineering school.
I'm not an engineer, you know, I'm a behavioral scientist, but I thought it was brilliant
for them to ask me to move over there.
I'm working with, I mean, world-class engineers and other computer scientists and so on.
So when I hear about loss prevention research council, the prevention part is super
interesting to me because whenever
we think about public safety and enterprise security, oftentimes we talk about response,
we talk about investigations, we don't talk as much about prevention.
So what was the prevention focus when you started the LPRC?
You know what's interesting?
When I first started that job with the Robinsons of Florida, the department store, we were
security, literally.
And then really within a couple of years, this term loss prevention started coming out. The idea
was to prevent these things, not to be so responsive. I think with prevention, it's probably
a legacy term right now, loss prevention, but we're sticking with that because prevention,
I think that's where you're going Mahesh, is critical. In criminology, we have situational
crime prevention by Ron Clarke, who became the head
of Rutgers University's criminology and criminal justice program.
But through our research and publications and journals, we have tried to extend what Ron
and other eminent criminologists have put together as a framework to action prevention.
What does it mean?
We're trying to make it more difficult for an offender to harm somebody else.
Another one would be we're trying to increase their perceived risk of
detection and apprehension and some consequences, right?
Sanction.
Another tool would be, we're trying to reduce potential reward or benefit for a crime.
Right?
So you can see these kind of, let's say, modes of action. Just like a medication
might have modes and mechanisms of action, things it is doing to
have an effect, the same thing with a crime prevention effort: a tool or a process should
have a mode, whether effort, risk, or reward, for example, or reducing excuses, like
signage that says don't do this, or telling a preschooler, here's what you don't do in school.
Now you know.
So we've removed the excuse, right?
So you've got this logical framework that's evidence-based.
And so we go out there and try and convert that, operationalize that, and then conduct
rigorous measurements to try and dial in the best way to do that. So as a behavioral scientist,
what did you believe when you started this work? What are the elements that you were thinking about
that lacked research that you felt you needed to go deep into? You know, it's a really neat question because the genesis of the Loss Prevention Research Council
and all of these, all that's followed came from King Rogers, the then vice president of assets
protection at Target. And he came to me and said, draw me what you're thinking, young scientists.
And I put these zones of influence. What's the opportunity to influence an offender's choices, not only at the
target, but farther and farther away from the target, early intervention.
What can we do to leverage that and to help my team get better?
And so the first place, I said, let me talk to the target audience of
our deterrent countermeasures: offenders.
So he assigned me to work.
He said, I'll fund it.
I love it.
Let's figure out how to use this.
He assigned me to work with his district asset protection team leader
in central Florida, Marvin Ellison.
Marvin today is the CEO at Lowe's Home Improvement.
Oh wow.
Yeah. It's kind of neat, but anyway, so we go down this adventure: when
his team would catch people, they'd call me, I'd race to the scene.
I try to interview them, but I would try and snowball sample them and say,
Hey, who else do you know?
They weren't getting in trouble by talking to me.
They knew it.
So I was able to get out into the wild and talk to a lot of people that
weren't being caught, they were doing a lot of damage and we learned some very
critical things that would improve what and how Target was trying to prevent
crime, right?
Particularly theft and fraud.
So that was the genesis.
It's always why, why not?
Why do you steal or not?
Why do you steal this type and not this type of product?
Why do you use this technique and not that?
Why do you go after this brand of store, not that one?
So going back 25 years and now thinking about today, what learnings do you believe have most significantly affected your thinking about the problems that we face today?
I think there's two. One is it's gotta be a behavioral approach.
It's a cognitive approach.
It's a human issue with a human solution.
We are humans that perceive certain things and respond.
We have an opportunity to change a little bit about what they perceive, how it might affect them negatively so that they don't harm
somebody else.
We've got a ton of evidence now, even randomized controlled trials.
We've done 32 big scale place-based randomized controlled trials.
I think the second is what I'm here today with Motorola Solutions, and
that is public and private.
What happens at a given place and time didn't start there and
probably isn't going to end up there.
We've got to take a community, a neighborhood, a block, a center or
parking lot, whatever, a specific place approach and understand how we detect
and affect people in that environment better, but we've got to do a joint loop.
Very interesting.
And so maybe that's a natural segue to think about this affect, connect,
detect model that I think you and your team have pioneered.
Maybe go into that a little bit, explain what that is.
Sure.
I mean, so I'm called a scientist, right?
And you are too.
But what we've told people for a long time is that science isn't a deity.
We don't worship science, but we leverage the scientific method, that process, a little
more rigorous, right?
And so the approach is pretty simple.
There's a logic model and there's observations or evidence, right?
And I think what we realize is we need to go back to situational crime
prevention and what we call crime scripting. You know, every
burglary could be a little different, right?
The entry method into let's say commercial burglary into a store.
If is it through the roof?
Is it driving a vehicle through the front of the store?
Is it a sledgehammer?
So we wanna script these crimes out
and use the same method
that my physician scientists colleagues use.
There's an initiation, a cancer, a virus,
and then there's a progression through space and time.
We've got the same thing with a criminal offender.
There's an initiation and then there's a progression.
So how do we affect that?
And then how do we earlier detect that?
Just like a pathology in medicine.
So that's how that framework, this is a logic model.
We say, look, let's come up with an easy to learn
and leverage heuristic, a tool, a framework.
And so crime scripting helps situational crime prevention,
the interventions and how we think about them and how they might make it
harder, riskier, less rewarding for a bad guy.
And then let's combine the two and then let's come up and let's rigorously test
and evaluate and improve along that.
Let's say if we want to detect an active shooter or an armed robber,
whatever it might be, a theft by a booster for an organized retail crime
group, if they're running their mouths online, they're posting things,
they're bragging, they're selling things, they're coordinating, they're liking
and reposting threats and other things.
If we could pick that up way before shots are fired or the crime or after that,
that's good.
So now let's array our sensors to pick up digital, aural, visual, textual
indicators, signals along that journey to crime during that crime and post crime.
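A minimal sketch of how this crime-scripting logic could be represented, with hypothetical stages, indicators, and situational crime prevention modes (these are illustrative assumptions, not LPRC's actual models):

```python
from dataclasses import dataclass, field

# Hypothetical illustration of a "crime script": a progression of stages,
# each with example indicators a sensor might pick up, and the situational
# crime prevention (SCP) modes an intervention at that stage could target.
@dataclass
class ScriptStage:
    name: str
    indicators: list   # digital, aural, visual, or textual signals
    scp_modes: list    # e.g. "increase effort", "increase risk", "reduce reward"

@dataclass
class CrimeScript:
    crime_type: str
    stages: list = field(default_factory=list)

smash_and_grab = CrimeScript(
    crime_type="commercial burglary",
    stages=[
        ScriptStage("pre-crime planning",
                    ["online bragging / coordination posts", "repeat scouting visits on LPR"],
                    ["increase perceived risk"]),
        ScriptStage("approach / entry",
                    ["vehicle driven toward storefront", "glass-break acoustic signature"],
                    ["increase effort", "increase perceived risk"]),
        ScriptStage("execution",
                    ["crowd rushing a display", "high-value shelf cleared"],
                    ["reduce reward"]),
        ScriptStage("exit / disposal",
                    ["hot-list plate leaving lot", "resale listings appearing online"],
                    ["reduce reward", "increase perceived risk"]),
    ],
)

# Walk the script: the earlier a stage is detected, the earlier an intervention can land.
for i, stage in enumerate(smash_and_grab.stages, start=1):
    print(f"Stage {i}: {stage.name}")
    print(f"  detect via: {', '.join(stage.indicators)}")
    print(f"  affect via: {', '.join(stage.scp_modes)}")
```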
So if I think about the loss prevention functions within an enterprise, a retailer,
even if I think about public safety, they get into different parts of the incident
lifecycle. How do you think it works today, and what should it be in the ideal
world for it to work better?
Now, you know, what's interesting is that if you look at public and
enterprise collaboration, retail, in my opinion, in our experience, has
the best cooperation with each other, by the way, retailer to retailer.
I mean, you've got Lowe's, Home Depot, CVS, Walgreens, Rite Aid.
They work together all the time with us and within our working groups. But also in law enforcement, after the crime event, forensically, we have organized retail crime teams. Most of them,
or a lot of them, are prior law enforcement. And so they have excellent relations. So there's a ton
of cooperation. It was just during the event where there's a huge opportunity here with real-time
crime integrations and then pre-crime interventions and understandings. So we're building these
dashboards that have multiple layers: all the agencies' calls for service, all their RMS data,
the retailers recorded incident information. You've got anything from animal control to
code enforcement,
you know, fire rescue calls for service.
It makes sense: you want to know about it all in one place.
They can go and explore with our team.
So we're now before, during, and after events, trying to get that coordination
is where we are right now.
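A rough sketch of that "all in one place" idea, merging hypothetical agency calls-for-service and retailer incident records into a single place-based timeline (the field names and records are invented for illustration, not the actual dashboard schema):

```python
from datetime import datetime

# Hypothetical records from two sources: agency CAD calls for service and
# retailer-recorded incidents. Field names are illustrative only.
calls_for_service = [
    {"source": "agency_cad", "when": "2025-06-01T22:14", "place": "Store 114 lot", "type": "suspicious vehicle"},
    {"source": "agency_cad", "when": "2025-06-02T03:40", "place": "Store 114 lot", "type": "alarm"},
]
retail_incidents = [
    {"source": "retailer_rms", "when": "2025-06-01T21:55", "place": "Store 114", "type": "theft, cosmetics aisle"},
]

def unified_timeline(*sources):
    """Merge event streams from multiple partners and sort them chronologically."""
    merged = [event for stream in sources for event in stream]
    return sorted(merged, key=lambda e: datetime.fromisoformat(e["when"]))

for event in unified_timeline(calls_for_service, retail_incidents):
    print(f'{event["when"]}  [{event["source"]:>12}]  {event["place"]}: {event["type"]}')
```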
And I'm curious, many times information flow in this case tends to be very linear.
Right.
Somebody does one thing, then there's an exchange of information,
then somebody does another thing.
Public private partnerships, the ability to exchange information in real time.
Has that in any way changed how enterprises and public safety
professionals interact today?
And maybe to make it more concrete, things like video.
It used to be the case that video was very much something that was within the context of an enterprise. If something bad happened, you would
export a chunk of video and then you would share it with public safety or
whoever else. There would be an investigation. Now there's an
opportunity perhaps through security operations center, through real-time
crime centers, etc. to exchange information in a more fluid way. With
those types of capabilities in place, do you see that collaboration
being one where loss prevention professionals and public safety tend to
work closer together in real time versus sort of almost a relay race where
they're handing off the baton?
I think we're getting there.
And I think, Mahesh, the way we're looking at this right now is trusted partners,
right? Partnerships, trusted partners.
Here are the parameters and so on with conditional sharing, right?
Got it.
And a case study real quickly, going back in time.
I'm a district loss prevention manager at Ross stores in central Florida.
With one agency, I would tell my store detectives, if you have some raw intel,
you think you saw this, they got in here, you call them and you say, this is intel.
Uh, how are they going to use that information?
Right?
There's a sender and there's a receiver that need to have that mutual trust with
one agency.
We couldn't really share with them.
Another agency: got it, intel, I'll see it noted, you know. So I think that's just one simple example,
but that's part of what's going on in all these agencies,
but all the retailers have lawyers.
A lot of them are trained, just say no.
How do we get to yes to create safer places?
So we're trying to figure out
how do we create trusted partners
and then the conditional sharing, like an app on our phone:
never on, only in use, always on.
Who's got what access to what?
This is why, again, I think with Motorola Solutions and all your customers and partners,
helping us think about that and do some good rigorous testing and get some wins, show some wins,
safety value compared to whatever exposure you think you might have.
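A toy sketch of how conditional sharing between trusted partners could be expressed; the partner names, data categories, and conditions below are assumptions for illustration, not an actual agreement:

```python
# Hypothetical conditional-sharing policy: which trusted partner may receive
# which data category, and under what condition ("never on, only in use").
SHARING_POLICY = {
    ("retailer_a", "local_pd"): {
        "incident_video": "active_investigation_only",
        "lpr_hot_list_hits": "always",
        "raw_intel_notes": "never",
    },
}

def may_share(sender, receiver, category, context):
    """Return True if the policy allows sharing this category right now."""
    condition = SHARING_POLICY.get((sender, receiver), {}).get(category, "never")
    if condition == "always":
        return True
    if condition == "active_investigation_only":
        return context.get("active_investigation", False)
    return False  # default deny

# Example: video only flows while an investigation is open.
print(may_share("retailer_a", "local_pd", "incident_video", {"active_investigation": True}))   # True
print(may_share("retailer_a", "local_pd", "incident_video", {"active_investigation": False}))  # False
print(may_share("retailer_a", "local_pd", "raw_intel_notes", {"active_investigation": True}))  # False
```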
Makes sense.
So it looks like there's that public-private sharing aspect of it.
But I think one of the things that you mentioned earlier on is just members of a vertical.
So retailers, them sharing information with each other.
How does that happen today?
How good is it?
What can be done more to make it better?
It's probably in some respects farther along than with public.
Depends again on the venue.
There's some retailers that share everything with law enforcement.
And there are a lot of retailers that will share everything with the other retailer. A lot of
it's, hey, I'm sending it your way, you didn't get it from me, whatever. Or I'm going to have
to trust you on this. There's a lot of that informal, hey, we know each other, I trust you.
But we want to get probably at a higher level of transparency if we need to. That makes sense.
And is there an opportunity to take some of those learnings and almost build
templates or dynamic templates out of it that can be...
Almost think of this as like an antivirus system,
like periodically your antivirus system downloads a threat definition file from the cloud
and says, okay, these are the patterns of malware that I want to go detect.
Is there an opportunity to do something like that or does something
that like that already exist?
I don't, I'm not aware of anything that really exists like that.
Yeah.
I love the idea.
You know, Clay Cassard on your team, he's amazing for our team to work with.
He just gets both public safety and enterprise now, but we're working on a
playbook on body wearing cameras, right?
Okay.
We're surveying all these retailers that are trying them or that have now adopted
them, mostly in limited use cases, and putting together an initial playbook, a living
document.
I think a playbook around what we're talking about right now would be critical.
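As a sketch of the antivirus-style idea Mahesh raises, a shared "threat definition" file could be matched against each retailer's own observations; the definition format and fields here are hypothetical, since, as noted, no such shared mechanism exists yet:

```python
# Hypothetical shared "threat definitions", refreshed periodically from a
# trusted clearinghouse (here just a local list; in practice it might be a
# signed file fetched on a schedule, like antivirus signature updates).
threat_definitions = [
    {"id": "ORC-2025-017", "pattern": {"vehicle_color": "white", "vehicle_type": "van",
                                       "mo": "fragrance shelf sweep"}},
    {"id": "ORC-2025-021", "pattern": {"mo": "emergency exit push-out"}},
]

local_observations = [
    {"store": "114", "vehicle_color": "white", "vehicle_type": "van", "mo": "fragrance shelf sweep"},
    {"store": "207", "vehicle_color": "blue", "vehicle_type": "sedan", "mo": "receipt fraud"},
]

def match_definitions(observation, definitions):
    """Return the IDs of any threat definitions fully matched by this observation."""
    hits = []
    for d in definitions:
        if all(observation.get(k) == v for k, v in d["pattern"].items()):
            hits.append(d["id"])
    return hits

for obs in local_observations:
    print(obs["store"], match_definitions(obs, threat_definitions))
# Store 114 matches ORC-2025-017; store 207 matches nothing.
```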
Super interesting.
And so now, if I think about going back to the affect, connect, and detect model a
little bit again, right.
A big part of this seems like on the connection aspect, it's this retail information sharing,
also now with public safety, perhaps getting that going as well.
And then detect becomes then the next important thing.
Okay, I've shared this information.
I want to be able to act on that knowledge that I have about what it is that I should
be looking for.
AI has progressed so significantly over the past two years.
What do you think is possible today that perhaps wasn't possible three years ago, four years ago?
And we could probably have a series of episodes on it.
One thing, for example: this morning I saw with your newest version,
and I'm just using a body-worn camera as an example, where the officer can take note, this is happening.
And I can remember back in the day, field interview reports where I see this guy,
he's by this gas station at 3 a.m.
and you go get their information.
And then should something have happened that night, the next morning, here was a
guy that was there, that type of thing. Now there's more law enforcement using
that technology, and enterprise too, more people collecting more information.
And it's tagged properly too.
It's date timestamped and it's relevant, pulling all that together in these data
lakes.
And that's why I say you guys are building oceans here.
That to me is going to be so important, but it's absolutely going to need to
leverage AI, as you know. I think that's some of the digital and text, and there's
just so much upside to that in our opinion. But we're doing aural models, right?
We're interested in acoustics, gunshots, of course, metal banging, glass breaking, a whole
host of interesting or critical noises, but also of course, speech, you know, words and
phrases that might indicate duress, so that we might again have earlier detection in that
sequence.
So how can we do this, and do it without potentially putting an employee in
harm's way by very visibly trying to activate some alert button, right? It's more subtle,
but always with the human in the loop. So aural is important to us, scraping online messaging,
reposting, liking, all sorts of things that on their own may mean nothing, but could be a heads
up. So more indicators earlier in space and time
and things like that.
So I think AI is gonna be required to augment
all of our sensors in that way.
AI, I think, on the body-worn cameras,
picking up behaviors earlier that the officer didn't notice
because they were looking over here, limitless.
And I know that's what you guys do.
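As a rough illustration of the aural piece, a first-pass duress screen over speech-to-text output could be simple weighted phrase matching that only escalates to a human reviewer; the phrases and threshold here are invented, and real models would be far more sophisticated:

```python
# Hypothetical first pass: score a speech-to-text transcript for duress
# phrases and flag it for a human reviewer rather than acting automatically.
DURESS_PHRASES = {
    "give me the money": 5,
    "don't move": 4,
    "open the register": 4,
    "call the police": 3,
    "leave me alone": 2,
}
REVIEW_THRESHOLD = 4  # hypothetical cutoff

def duress_score(transcript):
    text = transcript.lower()
    return sum(weight for phrase, weight in DURESS_PHRASES.items() if phrase in text)

def triage(transcript):
    score = duress_score(transcript)
    # Always keep the human in the loop: this only prioritizes, never dispatches.
    return {"score": score, "send_to_human_review": score >= REVIEW_THRESHOLD}

print(triage("Hey, open the register and don't move."))
print(triage("Is this on sale this week?"))
```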
So I think what you're pointing to also is the fact that
the world of sensors has, I would say, gotten a lot better.
The ecosystem of what you can capture, how you can capture it, and the accuracy
with which you can capture it has also expanded.
AI functionality on top of that has also expanded quite dramatically.
The ability to communicate, you mentioned your watch, wearable devices of all sorts
communicate more than just in the typical, hey, I'm going to push to talk
with the radio and say something.
A key part of that is this human in the loop where there's somebody there helping validate what these detections are bringing forward and also perhaps even triggering an action.
What is the role of a security operations center in all of this?
And how have you seen the world of security operation centers evolve over the past few years? I would hazard a guess, and we've done some surveys, that most of the retailers still don't have
a SOC or an EOC. What they might have is a room, you run in, flip the lights on,
plug your laptop in, have a whiteboard, a TV's in there. That's more frequent. There are some
that have 24/7 operations. Some of the big, big guys, they have, you name it, it happens every day.
So they become more sophisticated in that role.
But I think we know natural disasters, and they're hit by everything from
snowstorms to floods, to hurricanes, tornadoes, looting. Looting really got
serious in the 2021 time period and flares up depending on what's going on.
And so they use them for that purpose, for those types of mass
attacks, as we call them, or where you see a big mass rob where they all pull up and jump out and hit these places.
But I think this is sort of what we're trying to do: understand
what the threats are, what sensors might be needed, and get some sort of
roadmap to improve them, as well as AI models, as well as the human in the loop.
And I have, at UF, physician scientist colleagues, the radiologists.
Yeah.
And they're looking at slides, images all day.
Yeah.
Now they've trained so many of these things: hey doc, check that out, could be a lesion.
It's not doing anything else, right? It's not sitting in on the patient's surgery or ordering tests.
Yeah.
The doc is still, man, yeah, let me check that or no, that's not. And I think that's where we're
heading pretty quickly. And so in the connect, for us, we look at connect one, smart and connected
place: how do we help that place manager,
he or she, have much more awareness and even understanding about what's going on
in the parking lot, in their location, what's coming their way, a storm, new
merchandise, a person, whatever. Connect two for us is between their enterprise places.
So store one, Hey store two, guess what?
They're headed your way.
Everybody. This is what's out there.
Your own hot list, they get it.
And then connect three is beyond that.
So that's smart connected enterprise, smart and connected community with
law enforcement or other partners beyond you, it could be another retailer.
So we think with that hierarchy, leveraging the bow tie model and
then all the affect and detect along that, that's kind of the framework.
Yeah, makes sense.
When I've had conversations with various loss prevention professionals,
even security professionals, something they always tell me is they can spot
someone who's going to probably do a bad thing very early in the process.
And perhaps it's experience, perhaps it's just an understanding of certain cues.
Do you feel like that is something that can eventually be automated in some way?
Is there a risk to that?
Is it scientific in certain way, or is it something that is more intuition today?
It's probably partly intuition, but it's sort of like looking for somebody that
might be armed, a lot of times they touch it just to reassure themselves or check it's there.
Right?
And they check things.
Going back in time, really 20-plus years ago, I can remember doing an article where I watched so many people
stealing in person and by video.
But you know, the way a person stands is normally a
little closer when you're stealing than if you're buying, so there may be a
proximity difference, maybe an angle, whether they're trying to shield versus squared off.
You might see hands down, removing the packaging, the pricing, a security
tag, whatever, versus up, where it's, hey, is this the price, brand, style,
whatever I'm looking for? Your head's more oriented down.
You're checking out the merchandise versus looking for cameras or people
that might be watching you and so on.
And so you might just have eight different cues that come in clusters.
Well, if they're standing in front of rugs that are rolled up, that would be
almost impossible to steal, that's going to de-prioritize them.
So maybe the more cues that are clustered, and in certain contexts,
the more that might increase the priority, whether you want to pay attention.
So I think that's something that's never been done to my knowledge.
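A sketch of that clustered-cues idea, scoring hypothetical cues with a context multiplier so that, for example, the same cluster scores lower in front of rolled-up rugs than in a high-theft aisle (the cue list, weights, and values are illustrative assumptions, not a validated model):

```python
# Hypothetical cue weights drawn from the kinds of behaviors described above.
CUE_WEIGHTS = {
    "standing_close_to_shelf": 1.0,
    "body_squared_to_shield": 1.5,
    "hands_down_at_packaging": 2.0,
    "head_up_scanning_for_cameras": 2.0,
    "lingering_repeat_passes": 1.0,
}

# Hypothetical context multipliers: the same cues matter more in front of
# high-theft product than in front of rolled-up rugs nobody could carry out.
CONTEXT_MULTIPLIER = {
    "high_theft_fragrance_aisle": 1.5,
    "rolled_up_rugs": 0.2,
    "general_sales_floor": 1.0,
}

def attention_priority(observed_cues, context):
    """Score a cluster of cues in context; higher means 'worth a second look'."""
    base = sum(CUE_WEIGHTS.get(cue, 0.0) for cue in observed_cues)
    return base * CONTEXT_MULTIPLIER.get(context, 1.0)

cues = ["standing_close_to_shelf", "hands_down_at_packaging", "head_up_scanning_for_cameras"]
print(attention_priority(cues, "high_theft_fragrance_aisle"))  # 7.5 -> prioritize
print(attention_priority(cues, "rolled_up_rugs"))              # 1.0 -> de-prioritize
```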
And today, if a store figures out, hey, they're having loss of a particular type of product
they're selling and they're looking at video to retrospectively figure out, hey, can I
identify who did it?
Is that a skill and audit functionality that is in place today where people go back and
say, this could probably be the individual or set of individuals who did something to
steal?
I think it happens.
Organized retail crime teams will do that.
Large stores that might have store detectives, and maybe more than one, or a manager even, they do some of that. Overwhelmingly in American retail, though,
no, I don't think there's a whole lot of that going on, but there is some of that
happening, but I think it's pretty scattered and unfortunately scarce.
What's the reason why that data doesn't get surfaced and shared for analysis that could help you prevent in the future?
I think for the longest time it was analog.
And, you know, if I went to a retailer and said, hey, could you give us some video,
they'd have to put out an all-call to everybody. Even today with ethernet connections, digital video, they still by and large
don't have a central repository in the cloud or data lake, right?
They're going out, hey, can you send me what you've got on this?
So I think that's part of it.
It's beyond siloed.
And this is also a reason why I think some of the data you've
collected indicates that
there's an underreporting of incidents of theft, vandalism, shoplifting, etc.?
I think so.
I think even in a small card shop or whatever, it's difficult to know all the crime that's
taken place because people don't necessarily do analysis of what happened.
The AI is just not, the models aren't there yet to show a lot of theft, because theft
can look 200 different ways, right?
So they're not leveraging AI, and the data is in pools everywhere.
And I don't think people are systematically looking
for that type of thing, other than store detectives.
Training people in the stores, for danger reasons,
safety reasons, they don't want their employees
tackling people or confronting them necessarily.
So they don't know what's going on by and large.
If they do know, do they record it or not?
If they record it, does that get passed on?
If they do also record it, do they report that
to law enforcement as a crime?
Then does law enforcement respond to it or not?
How would they record that if they do and so on?
And then how do they code it? And I
know when you and I talked the other day, we talked about the difference between somebody using
instrumental violence to take something or it's just incidental violence during that and it's
coded as a theft rather than, well, that was a strong-arm robbery, or vice versa. Data is always
going to have errors. That's why we look for large sample sizes, right? It helps us reduce the
influence of error. So this seems like an area where, when you think about those subtle cues, there just isn't
enough data for us to think about, hey, AI could be trained to do this.
It's probably going to be too prone to errors and false alarms for it to be materially useful there.
Are there ways in which, when say a human figures out that, hey, this is a piece of video that is worth attention,
you can make that video easier to share because
you're able to redact it more effectively?
You're able to protect the privacy of those who perhaps are not
involved in that incident.
Are there elements like that, that would actually make
information sharing much easier?
Absolutely.
I think that is the key. Look, most of the video is digital now.
It could be much more readily obtained and pulled together,
but you want video like anything of people that are
and are not stealing normally,
especially if we're talking about clusters of behaviors
to understand, at least in the first instance,
before we simulate or create synthetic data.
So I think it's all doable.
We're very up for working on it.
We've got to figure out ways to,
like we would say in statistics, bootstrap, right?
Take and create synthetic data,
probably in my non-engineering computer science mind.
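On the redaction point in the question, a minimal sketch, assuming OpenCV is available, is to blur everything in an exported frame except the region a reviewer marks as relevant; the coordinates here are placeholders:

```python
import cv2
import numpy as np

def redact_frame(frame, keep_box):
    """Blur the whole frame except the region a human reviewer marked as relevant.

    frame:    HxWx3 image (numpy array), e.g. one frame read from exported video
    keep_box: (x, y, w, h) of the region involved in the incident
    """
    x, y, w, h = keep_box
    region_of_interest = frame[y:y + h, x:x + w].copy()
    blurred = cv2.GaussianBlur(frame, (51, 51), 0)   # heavy blur protects bystanders
    blurred[y:y + h, x:x + w] = region_of_interest   # restore only the relevant area
    return blurred

# Toy usage with a synthetic frame; in practice the frame would come from
# cv2.VideoCapture over the exported clip and the box from a reviewer or tracker.
frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
shareable = redact_frame(frame, keep_box=(200, 120, 160, 240))
print(shareable.shape)
```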
Interesting, interesting.
So when we talk about employees in a retail store
or a quick serve restaurant or places like that.
Do you sense that they are more comfortable today with the presence of
video security devices, body worn cameras, etc.
Or do they feel like it's an infringement of their own privacy?
You know, it's interesting. With a dollar store chain, we went in, they wanted to do a test, and we're still testing body-worn cameras with them.
But the first blush from the employees in the store was, I don't want to wear that, I have to go to the bathroom,
I have private conversations or whatever. And then when it was explained, the use case with that company is, no, you activate when you feel under duress.
When you don't feel comfortable, you activate the camera. They all wanted it.
And then when the test was over, they didn't want to give the cameras back.
So we're seeing part of that is when people understand the context.
And one national apparel retailer, now they were doing a test and
their attorney was fantastic.
And one thing we're working on, something Clay and I were just discussing at
lunch, is the barriers to protection, what I call barriers to adoption of the technology, right?
You've got privacy, legal, IT for security, or just they got too much on the roadmap.
Finance, where's the cost benefit?
And you go down this whole list.
And then what are the playbooks we might generate to overcome those, so that good
technology from you guys can be adapted, adjusted, tested, and deployed faster to get the place safety we want.
So in this case, what they did was they went through a lot of homework, and
they weren't even sure they wanted to do it.
They trialed it.
Employees loved it.
They felt there was a measurable difference in confrontations and threats, because
the other person realizes. Maybe it is civilizing a little bit, like you saw
particularly early with law enforcement body-worn camera research, and it's
still showing benefits.
Now they roll it out, word spread.
Now they're nationwide, because the employees say, no, I'm not going to work
without my body-worn camera.
So from maybe going back to sort of that public safety and enterprise relationship,
when something bad happens and say 911 gets called or local police
department gets called.
What is the typical flow for a store manager or whoever makes that call today?
And what's good or bad about it as it stands today?
Well, the toughest part is one human trying to
detain another, I mean, and that can get rough.
And I've joked earlier about some of the scars I have.
And I mean, some people are, okay, you got me.
That's probably most people, I don't like this, but you got me.
But you have those that are, you're not getting me.
And so it can get pretty rough, and it's not like, hey, Tina,
I need you to go stop those two guys filling up that garbage bag over there.
You know, it's a difficult thing to detain somebody.
The next part of this is, now when you detain them, law enforcement
staffing levels are typically down, as we all know; some areas it's coming
back, some it hasn't yet. Now you can readily transfer digital evidence.
I think that's going to be the future to relieve some of that.
So again, I think it's charting out and I like what you said, let's
chart out this process here.
What are pain points and opportunities?
And so, similar to body-worn cameras, what do you think is the role that
solutions like license plate recognition play in prevention?
Yeah.
So I think with body-worn cameras, we know the benefits there.
It may actually civilize and reduce some of the escalation that could occur
or does occur as well as providing some powerful evidence.
But now I think when it comes to license plate readers,
it's really the same thing.
Now, part of this is do we or do we not want
the red actor to know about LPR presence?
Sensor survival is something that we're looking into.
We're looking at low visibility,
low signature for survivability.
I've talked to law enforcement people where,
and I just saw it personally the other day,
they're damaging or stealing LPRs, you know, things like that.
So we need to always think countermeasure, countermeasure,
countermeasure, survivability.
So I think low signature, disguise, things like that are going to be some
things we need to, and are starting to look at, but LPRs I think are powerful.
By and large, especially now with AI models getting better and better at the
vehicle characteristics, in case the tag's missing, altered, covered, whatever.
I think they're game changers in my opinion, like the body worn cameras.
They're just absolute game changers in knowing where people are in real time.
The other thing that I like about one of the products you guys have is reads,
because we want fixed LPR reads for hot lists, right?
That are immediate or over time, forensically.
But we also need to know where the red actors meet, sleep, store and stage.
That's why putting them on other types of vehicles and knowing these things, I think that's big protection, not big brother, right?
And it's only used when there's been a crime. I think that those types of sensors are missing.
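To illustrate the hot-list reads and the fallback to vehicle characteristics when a plate is missing or altered, here is a small hypothetical sketch (the plates, hot-list entries, and fields are invented for illustration):

```python
# Hypothetical hot list: known plates plus vehicle characteristics to fall
# back on when the tag is missing, covered, or altered.
HOT_LIST = [
    {"plate": "ABC1234", "color": "white", "body": "van", "reason": "ORC case 17"},
    {"plate": None,      "color": "red",   "body": "pickup", "reason": "repeat trespass"},
]

def check_read(read):
    """Match an LPR read against the hot list; exact plate first, then characteristics."""
    for entry in HOT_LIST:
        if read.get("plate") and entry["plate"] == read["plate"]:
            return ("plate match", entry["reason"])
    # Plate unreadable or no plate match: fall back to vehicle characteristics.
    for entry in HOT_LIST:
        if read.get("color") == entry["color"] and read.get("body") == entry["body"]:
            return ("characteristics match (lower confidence)", entry["reason"])
    return ("no match", None)

print(check_read({"plate": "ABC1234", "color": "white", "body": "van"}))
print(check_read({"plate": None, "color": "red", "body": "pickup"}))
print(check_read({"plate": "XYZ9999", "color": "blue", "body": "sedan"}))
```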
And you touched on this already, in terms of it being big protection, not Big Brother.
As communities see these technologies fielded, how can enterprises more effectively communicate
what they're doing, the nature of what they're trying
to prevent in a manner where communities
don't perceive this as being Big Brother?
I think, again, it's wins, because, you know,
we talk about red actors and green actors,
but we also talk about red space and green space.
And I think silver alerts, obviously missing children, even if it's temporary.
That's where you see facial recognition or other bio recognition coming in to
say, Hey, this child, this elderly person, this vehicle, this vehicle tag or plate.
That here's where they are, or they were just here, or this is the pathway they
seem to be taking, those types of things.
So I think putting those wins out and understanding.
The other part is voice of the victim research, where the people that work and shop in these
places, their voice is heard.
And we know crime harm is a big part of what we study.
Crime harm can be like a pebble in a pond, it can ripple out.
And if you even vicariously experience,
much less personally experience,
somebody that's aggressive, or persistent theft
that goes unsanctioned, there are no consequences,
that wears on normally socialized people.
That type of anti-social behavior
can mentally, cognitively affect people.
And if people are affected,
that affects their loved ones and so on, right?
If somebody's killed or injured and so on.
So there's this crime harm that spreads
and we have retailers we work with
that have closed one, two dozen stores
in some of these communities where there's a lot of crime,
not much response, and it's just people don't feel safe
and they're out, they close up and they...
So if we think about maybe us having this conversation
five years from now, what do you think would be the biggest technology areas you would be talking
about that we perhaps are not addressing today?
I think it's going to be just immediate communication.
We kind of started out talking about communication, that connect piece,
but I think it's going to be earlier, better detection.
And I think we're going to get better at affecting offenders' decisions.
You know, our goal is always not here, not now, right now.
We'd like to change people's life course, but at least not here, not now, as far as victimizing
somebody else.
So I think Affect, Detect, and Connect, I think the tools, the effects of those tools
are going to increase, and the speed is going to continue to increase, the latency to come down.
Any final thoughts?
I enjoyed this conversation.
And I applaud what you,
your team, and others are doing here at Motorola Solutions, that you have the respect and credibility,
and that's important. You have the resources, but you also seem to have the drive and the energy, and so
I'm excited to be here and talk with you Mahesh and so many of the other people here at Motorola
Solutions. Well, Dr. Hayes, it was a privilege to talk to you today.
I'm glad we had a chance to do this in person at Summit,
but looking forward to having
another conversation with you soon.
Fantastic. Thank you.
I love it.
One of my key takeaways from this conversation
is that artificial intelligence is an absolute game changer
for revolutionizing public-private partnerships
and information sharing
in loss prevention and public safety.
There are three potential areas of impact.
First, significantly improved detection.
AI is absolutely required to augment sensors
by processing vast oceans of data from various sources.
For example, license plate recognition technology
can go beyond the plate to identify vehicle characteristics
even when tags are missing or altered.
Better audio models, including the ability
to detect stress in speech,
can enable earlier detection of criminal activity.
Second, enhanced public-private collaboration
and real-time information sharing.
AI is crucial for leveraging data lakes from diverse sources like law enforcement records
and retailer incident data, creating multi-layer dashboards for better coordination before,
during and after crime events.
This fosters trusted partners and conditional sharing of information.
Third, addressing challenges and future outlook, AI-facilitated sharing helps create big protection,
not big brother, by strategically using data for detection earlier in the zones of influence.
The historical lack of data has led to underreporting of incidents, and AI, including better redaction,
can make sharing video and other evidence much easier while protecting privacy.
Hope you enjoyed this conversation as much as I did.
Thank you.