LPRC - CrimeScience Episode 67 – VX: On the Frontier of the Value Exchange featuring Ron Worman (The Great Conversation)
Episode Date: July 12, 2021
Over 150 companies are paying the LPRC and the university to benchmark these technologies and these emerging best practices so they can leverage technology to make better data-driven decisions. We have a great and inspiring conversation that leads us to believe that culturally crossing the chasm of our fear and constraints to a Value Exchange (VX) that values the safety and security of people is rapidly becoming a reality. Ron Worman is the Founder and CEO of The Sage Group® and The Great Conversation. Over the last 30 years The Sage Group has been helping companies build and execute a ‘value’ strategy that leverages their internal and external relationships. The post CrimeScience Episode 67 – VX: On the Frontier of the Value Exchange featuring Ron Worman (The Great Conversation) appeared first on Loss Prevention Research Council.
Transcript
Welcome to the great conversation where ideas matter.
Ideas shape markets.
Ideas can change the world.
We have mentioned over and over again in many of our conversations with key leaders across different vertical markets, across different
disciplines that we're at this inflection point right now where technology is converging and
advancing and helping us make more than ever good data-driven decisions. And we're always
interested in thought leaders who are advancing that in their disciplines.
And so we were able to track down Dr. Read Hayes, the director of the Loss Prevention Research
Council, as well as a research scientist at the University of Florida, to kind of get into his
journey in making better and more advanced data-driven decisions within his
purview. Dr. Hayes, great to have you on. Oh, wow. Thanks so much, Ron. This is exciting for me.
Well, as always, it's a great conversation, unscripted. I asked permission before this broadcast to call him
Read, so I'm going to start it off by saying: Read, I'm really fascinated by the journey. Where did
you start with your insights into data-driven decisions? And take us all the way up to the
present, if you don't mind.
Sure, absolutely. You know, as a youngster, a sixth-generation Floridian, which is pretty rare, we grew up hunting and fishing throughout Florida.
I wanted to be a wildlife officer, a game warden, if you will.
And so I had interest in the law enforcement part.
So during college, my first year as a freshman, as an 18 year old, I got a job as a store
detective, making something like $4.25 an hour, catching shoplifters, which was
wild and woolly.
And the training was just a pamphlet, if you will.
But I stayed interested.
I got to go to the Orlando Police Academy independently and get a couple of undercover
jobs, uh, sworn in as a deputy
sheriff or law enforcement officer in a little drug task force and do some crazy things. But I
got into loss prevention again after I got some of that out of my blood and went to a company,
J. Byrons, which became Ross Stores, rose through the ranks there and was offered a directorship
and a contract to be the vice president. Again,
this is back in the day, in the 90s. But I saw all the time a total and complete difference
between what we did in law enforcement as practitioners and what we did as practitioners
in loss prevention, now also referred to as asset protection: we just kind of did stuff.
A lot of what we did was
probably right or close. A lot was probably wrong and not even close. But my father and
grandfather being physicians, they seemed to have a little better grasp on things. And a lot of it
was their training, but the idea was evidence-based practice. And the idea that what the decisions
they made, whether they were diagnosing or they were treating, by and large, were science-based.
There was some logic, a logic model.
Hey, this domino seems to knock this one, knock this one.
So I'm now going to treat one or more of those dominoes, you know, those aiming points.
And then I'm going to get evidence from a lot of people around that who are conducting rigorous research, whether it's observational or, even better, randomized controlled experiments or trials.
But I would segue back to what I was doing, and I mean, you know, we had a pamphlet here and there
saying look for someone with a baggy coat and so on, and we were doing the best we could, and we
still are today. But I knew there was a lot missing, and that lot was probably science.
And a lot of what we now need to do is use the same type of sensors and other technologies
that our medical practitioner counterparts use to diagnose and treat.
That's just fascinating.
And if we look at it now, here you are at the University of Florida, you're running the Loss Prevention Research Council, and we take that word sensor, devices that are aggregating bits and bytes.
How have you integrated technology into your science-based, your evidence-based workflow, if you will?
No, it's good. And we use, by the way, evidence-based or science-based somewhat interchangeably. I've now grown to be a
little more of an advocate of science-based just because science denotes, again, there's some
logic. We're thinking this through and there's a model or a framework or something, some kind of
mechanisms that we think are related. And so we're going to devise
what we do to prevent or mitigate that or cover. And now we're going to test that rigorously.
That's where the evidence comes in, the observations. So science is logic and observations,
not just observations and not just logic models. But how do we incorporate it? Well, again,
I always go back and draw on all these journals, medical journals I would read, which is kind of
scary in hindsight that somebody at that age would do that. But I learned from there that that's how
they operated, you know, how they practiced their business, if you will. And so
we do the same. So we look at what's a good day for a bad guy,
you know, whether that bad guy is somebody that's trying to harm somebody
in the moment, or maybe it's premeditated, but they're trying to harm somebody. That
bad guy, we refer to them as the red person, versus the green, which is the licit
place user that we want: the employee, customer, delivery person or whatever.
What's a good day for that bad guy? If they're a porch pirate, if they're an armed robber,
if they're somebody that's there to intimidate somebody or to steal or commit fraud, regardless.
And all those are very different. Even burglars that enter through the roof are a lot different
than somebody who throws a brick through the window or hides and so on. So we'll draw out all the steps or stages that we believe a bad guy needs for a good day. Those are aiming points: left of bang, you know, before the event, the event itself, and right of bang. Now let's identify each step. And so you mentioned sensors and technology. Well, we need
to know something. We need to know something they're saying or doing, their movements and so
on, who they're communicating with and so on. So that's where we place our sensors, right? We're
trying to learn what sensors we need, aural, digital, visual, or combinations, of course, where we would put them, how we would use them, but what are the data they can gather for us? And then how can we use AI in some form to pull meaningful, actionable signals from that and refine what we're going to do to deter, you know, to deter, disrupt, or document at each
aiming point. What I love about this story, and I appreciated it, is that your science-based process is
really a logic model. And what you did, what's fascinating, is what you did is said, we're going
to have a logic model. We're going to have a science model, a framework in which to do these things, what we do,
but then we're going to apply that same science model to the bad guy. We're going to get in that
person's head, and then we're going to align those models and therefore create innovative
new ways of seeing the data and preparing for right and left of bang. Did I get that right?
I think you nailed it, Ron. And we all draw on our experiences. And so I go back to when I was
an 18-year-old catching shoplifters, and I still have physical scars on my hands and arms. But
I always wondered, why are you doing this? Why aren't you doing that? Or why are you doing it
this way? And then when I was undercover, talk about needing to get into somebody's head to stay alive,
much less to kind of get an idea, map it, gather evidence and do all this, you know,
while you're right in the midst of these people that are pretty scary. And I learned how difficult
it is for any of these people. And I started wondering, why are these people not in the least
deterred or disrupted in what they're doing?
That's the problem.
That's what my whole job then and now over the last 20 years in the research area is just what you said.
How do I get in their head?
But what I'm trying to do is help them make better choices.
And I can't do that by shaping their genetics.
I can't do that by, you know, affecting their home life or their peer groups, but I can
do that by shaping the environment that they come into.
Shaping the environment and in a sense, behavioral modification.
That's our goal and we're getting better at it, but there are components there that I
can talk about.
Yeah. Well, you mentioned something and this goes back to a working group,
an industry working group I'm sitting on right now. So it's kind of front of mind.
I've been a student for a long time of neuro-linguistic programming, and that's the
idea of recognizing different physical behaviors, if you will, kind of data sensors, watching a person's eyes, looking at their face, how they use their body to understand how they're consuming information.
And I've always said, you know, in the best of all possible worlds, we will begin to adapt machines, if you will, systems, technology
solutions that can do the same thing. And you mentioned something really interesting,
sensors picking up what they're doing, aural, visual, digital. So in this working group,
it's called intelligent communications. Where are we taking the various technologies that are in and around audio and voice recognition?
What are we doing today to start deploying that within the security program?
So what are you learning?
I'm just kind of intrigued.
What are you learning about the aural, visual, and digital state of technology today and where you think
it might be going? Big loaded curveball question. That's right. No, but it's a good one. And
I think the first, you know what, the first response I've had, and I still do a little bit,
is, wow, I thought we were farther along than this. And so in November of 2020, nobody's ever going to forget 2020. But in my
case, I got moved from the College of Liberal Arts and Sciences, where you would naturally
suspect somebody that was a social behavioral scientist would be, over into the Warrington College
of Business at UF, a top-ranked team. And the reason was to pair, hey, let's pair this guy that
kind of tries to understand why people do and don't do things or do it this way with all these people trying to understand technologies.
And at UF we're very blessed.
NVIDIA came up with the largest investment in artificial intelligence in any academic institution in the world.
So they have delivered a $70 million package to UF here where we are. And that means
we've got the most powerful AI compute of any academic institution anywhere on the globe.
It's up and running. HiPerGator 3.0 is the massive, massive compute system that we've got here,
now layered in with big, big GPU packages from NVIDIA and others. And so they additionally are hiring, and this is wild in this
day and time, a hundred incremental faculty here that are AI people. And they're putting them in
every department across campus here. And we're one of the only campuses in the US where everything
is in one place. So you can imagine the multidisciplinary opportunities we have for
research and development. And so we're taking advantage of that.
We are. My team, we have calls every other week with NVIDIA.
They curate those calls, whether we're talking at the macro scale, the smart city scale,
we're looking at the meso, that transition, or we're at the micro, that parking lot and
that interior space combination there.
And so, all right, what sensors do we need?
How do we put them out there?
What are we trying to pick up?
What's going to make sense so that we can better safeguard initially
and provide a much better user experience for everybody that we want there?
Boy, I bet most of the world does not know of this $70 million investment,
100 faculty, even more impressive, integrating that
faculty in a cross-multidisciplinary way, because that then avoids the silos of excellence we
usually see in academia. Very impressive. Who came up with that approach and that idea?
You know, I'm not exactly sure, and you can imagine I don't want to get in trouble.
But I understand it was our president; the provost, who is, you know, the chief academic and research officer; UF Research; and on and on.
But also our dean of the College of Engineering, Cammie, she has been very, very, very focused and knows how to bring together the right resources and aim them.
And she was one of the first looking at cybersecurity, got a huge, huge multimillion
dollar investment from the state of Florida to build a very powerful, what's called FICS,
but it's the cybersecurity research team here. Not my area of expertise, even though I work with
those guys. So that's where it came from and where it's going.
It's amazing.
They're building an entire new data science building.
You know, it's this very comprehensive package.
We're also communicating or coordinating with the community colleges and other
Florida universities, and then even K through 12.
And finally, the historically Black colleges throughout the Southeast.
So it's not just UF. We really want to bring
everybody in and move at light speed, and do good things.
There's such a gap between the performance of academia, even from a great school, and truly useful skills in the
marketplace. So if you're successful, you'll be, like I said at the opening, you'll be changing the world. So congratulations. Tell me about this project that you posted on LinkedIn
that you're doing today as kind of an example of almost like a story behind this huge thing
you're part of. Tell me about the tower that you've constructed. What is it and how do you
see it being used? Sure. So our team does mixed-method
research. In other words, we do a lot of deep-dive interviews with offenders in the wild, right? Out
there in stores, parking lots, wherever, where they fence and so on. And we do this a lot and
have done it with robbers and all. We do big data dives, right? We'll get, let's say, the three major drug
chains in America to provide us all their data around all their robbery events for, say, the last 36 months and look for aiming points.
And then, though, what we do is now let's come up with much more focused, precise solutions
and solution sets at each of those aiming points that we've identified.
Well, some of these things, they're not ready for prime time, right?
So we actually need some lab spaces. So I've got five physical labs, one that looks like a very cool tricked out store.
We've got over 130 different security technologies in there from over 70 different technology
companies.
The guys you know out there, the Bosches and JCIs, Sensormatics and Checkpoints and on and
on.
But we also, because we're in the real world, we want to have that transition from interior.
I even have a mocked up parking lot inside where we can fail fast and forward and all these terms you hear and learn how to integrate.
And then we move out to the parking lots. And so UF Research there, they were talked into, you know, they allow me to use this entire block called Innovation Square. That's about two blocks from the main campus, our 2,000-acre campus. That's my lab,
that whole block, including the huge incubator building we're in for startups. And so I use this
parking lot that's by and large abandoned. And I have that thing with all types of sensor
technologies to protect each other, the sensors first. And then I've got three of those towers you're referring to from
LiveView. They have everything from LIDAR and radar sensors to aural sensors. And of course,
all types of day, night video that we can do different interesting things with. But it's all
there to safeguard the place users and inform them. So we use them as
research platforms to learn more about normal versus anomalous behavior. What are the two or three
or four cues we want, that cluster of cues, instead of acting on just one? And then we also use
them as deterrents or countermeasures. And they were deployed, the enhanced versions during the
looting and other pretty wild events.
But Walmart, Kroger, and other major chains use them routinely now in different areas
to safeguard their customers and associates.
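[Editor's note: the "cluster of cues" idea described above, acting on a combination of weak signals rather than any single one, can be sketched as a simple weighted-scoring rule. This is a hypothetical illustration only; the cue names, weights, and threshold below are invented for the sketch and are not LPRC's actual model.]

```python
# Hypothetical cue weights -- invented for illustration, not LPRC's real model.
CUE_WEIGHTS = {
    "loitering_after_hours": 0.4,
    "approaches_multiple_vehicles": 0.3,
    "face_concealed": 0.2,
    "prior_alert_match": 0.5,
}

# Act only when several cues combine; one weak cue alone stays below the bar.
ALERT_THRESHOLD = 0.7

def alert_score(observed_cues):
    """Sum the weights of the cues a sensor platform has observed."""
    return sum(CUE_WEIGHTS.get(cue, 0.0) for cue in observed_cues)

def should_alert(observed_cues):
    """Trigger a deter/disrupt/document response only for a cluster of cues."""
    return alert_score(observed_cues) >= ALERT_THRESHOLD
```

A single cue such as a concealed face (score 0.2) stays quiet, while loitering after hours plus approaching multiple vehicles plus a concealed face (score 0.9) crosses the threshold and triggers a response.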
Wow.
So because inherently loss prevention is associated a lot with retail, are the big retail companies also helping to fund this or participating in
the results of the research? They really are. And so the LPRC is a community of over 150
corporations, right? So we've got 68 major retail chains, the big guys, right? The Bloomies and
Macy's and Nordstroms, to the AutoZones and Family Dollars and, you know, CVS, Best Buy, all those guys. So we have 68 chains. Their vice presidents
are the members, but their teams in the corporate office and in the field work year round. I have
three other criminologists on my team, three research scientists. They facilitate seven
working groups that meet, from violent crime and product protection, through data analytics, through innovation.
And they interact throughout the year with our scientists and each other.
They come up with new projects.
We've got supply chain protection and so forth.
So that's how they interact.
That's how they fund.
They each pay a certain amount every 12 months, all 150-plus corporations. And
that all goes into the kitty. That's what we use to build and operate the labs and have a
world-class team. Very, very good. Let's archive this podcast at some point and we'll revisit it
in five years. And we'll be having a different kind of conversation there. Let's do a little predicting. It's always dangerous, but in five years, how will the world change because of your
contribution at the research center at the UF? How will the world change? You know, I think one
point that's going to be interesting is everybody's got an opinion; how well informed
it is or how logical it is, is always open. So I think let's
look at some of these technologies, AI, for example, if we're trying to
understand whether somebody is coming our way. Our starting principle here, and this is why
I'm hoping to change our starting principle, where we begin with what's ethical, equitable,
and so on, and privacy and these things, is our first mission, our obligation:
the most moral, ethical, and equitable thing we can do is to safeguard those people that are there using that place.
And so what if somebody posts and they've got a handgun and they say, I'm headed to XYZ location.
And we don't just work with retailers, by the way, but I'm heading to X store
to settle up. Now we have a heads up. But if we're not out there trying to understand,
and we don't have crawlers or some sensor way to know that, and then maybe determine the
validity of it, we can have a real problem, right? I mean, a deadly problem. And further though,
what if we can pick up that individual's image as they're coming into a parking lot?
And so that manager has a heads up. They can decide what they're going to do or not do.
But they have a heads up. They could lock the doors. They could abandon the place.
They could get law enforcement help. But if they don't know, because somebody somewhere is concerned about privacy.
And so we call all this value exchange, VX, right? It's not a new term, but the idea is all day, every day, every one of us exchange some privacy to get convenience or entertainment, or in this case, safety and security. We give it up. We give it up all the time. And so our headwinds right now, and always will be, but maybe in five years won't be as serious, are, well, what about privacy? What about big brother? Again, our contention is I think we
can do, and I know we can do these things where we can better safeguard and understand value
exchanges. And if people better understand you're going to be safer or your loved one that is there
visiting, shopping there or working there. It's amazing when we do those kinds of real world
surveys, people are like, I'm all behind that. But that could be a privacy issue?
That's their problem.
My concern is my loved one.
Very interesting.
So we, like every other technology advancement known to man,
at some point we cross the chasm
and we learn to trust it for things that safeguard us,
give us more convenience,
give us more performance in our company.
That's very good. So let's call that out for a second, because you've got towers on the campus.
You've called them innovation zones, I believe. And so when the students and the faculty come
into these zones, they know what you're experimenting with. It's right out there in
front. What is the reaction of the prevailing culture right now based on what
you're doing? You know, it's really interesting. There are those that are kind of against
everything. And so they might have a negative comment. And we've seen a couple, not as many
as we might think. I've grown cynical in my elderly phase here of life. But you notice,
and we map this, right? We have geospatial statistics. We can see parking patterns.
Guess what?
Some of those same people park next to that tower.
They don't want their vehicle to be messed with or themselves or their loved one that
might be with them.
So that's how we do it.
We go with empirical evidence, right?
And so if we launch our drone and take pictures of parking lots, not just our test one, but
in real store environments, we move that tower to different
places. At night, we see clusters. It's not the most convenient place to park. They start moving
to the safest place to park. Beautiful, beautiful. As I used to say around parenting and also about
process improvement, don't believe what people say. Study their behaviors. They'll tell you what
they need. There you go. Couldn't say it better. Very good. This has been a great conversation with
Dr. Read Hayes. Read, should we be putting in the description of the podcast links to the
Research Center? What would be the most appropriate place for people to
go to learn more? Right. I think, yes, it's lpresearch.org is the best place to go. It's a
very comprehensive site. You could always, of course, Google Reed Hayes or LPRC or anything
else, but that's probably the most comprehensive site. And what I didn't mention in my opening remarks about Read is he's also a prolific author in the loss prevention space. Read,
this has been a great conversation. Thank you so much. My pleasure. I loved it, Ron.