LPRC - Episode 10 – Emerging Relationship Between Academic Research And Retail
Episode Date: July 26, 2018
Transcript
Hi everyone, welcome back to Crime Science.
In this podcast, we aim to explore the science of crime prevention and the practical application
of this science for loss prevention and asset protection practitioners as well as other
professionals.
Co-hosts Dr. Read Hayes of the Loss Prevention Research Council and Tom Meehan of CONTROLTEK
discuss a wide range of topics with industry experts, thought leaders, solution providers,
and many more.
On today's episode, we have co-host Dr. Read Hayes of the Loss Prevention Research Council and Tom Meehan of CONTROLTEK discussing academic research and retail with LPRC research scientist
Dr. Stuart Strom.
We would like to thank Bosch for making this episode possible.
Be a leader in loss prevention by implementing integrated solutions that enhance safety,
reduce shrink, and help to improve merchandising, operations, and customer service.
Bosch integrated security and communication solutions span zones one through four in the LPRC's zones of influence, while enriching the customer experience and delivering valuable
data to help increase retail profitability.
Learn more by visiting Bosch online at boschsecurity.com.
All right, well, welcome everybody to another episode of Crime Science.
This will be actually our 10th episode.
We're excited about it.
And I think most of you know that what we do at the Loss Prevention Research Council
and our team at the University of Florida, of course, is conduct research.
But why and how and what does that mean?
What's that look like?
And I know that some of you out there have been asking, you know, why research or what does that look like?
Why not? You know, what are the limitations?
So today what we're going to do is have one of our team at the LPRC join us again.
That's Dr. Stuart Strom, who is on our research team here at the LPRC. Myself and my co-host Tom Meehan, welcome Tom, are going to spend a little quality time here with Stuart and try to understand a little bit more about the why, the how, the why not, and the limitations of research. Tom, I'll go over to you.
Any opening comments or questions you'd like to start with today?
Well, thank you, Read. It's really exciting to be here on the 10th episode
and to have Stuart here.
And I work pretty heavily with Stuart,
so this is exciting for me.
And I think what I'd like to just talk about
is the relationship between retail
and the academic research,
because I can recall vividly, 10 years ago, thinking, what exactly is the Loss Prevention Research Council? What does that mean? What does it do for us? So Stu,
I'll throw that question out to you. You know, what's the relationship between retail
and academic research? Yeah, thanks, Tom. I think the state of the field right now when it comes to that relationship, it's better and it's getting better.
The past 15 years, we've seen this proliferation of academic centers that focus more on practical domains such as retail.
You know, universities are moving towards a model where they provide more usable
skills to students. And that means, well, that means really two things. Number one, focusing a
lot more on applied research, right? So applied versus theoretical research. Applied is using
theories and actually taking those theories and showing how they could be, you know, practically
deployed in the real world, I suppose. So there's been that movement. And there's also been this
movement towards a lot of, I want to say, interdisciplinary centers where you take
academics from several fields, you know, from STEM fields, mathematics, engineering,
from social sciences fields, from humanities, and combine them with people that aren't necessarily
in academia and see sort of what comes out of it, right? What work can be done. So there's a lot
more movement towards that recently. And I think that that is sort of the genesis of
LPRC. And it's also the genesis of things like, you know, the work that's being done at Auburn with RFID tagging and a lot of the really groundbreaking research being done there.
That's also the genesis of things like the UF Retailing Center. The idea, again, is to just make sure that when students come out of college,
they have a familiarity with sort of the practical skills that will be necessary to really succeed in business
or really any other domain they choose to go into.
I would say, too, to build on that, Stuart, that I can remember years ago, after September 11th, that several universities were contacted by what later would become the Department of Homeland Security and asked to put together coalitions and interdisciplinary teams. And I was asked to come to a meeting because we interviewed offenders systematically, and the question was, could what we do apply to terrorists? But in this room,
and it was a really neat meeting, we had engineers in there. We had building construction people
that were looking at blast-proof materials and things like that. We had biologists in there
looking at, obviously, those bio-threats, and psychologists in there that looked at brain
imaging, and anthropologists that understood cultures, and of course, as a criminologist, myself, to name a few. So I just thought I'd build on that and the idea that, you know, we're really here to solve problems as well as to educate and provide jobs and capabilities to students as they graduate from the university. So Tom, what are your thoughts on the initial look-see at what research is supposed to be and how it helps us get better?
So Read, I think you actually led into
kind of my next question, with both you and Stuart and the Loss Prevention Research Council, kind of tying a few of these together. You talked about September 11th, but what are some of the other academic research projects that you've done that relate to retail? I know that's a loaded question because there's thousands
of them, but if you could name the top five or 10, because a lot of the listeners are going to
be listening for the first time and don't really understand the relationship between academia and
retail. At face value, I think everybody that's involved in the LPRC directly gets it.
But what we're finding based on feedback
is a lot of folks that are listening
find us through the title of crime science
and don't really relate how that runs through.
So you mentioned that 9-11 one, but maybe you can name, you know, five or 10 studies just off the top of your head and how they related to retail?
Yeah. So, I mean, I think it's best to sort of split them up into sort of different general
domains. So LPRC, we do obviously a lot of research into how to use new technology,
things like tagging, RFID. We do studies on PVMs. But if I had to categorize it, I don't know if it's best to say
like the top 10, but we can go into a little more detail later about things like, you know,
individual studies. But in general, I think a lot of it is, number one, sort of the basic randomized control trial utilization studies. Like, hey, we're going to take this new product protection wrap, we're going to randomly assign it to 50 stores, and we're going to use, you know, 50 other stores as controls, and see what the effect on sales and shrink is. So there's a lot of that. But we also do, for example, one study we're doing now
relates to more of the human behavioral aspect, right? So one of the sort of more recent studies
in psychology that we found interesting is this guy Dan Ariely has a theory of self-maintenance,
of sort of self-identity maintenance.
That sounds a little bit abstract, but I could sort of break it down for you.
The question is, what allows honest people to act dishonestly?
And his idea is, otherwise honest people will act dishonestly up to a point.
That point is where they can no longer maintain this concept of themselves as
good people. It's called the concept of self-maintenance, right? So what does this have
to do with retailing? Well, I mean, think about it this way. Your self-checkout, right?
As we all know, with self-checkout, there has been an increased amount of, you could say, you know, theft, small-time theft, stuff like that.
And really, it gives incentives for otherwise honest customers to act dishonestly, right?
You may accidentally, quote, unquote, ring up that really expensive, you know, pound of organic strawberries as bananas, right?
Well, the question is, why do people do that? And how do you get them to stop? Well, if you take Dan Ariely's theory, they can do
that because they can maintain a concept of themselves as a good person, right? Well, you know,
it's only a little bit. I'm not stealing that much. Hey, I mean, it's all fruit anyhow, right?
Well, you know, this retailer makes billions of dollars. Why shouldn't I take from them a little bit, right?
So if you can disrupt that, perhaps you can stop that sort of theft, right? For example, if you put up signage that said, you know, theft is theft, and I'm just spitballing here, it doesn't matter what kind of theft, right?
Or if you put up, you know, some sort of indication that what they're doing is really a crime.
It's not accidental. Well, perhaps you can disrupt that behavior.
And we wouldn't have known that if it hadn't been for this sort of seminal study by this guy, Dan Ariely, which initially had
nothing to do with retail, right? You're just utilizing this theory that someone came up with
for a completely different field. So that was what I was saying. Human behavior is sort of the second,
you know, general bucket that we deal with. And a lot of what we do also deals with stuff like time and motion studies, right?
If you have a product or a technology that's going to be effective in stopping theft or fraud or violence, it's only going to be successful if it's affordable.
It's only going to be successful if your organization is able to maintain its bottom line.
So that's the other aspect.
And it's, I think, far too often ignored, right?
So I would say it falls into those three buckets.
But I know Read's going to have some additional ones. He's been here much longer than I have, so.
Well, no, I think that was great
because what Stuart did was a fantastic job, I think,
in addressing sort of what you're laying out there, Tom.
And why do we use theory? What is theory? It sounds scary, you know, it sounds very abstract, it sounds very academic and pretty unusable. But at the end of the day, what theory is supposed to be, in fact, is a very practical explanation of how something works. And the saying, excuse me, that you hear in academia is there should be nothing more practical than a good theory. So the theory lays out these mechanisms. Well, this domino starts tipping over
because of this, and then that hits this domino, and so on. But that's all put together by, it's constructed by, logic. We think this is what's going on, and then it's informed by observation. And observation should be systematic. It should be by many people looking at the issue in many different ways over time. And science is iterative. We're all building on each other. But you don't want to go down the wrong path either.
And you see that a lot with theory where people take off and for 20 years they're running in a direction.
And then somebody realizes, uh-oh, I'm not sure this is the right direction.
But in our field, that's what Stuart's talking about.
Somebody like Dan is going to pull together, through observation and building on others' observations and theory, a picture of kind of how we act and why.
And so then we're going to pull from that and say, all right, how do we use that in the real world?
How do we make sense of that?
But how do we mostly influence behavior?
Because we understand at a deeper and slightly broader level what's going on when somebody who's thinking about
stealing before they come into place, premeditated, confronts a self-checkout, or somebody that
didn't think about that, but now it's entered their mind, how do we persuade either or both
of those people? And what does that look like? What are the dynamics in that person's brain and background, and in what we call the foreground, the environment they're dealing with, and what can we do to change the equation a little bit, to convince them, let's make better choices? But as far as research goes, I think we have conducted over 300 projects, maybe more than that, because we're working on 60-plus right now; Mike on our team informs me it's 67 to be exact. But I think the enhanced public view monitors, Tom, are probably our biggest win and contribution as a team. You go back, and I think it was now 2006 or so, in our CVS store lab, we put one in.
We thought, well, you know, I think monitors where the individual can see themselves live in clear color, high res, might have an impact on their behavior, rather than a blurry TV set at the entryway.
How do we make this thing really work to affect behavior in a small area, what we now call zone two, you know, that immediate area around something we're trying
to protect. So, Tom, that's kind of a very long answer to your short question here, but we've got
many, many wins in doing what Stuart described, and that is trying to draw on what many, many, many brilliant people have proposed and informed through research.
And then try to apply that and adjust that and make our own observations around that.
Well, I think those answers really helped.
And I'll just relate kind of the EPVM example, because that's one that I can relate to personally, and remember vividly over the years. Putting EPVMs into an upscale retailer is a challenge to begin with. So we used the research and the data to support why it was important, starting in the ceiling and then ending up at eye level with a smaller monitor in areas of the store, and really got to physically live the results, where we put things in and saw not only the shrink number go down, but the actual activity change. So I can tell you that in certain markets in South Florida, actually, I recall vividly putting EPVMs outside of fitting room entrances, as you're walking into the main entrance, at eye level. And within a few hours, you could actually see the activity shift, in real time, to where the monitors weren't. And again, I can also remember sitting in a boardroom saying, this is not just an idea that I came up with, and here's some facts to back it. So I think that that's my favorite example, too, because I can relate to it personally.
I have a question really for both of you, because I think we led into this a little bit.
But breaking it down a little bit further, you know, how can academic research help retailers, in your opinion?
Yeah, well, I think sort of as I mentioned before, really we're getting into this. I think now, in the last five years, a lot of great academic research has sort of bloomed in the field
of, you know, crime prevention and retail environments. And what we're getting a lot is
we're forming, I want to say, a distinct discipline, right?
Or sort of a distinct research agenda, I should say, where you're actually now seeing systematic meta-studies of things like EAS tagging.
So Sidebottom, Aidan Sidebottom is one of the scholars that's working in the UK to do systematic meta-studies of things like tagging.
Because, you know, Read, as you know, and Tom, I'm sure you know, one study does not encompass everything, right?
There's error.
You know, you could have a study where you get positive results, or statistically significant results, and those could always be in error.
You always see, you know, something called a p-value, right? And basically what the p-value says is, let's just say there's your null hypothesis, which is that there's no effect, that
EAS tags have absolutely no effect on theft. And you end up doing a study and you find that there is a decrease in theft
because of EAS tags. And you'll see what's called the size of that effect. Let's say there's a 20%
decrease. And next to it, you'll see this little thing with a P next to it. It's called the P value.
The idea behind a p-value is essentially, let's just say you replicated this study a hundred times in a world where the tags truly have no effect. A p-value of 0.05 means about five of those hundred times, you would still find positive results just by chance. You got those results in error. There's probably a much better way to explain it, and I apologize, but the idea is basically a p-value tells you the chance of finding results that strong in error, when there's really no effect.
So what that means is that the more studies you do, the more consistent results you find,
the more you can be assured that these tags work, for example, or PVM, same thing. That's why we,
at LPRC, we didn't stick with one PVM study. You know, we had to do multiple ones. We had to sort of validate the results that we have. So all this is to essentially say, the way it can help retailers is that persistent, consistent, and repeated studies of these different technologies in different environments are going to validate, you know, hypotheses on what works and what doesn't
work. And it's also going to allow us to distinguish how they work in different environments. So how
PVMs work in smaller environments, how they work in larger environments, how they work in busier
environments, how they work in less busy environments. All these are questions that we can find better answers to through repeated study.
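The randomized-trial and p-value logic Stuart walks through above can be sketched in a few lines. This is a minimal illustration, not LPRC's actual methodology: the store shrink numbers are made up, and the permutation test stands in for whatever analysis a real study would use.

```python
import random
import statistics

random.seed(42)

def run_trial(effect=0.0, n_stores=50):
    """Simulate one randomized control trial: shrink (as % of sales)
    for n treatment stores vs. n control stores. Numbers are made up."""
    control = [random.gauss(2.0, 0.5) for _ in range(n_stores)]
    treated = [random.gauss(2.0 - effect, 0.5) for _ in range(n_stores)]
    return control, treated

def p_value(control, treated, n_perm=2000):
    """Permutation test: how often does randomly relabeling the stores
    produce a difference at least as large as the one observed?"""
    observed = statistics.mean(control) - statistics.mean(treated)
    pooled = control + treated
    n = len(control)
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
        if abs(diff) >= abs(observed):
            hits += 1
    return hits / n_perm

# With no true effect, "significant" results still show up occasionally,
# which is exactly the replicate-a-hundred-times point made above.
false_positives = 0
for _ in range(100):
    c, t = run_trial(effect=0.0)
    if p_value(c, t, n_perm=500) < 0.05:
        false_positives += 1
print(f"false positives in 100 null trials: {false_positives}")
```

Running many null trials makes the point that a single significant result can be chance, which is why repeated studies in different environments matter.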
And that's how I would say research can help retailers.
And that's why it's really important, you know, the LPRC, we say it's a research community.
And the reason we say it's a research community is we can't do these studies alone.
You know, we need help from retailers.
We need access.
We need to know the problems that you all are facing, right?
As Read mentioned, it's an iterative process.
You know, every new study, the results are going to lead to more questions. And it's our job to sort of identify, you know, theories that are going to address those new research questions, identify hypotheses, and test those hypotheses. So that's my answer.
So, I guess, to quickly build on that: we do try to minimize error in our studies, and we want to make things accurate. We understand that
we're dealing with practitioners that are dealing with the real world with serious,
serious problems in violence, in theft, and in fraud, and that you don't want to offend people
or run off the good person. Like in medicine, we don't want to do harm. So that's all part of the reason that you want a little more rigorous study. And you have side effects: if you do something, you're probably going to knock over the domino you're trying to knock over, but you might knock over some things you don't want to. And we see that all the time, where somebody tries something and it doesn't result in what they want.
In fact, it could even have what we call a backfire effect.
And there's some good criminological research out there on that issue where they will try different policing methods in a very concentrated area.
And it's not just what you do.
It really is how you do it.
And so if you do something one way that sounds good, you might arrest more people and you might reduce crime. But if you don't handle it properly, you might get more blowback, where people start to generate resentment. They don't cooperate. And then now you start seeing a downturn in effectiveness. And so, great, wait a minute, how do we make that work better? And the same thing with PVMs. They need to be at a certain height to get your attention. They need to have lighting on them, make sounds to get your attention. Under our see, get, fear concept, you know, they need to be clear about what they are. But they can't be offensive, deployed in a way that a good customer doesn't like or responds poorly to. And they can't look silly or be invisible, or a criminal is not going to notice them and respond, or notices them but they're not credible.
So, Tom, any follow-up questions? I'll go over to you as we have our dialogue here with Stuart.
Yeah, actually, I guess one of the things I recognized when I was in retail is that as an asset protection professional, we're really good at fixing things.
But we also tended to throw a million different solutions at a problem.
So if we had a theft problem, we went to technology first and said, what's the latest and greatest technology?
And then really threw a bucket at it and almost always were able to
solve the problem. But, you know, we weren't necessarily able to tell which of the solutions
or behavioral changes we made impacted it. So you guys are the experts here. And this is one
of the things that helped me the most when I was sitting on the other side and figuring out what
was really there.
What are some things that you can suggest to retailers, through the things that you've learned, to help avoid, one, going immediately to the latest and greatest technology, and two, throwing five things at something,
and then at the end of the year going,
we solved the problem, but we're not really sure what happened.
So I have a one-word answer with a long follow-up.
The one-word answer is think.
Think through the process of what you're sort of deploying in your stores.
Think through the process of what is it supposed to do?
What's the method of action?
What's the mode of action?
What are potential offenders supposed to see? What are they supposed to feel? What's supposed to deter them? Why is it going to
deter them? Are there ways to overcome this technology before it's even deployed? So that's
the first thing I would say. So sort of create this model in your head about how it's supposed to work. Number two, don't underestimate offenders.
I think a lot of people in this industry think there's like, you know,
this sort of we're smart, you're dumb kind of thing.
I don't think that's true.
I think a lot of, you know, offenders, like loss prevention professionals, communicate. So if some offender has found a way to overcome this technology, you can bet they're talking to their friends. You can bet it's somewhere on an online forum. So before you sort of hop to the
newest technology, number one, think through the process. Number two, recognize that the population you are trying to deter, they sometimes, you know, really know what they're doing, I guess you could say. And then I also think that it's helpful to sort of reassess what we already have sometimes in our stores, or what you all have in your stores,
See, is it deployed correctly?
Are there better ways to deploy it?
Is there sort of a bottleneck in process
that's making something that should work not work?
So it's just sort of thinking through
a lot of what retailers do.
And then I think finally just realizing
that there's probably no magic bullet.
You could have the coolest,
most sophisticated technology.
Humans are going to find a way.
And that's the thing.
So, Read, what do you think?
No, I agree with all of the above.
We've got to deal with something in the real world.
I think this is a problem.
I'm going to understand the problem.
All right.
Next, because of my SARA process, I've scanned, found an issue.
We've got breach burglaries.
People are boring through the side of the store with a sledgehammer or an electrical tool, a breaker.
And so I've got a specific issue.
Now I'm going to say, well, where is it patterned?
Where is it happening spatially, you know, geospatially?
What time of day?
So we call it spatial temporal patterning.
Where is it happening and when?
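The spatial-temporal patterning step described here can be sketched as simply tallying incidents by store and time-of-day bucket. The incident log below is entirely hypothetical, just to show the mechanics of finding a where-and-when cluster.

```python
from collections import Counter
from datetime import datetime

# Hypothetical incident log: (store_id, timestamp) pairs.
incidents = [
    ("store_12", "2018-06-02 03:10"), ("store_12", "2018-06-09 02:45"),
    ("store_12", "2018-06-16 03:30"), ("store_07", "2018-06-05 14:20"),
    ("store_12", "2018-06-23 02:55"), ("store_07", "2018-06-20 15:05"),
]

def time_bucket(ts):
    """Bucket a timestamp into a rough time of day."""
    hour = datetime.strptime(ts, "%Y-%m-%d %H:%M").hour
    return "overnight" if hour < 6 else "daytime"

# Count incidents by (store, time-of-day) pair.
pattern = Counter((store, time_bucket(ts)) for store, ts in incidents)

# The most common pair is the spatial-temporal hot spot.
hot_spot, count = pattern.most_common(1)[0]
print(hot_spot, count)
```

In this toy log, the overnight cluster at one store jumps out immediately, which is the kind of pattern that lets the response be targeted to a place and time rather than deployed everywhere.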
Now, who's doing it?
And we kind of got an idea of how.
Maybe we've got video footage.
We've got a map showing a cluster of issues here. All right. So I kind of understand how and where and when. I may or may not understand why. But I think, oh, they're coming in after Apple products. Okay, and they seem to know where the Apple products are. So they've had to have some pre-event indicators they're generating.
Somebody had to know where it is. They either work there, they're colluding with somebody that worked there, or they just walked in the store and took a detour back in the back room or whatever's
going on here. Now, as Stuart says, I'm starting to understand what the problem is. That's the A in SARA, analysis. I've kind of defined it. Now I'm going to go to the response, the R, but the response now is
going to be more targeted. I'm not going to just go everywhere and do everything and confound,
as you talked about before, Tom, where I'm going to throw a hundred things at it and confound.
And if something, if the problem goes away or goes down, I don't really know why, because I did a lot
of things instead of doing one thing or understanding how each thing might work and did it work and how did it work differently.
So anyway, now I'm going to deploy a more focused response to place and time and what I do
because it's now going to make it harder to do that.
I'm going to do this or I'm going to do that.
I'm going to have earlier detection of a person standing outside my building.
Maybe it's going to alert and flash lights and do things so they
don't want to be there anymore or make it harder or less rewarding. I'm going to move the Apple or
something. So that's an example of using research to solve a problem. And now we're going to do it
systematically. But sampling, measurement, and analysis are our big three tools here.
And as Stuart says, we don't always get them right.
We want to be representative, so we've got to sample the best we can.
It may be purposeful or the best, most convenient way to do it.
That's not ideal.
We want it to be probabilistic.
In other words, I think this sample of blood represents the blood in this human, and I'm trying to see if they have a disease state or not. That's the same thing we've got to deal with in research for you guys in the field. So from a practitioner standpoint, any follow-up questions there, Tom? Because it's always good, I think, to combine practitioners with scientists. I have the luxury of having lived in both worlds, but I haven't lived on the practitioner side in a couple of decades. So back to you, Tom.
No, I think that really helps, and thank you, especially when you break it down that way and really run through it. I know that I only have a couple of questions left, but I just want to ask basically a clarifying question. When you talk about time and motion, I think that's probably one of the things that, for me, at least in all of my career, including outside of retail, we're really talking about scalability. Am I correct when I say that? Like, can we put this in multiple locations, and can we make the ROI work?
Yeah, that's it. That's a huge thing. Oh, yeah, that's a huge thing.
And it's just things that you don't think of, right?
How many seconds is it going to take someone to put this tag on?
Or how many seconds is it going to take to take it off, right?
That's a cost.
That's labor cost, right?
If a tag takes 40 seconds to put on and you have to tag,
I don't know, 400 items, I mean, that's your day, right? That's most of your day. And that means you're not going to be doing other things, right?
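The time-and-motion arithmetic behind that example is worth making explicit. This is just the 40-seconds-per-tag figure from the conversation; the $15/hour labor rate is an assumption for illustration.

```python
# Hypothetical time-and-motion math for the tagging example above:
# 40 seconds per tag, 400 items, and an assumed $15/hour labor rate.
seconds_per_tag = 40
items = 400
hourly_wage = 15.00  # illustrative assumption

total_hours = seconds_per_tag * items / 3600  # 16,000 seconds total
labor_cost = total_hours * hourly_wage

print(f"{total_hours:.1f} hours, ${labor_cost:.2f} per tagging cycle")
```

At 40 seconds per tag, 400 items is about four and a half hours of labor, most of a shift spent on one protection task instead of everything else.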
So there are simple things like that that, you know, you don't think about. It may be a great solution, but if it's tough to get off, or if it has a high failure rate, if it's just not practical to do, or if it takes too much training to get someone to do it, that matters. So these are, again, things that you think through. Read, I'll turn it over to you.
No, no, I think this is really good
discussion. And to Tom's point, what you're describing is this interface, again, of the real world and research, and how they're both one and the same, hopefully, if you do it right, by collaborating. And you mentioned earlier, LPRC is a research and results community, but we need the community.
We need the practitioners, the experts, the domain experts in that field,
the people that are out there day in and day out grinding, trying to reduce crime and loss in and around their places.
And we want to make it practical.
So we're not just going to say, okay, well, we're going to now make it harder, riskier, less rewarding.
We're going to deploy in these five zones.
We've got to make it more obvious and recognizable and credible to deter the would-be offender, the bad guy, if you will.
But wait a minute, how much does this cost to purchase, to deploy, to operate?
And that's now an ROI calculation.
So you see this interface of very scientific research.
Now we're going to go into almost this engineering area and say, all right, we've got to handle this human engineering factor, what we called in the Army, you know, MANPRINT, so that you have a good ROI.
And I think that's where you're going, Tom. And we're trying to circle back to you here.
But it has to be cost effective as well as efficacious.
In other words, work and work well.
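The purchase-deploy-operate ROI calculation Read describes can be sketched with placeholder numbers. Every figure below is an illustrative assumption, not an LPRC result: a made-up hardware cost amortized over three years, a made-up shrink baseline, and an assumed 10% reduction.

```python
# Hypothetical ROI calculation for a deployed solution, per store per year.
# All numbers are illustrative assumptions, not study results.
purchase_cost = 1200.00          # one-time hardware, amortized over 3 years
deploy_cost = 300.00             # one-time install, amortized over 3 years
operate_cost = 250.00            # annual maintenance and labor
annual_shrink_before = 20000.00  # assumed shrink baseline
shrink_reduction = 0.10          # assumed effect size from a study

annual_cost = (purchase_cost + deploy_cost) / 3 + operate_cost
annual_benefit = annual_shrink_before * shrink_reduction
roi = (annual_benefit - annual_cost) / annual_cost

print(f"annual cost ${annual_cost:.2f}, benefit ${annual_benefit:.2f}, ROI {roi:.0%}")
```

The point is that efficacy and cost sit in the same equation: a solution that works but costs more to purchase, deploy, and operate than it saves fails the ROI test.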
Yeah, thank you. I mean, I think one of the things, and this will kind of be my next question for you guys, is that I am obviously well versed in what you guys do at the LPRC, because it's been so many years now. But things like scalability: when I'm sitting in a boardroom, or when I was, it was, how do we scale this, how do we create an ROI? And sometimes there's not a clear understanding of, oh, I have a time and motion study so that I can show, right now, what we're going to save, not only from a money standpoint,
but hey, we're going to make lives easier. It's going to make the end user be able to do something
quicker, faster, better. And at the end of the day, our associates are going to be happier
and we're going to solve the problem or potentially prevent it or mitigate the risk.
So that was really where that question came from. And this kind of goes to my next question, and again, more of a clarifier. You know, Stuart, you did an excellent job of explaining things, but what are, you know, what are some of the theories that the LPRC relies on, or you rely on, when you're doing research? And if you can, for the audience, break it down kind of like you did before: the scientific example, and then the one sentence afterwards of, hey, for a layman, this is what that means.
Yeah. So I think a lot of our research is rooted in what's called sort of more economic theories of criminology. This guy Gary Becker, a lot of it is sort of rooted in that.
And a lot of it is what's called routine activity theory as well.
I mean, the idea behind a more economic approach to criminology is, you know,
would-be offenders, they exercise a sort of cost-benefit analysis, right? Whether or not they're going to do a crime,
if the sort of benefits outweigh the costs, they're going to do it, right? And routine
activity approach talks about, well, sort of how the decision to commit a crime occurs
and sort of how it occurs in space. So routine activity theory basically says, you know, it isn't that someone is at home and sort of comes up with the idea, I'm going to go to the store and I'm going to shoplift steaks, right, and then they go out, go to the store, and come back.
Routine activity theory basically says people commit crime during the course of their routine activities, right?
If they see an opportunity,
they're going to commit a crime. So we take a lot of these theories to heart. And then,
you know, so a lot of what we do talks about how can you alter the environment so when people are
engaging in routine activity that they don't find any targets of opportunity, right? If every person,
every offender especially, or potential offender, is making this cost-benefit analysis.
How do you help nudge them?
How do you help nudge them towards thinking, hey, the costs are going to outweigh the benefits?
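To make the cost-benefit idea concrete, here is a minimal sketch. All the numbers are hypothetical, chosen only to show how a deterrence "nudge" can flip the decision; this is an illustration of the rational-choice logic, not the LPRC's actual model.

```python
# Illustrative sketch of the rational-choice idea: a would-be offender compares
# the expected benefit of an offense against its expected cost.
# All figures below are invented for illustration.

def would_offend(benefit, penalty, p_caught):
    """Return True if the expected benefit exceeds the expected cost."""
    return benefit > penalty * p_caught

# Baseline: $100 of merchandise, $500 effective penalty, 10% perceived chance
# of being caught. Expected cost is $50, so the benefit wins.
print(would_offend(benefit=100, penalty=500, p_caught=0.10))  # True

# A deterrence "nudge" raises the perceived risk of apprehension to 30%.
# Expected cost is now $150, so the calculus flips.
print(would_offend(benefit=100, penalty=500, p_caught=0.30))  # False
```

As the discussion notes, real offenders are not this tidy: shame, identity, and guilt neutralization all feed into the perceived costs and benefits.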
So those are two of the theories.
I mean, the first one's called rational choice theory, the idea that we sort of make these cost-benefit analyses, and that's how we decide to do something.
And the second one's called the routine activity approach, the idea that we are going to sort of,
if you're going to commit a crime, you're going to do it during routine activity,
look sort of for targets of opportunity. Now, I have to say that there have been a lot of
challenges to these, especially the rational choice theories, right? Other things matter,
things like, you know, identity, things like, as I mentioned before,
the sort of more psychological concepts of self-maintenance.
How do I maintain the notion that I'm a good person?
And I know, Reed, you've done a little bit talking about methods of neutralization.
I know that's some really interesting research there.
Yeah, so a really good point, I think, is that we're dealing with humans.
We're all different.
We have different genomics.
We all have different parenting that we've gone through, just as we have different peer groups.
We pay attention to certain things and respond to certain things, and we're all different.
We're very heterogeneous in that way.
But so we have to deal with that.
But one thing through systematic research has been found that if you're normally socialized,
you don't suffer from severe psychopathy, for example, then you're probably going to
feel some sense of guilt at some level.
And that may or may not guide you, but that's there.
And so one thing that's been looked at a lot is before you do something,
the fancy term is a priori, but some people will neutralize their guilt.
We rationalize.
We try and justify something.
We might do it during an attempt, a crime attempt, and we might do it afterwards. But, you know, hey, this company owes me. I work long and hard. They're not paying me
enough. They're paying other people more than me. And so, you know what? It's owed to me. And that's
a rationalization, what we call a neutralization technique. They're saying, you know what?
There's a metaphor of the ledger here. It's my turn.
They owe me.
We're pretty in the red here, so I'm going to act on this.
So there's a heavy psychological component as well as sociological, how you relate to others.
And in that place where you work, or you're visiting, or in the neighborhood you move around in under routine activity.
And I think rational choice, by the way, many of us in criminology have tried to expand beyond a pure economic model of risk and reward: hey, this is worth a thousand dollars, and the reward of me getting it versus the risk of me getting caught and severely punished is here. You know, the calculus isn't that neat and clean. Never has been, never will be.
But the risk could be shaming, or I feel embarrassed, and so you can neutralize your guilt. So you can see how these things interface; rational choice is pretty psychologically saturated as a term.
And so again, we don't want to go too far into the weeds here, but in everything we do, our team tries to look at that: hey, let's look at this environment.
what are they trying to do, sell more?
Oh, and lose less.
We're trying to have a great place to shop and work and visit.
And so we're here to try and maintain that homeostasis, maintain that stability, and reduce their theft, fraud, and violence there.
Now we've got a specific issue that's popped up.
So how do we deal with that?
And so you have things that are built into that environment to do that,
sort of this basic native immune system there,
and then you may have things that we've got to adapt to.
So anyway, that's the way we look at things,
and that's our job working with you, Tom,
and your fellow practitioners out there
in the field to say, all right, you got a real world problem. We're going to help you work on it.
We're going to sit around and read and think about and study in the lab and then in the stores in
Gainesville, Florida, and then we're going to roll it out in a chain here. So let's go over to you,
Tom. What else? What else you got? So this is my last question, or kind of comment, Stuart.
I know that there are challenges that are faced sometimes in research.
And I know in the studies I've been involved in, there has been some trouble gathering data from retailers because there were concerns about confidentiality.
Can you, either one of you, just explain briefly about blind studies and how you can protect that data while still being able to provide the results to
a greater audience? Yeah, absolutely. I think you're completely right. I mean, so we're in a
domain here, it's business, right? And your organization is not going to want to give another organization a leg up in any way.
You're not going to want to show your cards, as it were.
Right. So anything you any data you have, you want to keep as confidential as possible.
So we're sort of at this, you know, I want to say crossroads here.
That's probably not the right word. But we have a situation in which retail loss
prevention needs to cooperate with one another. Yet the retailers themselves, of course, are in
competition. They're in competition for sales. They're in competition for market share, all that
sort of stuff. So part of the problem that we have is that if retailer one gives us data, they do not
want us to share it with retailer
two, and they shouldn't. So ways we overcome that is the obvious one is if we do a study,
a lot of times what we'll do is we'll anonymize the study, and we won't give any reference to
the retailer. Oftentimes, we'll pool data and anonymize it. So if we're getting data from six
different retailers, we sort of mix that data up and we
show relationships between different elements or what we call sort of variables within that data
without reference to where the data comes from. It's all sort of mixed together in this case.
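The pooling-and-anonymizing step Stuart describes might look something like the following sketch. The retailer names and shrink figures are invented, and the LPRC's actual pipeline is surely more involved; this just shows the idea of stripping identities and mixing the records.

```python
# Minimal sketch of pooling retailer data and anonymizing the source.
# All retailer names and figures here are hypothetical.
import random

raw = [
    {"retailer": "Retailer A", "store": 12, "shrink_pct": 1.8},
    {"retailer": "Retailer B", "store": 4,  "shrink_pct": 2.3},
    {"retailer": "Retailer C", "store": 77, "shrink_pct": 1.1},
]

# Map each retailer name to an opaque ID, then drop the name and store number.
ids = {name: f"R{i}" for i, name in enumerate(sorted({r["retailer"] for r in raw}), 1)}
pooled = [{"source": ids[r["retailer"]], "shrink_pct": r["shrink_pct"]} for r in raw]

# Mix the records so ordering doesn't hint at where the data came from.
random.shuffle(pooled)
print(pooled)
```

The analysis then runs on the pooled records, so relationships between variables can be reported without any reference to which retailer contributed which numbers.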
So that's a lot of what we do. Beyond that, we have sort of legal protections, like we sign
NDAs with our retailers, and we're really careful to
sort of, you know, check with them to make sure that whenever we do share data, they're comfortable
with it. So that's one of the ways that we overcome that problem. So yeah, the other, and you know,
there are a few other issues when you're conducting research. Again, since a lot of what we do is very applied, you know, Reed mentioned before sampling, measurement, and analysis;
there are all sorts of problems with that. And I'll add one as well, the idea of what we call
external validity. So first of all, when it comes to sampling, Reed mentioned before this idea of
probability sampling. Just a brief primer on that. The idea of probability
sampling is essentially, if you have a population, just say U.S. citizens, a probability sample is
one where each citizen has the same chance of being picked as every other citizen.
There are a lot of assumptions that you can make about that data, and those assumptions allow you to
make really cool inferences. But that's very hard to do in practice, getting a pure
probability sample, because some people are going to be easier to reach than others, right? So, for
example, let's take one of the populations that we have to deal with a lot, and that's the offender
population, right? And population is basically who you're studying, essentially, right?
So the offender population is obviously tough to reach.
Many of them are going to be incarcerated.
Many of them are going to be very unwilling to speak to anyone
that they may assume are police
or they may assume are going to sort of collect information on them.
So it's difficult to get a probability sample.
So we have to sort of be crafty about it sometimes.
We have to either find ways to get more representative samples,
and there's other methods to do this, something called cluster sampling,
where you get sort of different clusters of people, or stratified sampling,
where you get representative samples of each sub-demographic you're looking at. So, for example, if you were looking at men and women, and you know that men and women are equally represented in the population, you would make sure to choose half men, half women.
But oftentimes we have to deal with convenience samples. Convenience samples are, you know, whoever we can reach, say when we're recruiting offenders,
so oftentimes that's what we'll do.
But, you know, it's harder to make inferences with those samples.
So that's one of the challenges we face.
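The stratified approach described above can be sketched in a few lines of Python. The population and the 50/50 sex split are invented for illustration; the point is only that the sample is forced to match known population proportions.

```python
# Minimal sketch of stratified sampling: draw a sample whose strata match
# assumed population proportions (here, a hypothetical 50/50 split).
import random

random.seed(42)
population = [{"id": i, "sex": "M" if i % 2 == 0 else "F"} for i in range(1000)]

def stratified_sample(pop, key, proportions, n):
    """Sample n units, allocating round(n * share) to each stratum."""
    sample = []
    for stratum, share in proportions.items():
        members = [p for p in pop if p[key] == stratum]
        sample.extend(random.sample(members, round(n * share)))
    return sample

s = stratified_sample(population, "sex", {"M": 0.5, "F": 0.5}, n=100)
print(len(s))                                # 100
print(sum(1 for p in s if p["sex"] == "M"))  # 50
```

A convenience sample, by contrast, would just take whichever 100 people were easiest to reach, with no guarantee the strata come out balanced.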
The other one is measurement.
As Reed mentioned before, how do you measure some of these phenomena?
A perfect example here is going to be shrink.
Most retailers do an
end of the year audit. The great thing about that is you have that information. The bad thing is
it's for last year and it's only done once. So, you know, that'll be difficult if you're trying
a new technology out for three months. How are you going to get that official information? Sometimes we do
cycle counts. That's one way to get around it, where we basically count what's on the shelf,
and we do sort of an audit that way. But that's imperfect. So, you know, that's another difficulty
that we face. The other thing is, you know, how do you measure deterrence? You have to get really
inventive when it comes to measuring a lot of these essentially concepts, right?
This is the problem of operationalization, right?
How do we make sure to find physical manifestations or, you know, tangible manifestations of these phenomena that we're trying to measure?
Like how much someone is going to be deterred?
You can't just ask someone,
how much are you going to be deterred? You have to come up with scales,
you know, on a one to 10 scale, how much, how likely would you be to commit theft in this store
with this technology? That's just a sort of bare-bones example.
And then the final thing is this idea of external validity. So let's just say you're doing a medical experiment and you are testing out this new drug. You take a thousand people at random,
and then of those thousand people, you randomly assign 500 people this new medication and the
other 500 people you assign a placebo, essentially a sugar pill.
And you want to say, well, what's the difference between these two samples,
one that got the pill, one that got the placebo? And you measure it, and you find that the pill is effective: it reduces the incidence of this sickness by 20%, 30%, right? But the problem is this: that was under lab settings,
right? You had someone administering the medication. You had someone watching the person
take it, right? I mean, I hate to use such an elementary example, but what if the medication tastes bad? And what if your population were children, right? How can you make sure they're actually going to
take that medication when they sort of use it in the real world, right? How are you going to make
sure that there aren't other, I guess you could say, factors in the real world that are going to
limit the effectiveness of that medication? That's external validity, right? The idea is,
well, we have something that works in the lab, but how do you make sure it works in the real world?
And that, I think, is a lot of LPRC's task, is making sure that, yeah, we test it in the lab,
it works, but does it really work for retailers in their environments with all the other things
that are going on.
So that's one of the main challenges we face.
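The drug-trial example can be sketched as a toy simulation. The sickness rates here are made up and the outcomes are simulated, purely to show the shape of random assignment and group comparison, not real trial data.

```python
# Toy version of the drug-trial example: randomly assign 1,000 people to
# treatment or placebo, then compare sickness counts between the groups.
# The outcome probabilities are invented for illustration.
import random

random.seed(0)
people = list(range(1000))
random.shuffle(people)
treatment, placebo = people[:500], people[500:]

def got_sick(person, treated):
    # Assumed 20% base rate of sickness, cut in half by the treatment.
    rate = 0.10 if treated else 0.20
    return random.random() < rate

sick_treated = sum(got_sick(p, True) for p in treatment)
sick_placebo = sum(got_sick(p, False) for p in placebo)
print(sick_treated, sick_placebo)
```

Because assignment is random, a gap between the two counts can be attributed to the pill; external validity is then the separate question of whether that gap survives outside the lab, bad taste, forgetful patients, and all.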
Yeah, and so as we sort of wrap up, and I think to build on this very quickly,
is that most of our research, Tom, is place-based.
We're comparing how a treatment, something we're doing, works in some places compared to places where we're not doing it. And so if it seems to work in
the treated locations and not in the others, and we see the drop in theft numbers or whatever, we might
have multiple measures, which is even better. We're seeing shrink numbers go down. We're seeing
sales numbers go up. We're seeing reported theft go down, apprehensions go down. You know, you have
multiple measures sort of telling the same story
then it looks like there's some internal validity, some causality here: this treatment seemed to affect this concept, this construct we're trying to work on, because it didn't happen in the control sites. And they were randomly assigned and so on, and we have pre and post, before and after, measures here.
The external, as Stuart said: hey, would what we found in this test of, say, 40 that got treated and 40 that didn't, can we extrapolate that to other places and times? That's external validity.
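The place-based, pre/post, treated-versus-control comparison described here can be sketched as a simple difference-in-differences. All the theft counts are hypothetical, and a real analysis would work at the store level with statistical tests; this only shows the basic arithmetic.

```python
# Minimal difference-in-differences sketch: pre/post theft counts in treated
# vs. control stores. All counts are invented for illustration.

treated = {"pre": 120, "post": 80}   # total theft incidents, treated stores
control = {"pre": 115, "post": 110}  # total theft incidents, control stores

treated_change = treated["post"] - treated["pre"]  # -40
control_change = control["post"] - control["pre"]  # -5
effect = treated_change - control_change           # change beyond the control trend
print(effect)  # -35
```

The control sites absorb whatever was happening anyway (seasonality, chain-wide trends), so the remaining drop in the treated sites is the part plausibly caused by the treatment, which is the internal-validity story being told above.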
So internal: hey, does there seem to be some cause-and-effect relationship, and how strong? And then external: great, can we do this elsewhere and probably find the same thing?
And so there are a hundred things we could talk about. I think probably an upcoming podcast will go a little more through, you know, exploratory versus confirmatory and all these things that are going on.
But I think today we'll kind of wrap it up, and suffice it to say that we are very, very respectful of our LPAP and law enforcement practitioners and all you're dealing with out there.
Our team is designed to support you by conducting rigorous research that helps you find better and better treatments that cause very few side effects, if any, and that are cost effective or there's a cost benefit there.
And do it in a way that everybody feels good about. And we do this collaboratively with the
60 retail chains that are in the LPRC community. So more to come. And again, we'd like to make
sure that all of our listeners and the LPRC community in its entirety here feel very,
very welcome to come to 2018 LPRC Impact, our conference that's held every year, typically at
the University of Florida on campus. We expect three to four hundred LPAP executives and others
to participate. We've got 10 learning lab breakouts that should be really neat where we're
going through some breaking research and discussing, great, how do we use this in our environments to make money, to grow our business, to make it safer and more secure.
And all kinds of other neat things: Mad Scientist, gamification, we've got two really cool events and so forth. But it's a great place to come,
spend time learning and sharing and having a blast with the retail LP and restaurant community,
as well as law enforcement.
So with that, Tom, I want to thank you for your co-hosting here.
I want to thank our guest today, Dr. Stuart Strom of the LPRC research team, and of course,
our producer, Kevin Tran of the LPRC.
Everybody have a great and safe week and stay tuned.
Thank you.
Thanks, everyone.
Thank you, everyone, for tuning in to this episode of Crime Science.
We also want to thank Bosch again for making this podcast possible.
If you would like to suggest topics for future episodes or provide feedback, please email kevin at lpresearch.org. See you next time.