LPRC - Episode 20 – The Scientific Method Applied to LP/AP ft. Dr. Stuart Strome of LPRC
Episode Date: March 1, 2019
Transcript
Hi, everyone. Welcome to Crime Science. In this podcast, we aim to explore the science of crime and the practical application of the science for loss prevention and asset protection practitioners, as well as other professionals.
Co-host Dr. Read Hayes of the Loss Prevention Research Council and Tom Meehan of Control Tech discuss a wide range of topics with industry experts, thought leaders, solution providers, and many more. In this special episode, LPRC research scientist Dr. Stuart Strome, before transitioning into his new career opportunity in the DC metro area, reviews various scientific
methods, the ways that LPRC scientists solve problems, and how these are both relevant and
beneficial to LP/AP practitioners. We would like to thank Bosch for making this episode possible.
Take advantage of the advanced video capabilities offered by Bosch to help reduce your shrink risk.
Integrate video recordings with point-of-sale data for visual verification of transactions
and exception reporting.
Use video analytics for immediate notification of important AP-related events, and leverage
analytics metadata for fast forensic searches for evidence and to improve merchandising
and operations.
Learn more about extending your video system beyond simple surveillance in zones 1 through
4 of LPRC's zones of influence by visiting Bosch online at boschsecurity.com. Welcome everybody to another episode of the
LPRC's Crime Science Podcast. Today I'm joined by my co-host Tom Meehan, Chief Strategy Officer
for Control Tech and longtime LP/AP practitioner as well. Our guest today is Dr. Stuart Strome of
the Loss Prevention Research Council, LPRC.
And what we're going to be doing is touching a little bit on using science for better results
in the asset protection or loss prevention field.
And so what I'd like to do again is, Stuart, if you could give us a brief background, sort
of how you got your training, and then we're going to go into a little bit about how that training has been helping us and, most importantly, our LPRC members, particularly practitioners.
Well, thank you so much, and thank you for having me.
So my background is in political science.
I received my PhD from the University of Florida in 2014.
My fields of study were international relations, in particular international security.
But I've worked on several different projects that are policy related, including food security issues.
And now, of course, most recently, criminology, loss prevention, asset protection.
So that's my background.
Right. Excellent.
So what we'd like to do today, Stu, is just ask some basic questions, but some questions around the use of the scientific method and what those components are
and how they help us out, how they help guide us.
So maybe describe the scientific method with a bent toward LP and AP. Absolutely. So, you know, the first thing I
want to say is science and the scientific method in particular basically just systematizes our way
of thinking. It allows us to approach problems in a way that we can sort of isolate causes,
that we can identify how to measure phenomena in a way that
can lead us towards answers, right? We always want to make sure that any sort of scientific
endeavor that we do is reproducible. We want to make sure that people, no matter where they are,
can take the experiment that we do and define the terms, measure the variables in such a way that
they can independently verify our results. But, you know, before I get started, I do want to say
that we shouldn't ever really restrict ourselves to thinking of science as this method that's a
step-by-step thing. Really, all science is, again, as I say,
systematizing one's thinking.
But with that in mind, the scientific method,
as it's sort of classically imagined,
consists of a few different parts.
The first one is identifying a research question.
This is just sort of a general broad outline
of what we're trying to do, what phenomenon
we're trying to study, what problem we're trying to solve.
This can be much tougher than it seems, right?
So if we look at this from an LP/AP perspective, you know, several times when I
facilitate working group calls, we'll go around the room and we'll ask folks, what issues are you facing, right? And of course, inevitably, it's always like, well,
we want to reduce theft in our stores. Well, that would lead us inevitably to the research
question, what causes theft? But this is entirely too broad a question. There are several different
factors that go into what causes theft. So we have to be able to sort of isolate what we're talking about here, right? There's some human factors that are causal factors for theft,
right? Things like, you know, how good is your management team? How engaged are your employees?
There are environmental factors. Are your products, you know, are your targets hardened,
right? Do you have things like cameras? Do you have product protection measures? So we
want to always try to narrow down the research question to a scope that's manageable to study.
So that's the first thing. And narrowing down that research question is sometimes more of
an art than a science. From there, we want to do what's called hypothesis generation.
Moving from the research question, which is more general to hypothesis generation,
hypotheses are essentially associative or causal statements
with measurable independent and dependent variables, right? So I can give you
an example of a bad hypothesis in loss prevention asset protection, and then I'll give you an example
of a good one. So say we're trying to see if, let's say, product
protection measures stop theft.
A bad hypothesis would be, oh, product protection measures work.
Well, that's horribly vague.
What do we mean by work?
And for that matter, what do we mean by product protection measures?
Are we looking at this on sort of a store basis, a product basis?
So a better hypothesis, we really want to clarify what we mean here, right?
So a good hypothesis, maybe not a perfect one, but a better one.
Stores using product protection measures have lower rates of theft for product X than stores without product protection measures, right?
We have a sort of an independent variable here, something that we're varying, stores
with or without product protection measures. And we have
something that we're measuring, a dependent variable, right? In this case, it would be
rates of theft. And this is something that we can measure. We can measure shrink. You know,
we would supposedly have statistics on this in this experiment, right? So again, you always want to have that independent
variable, something that's causing the dependent variable. So the independent variable is the causal
factor, the thing doing the causing, and the dependent variable is the thing that's being acted upon,
right? And from there, it's conducting the experiment. We'll get into this more in a bit. Experimental
design can be somewhat treacherous. It can be difficult to eliminate all the potential factors
that may affect the outcome or the dependent variable and just isolate that one thing that
we're trying to study. So you design an experiment, you conduct the analysis, right? And then at the end, and this is key, and this is
why the scientific endeavor is continuous, right? You use those results to reinform your initial
research question or the initial theory that you used, right? Let's just say, for example,
you found that product protection measures resulted in lower rates of theft for stores
that use them as compared to stores that didn't.
That's cool. Great. We know that. But do they work on all different products?
Why would they work on some products and not other products? What about the implementation
of them, right? Do they work better in certain stores than other stores? They work in bigger
stores, smaller stores. This brings to the fore all other types of questions.
And that's the sort of scientific endeavor.
It goes in that circle.
It's circular.
It always is re-informing.
You're using results to re-inform and refine your theories and hypotheses.
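To make the independent and dependent variable idea concrete, here is a minimal sketch, in Python, of the kind of comparison the hypothesis above implies. The store counts, theft-rate numbers, and the choice of a two-sample t-test are illustrative assumptions, not the LPRC's actual data or analysis.

```python
# A minimal sketch of the hypothesis test described above, using hypothetical
# per-store theft-rate data for product X. All numbers are made up.
from scipy import stats

# Theft rates (units lost per 1,000 sold) for stores WITH product protection
# measures (treatment) and WITHOUT them (control).
treatment_rates = [4.1, 3.8, 5.0, 4.4, 3.9, 4.7]
control_rates   = [6.2, 5.9, 7.1, 6.5, 5.4, 6.8]

# Independent variable: with/without protection. Dependent variable: theft rate.
# Welch's two-sample t-test asks whether the difference in means is larger than
# we would expect by chance alone.
t_stat, p_value = stats.ttest_ind(treatment_rates, control_rates, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Stores with protection show significantly lower theft rates.")
else:
    print("No significant difference detected; revisit the hypothesis or design.")
```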
So what we'll do is, certainly not for Tom's sake, but for maybe mine, we'll lighten it up a little bit.
And what we'll do is talk a little bit about, okay, the LPRC was founded by 10 major retailers,
as we've talked about before, in the year 2000. And the idea was, you know what, we're losing
billions here collectively as an industry. We're having people injured, even killed,
in crimes that are occurring on our properties.
Some started here.
Some ended up here.
And at the end of the day, we're all trying to get better and do better and have better results. But our main tools right now are kind of just our own experience and benchmarking with each other.
There's got to be other ways that we can
add to that, and so that's where science comes in. And as, uh, Stuart, you've been talking about, that
means: I think this is a problem, I'm going to define it,
and I'm going to come up with a question that's hopefully answerable or addressable, and then I'm going to test it.
So, Tom, I was going to go over to you and get a little insight.
You know, what's something that you're familiar with that LPRC has tested,
and how did that help you?
Let's go back to your practitioner days at Bloomy's or the Home Depot or other places.
Yeah, so the one that comes to mind is probably one that is the ePVM or the PVM study. And I mean, I feel like it was years ago that I first heard about it
and the evolution. And I think it actually did start partially when I was at Home Depot,
you know, I guess, 18, 19 years ago, where there was some talk about, hey, we use these big
monitors, what happens if we place it differently, and we didn't know what to do. And we,
for lack of better words, were throwing everything at the wall and not really being
able to identify what worked and what didn't work. So fast forward a little bit, when the Loss Prevention Research Council was formed
and really took this on, I actually got to see that iteration and how it affected Home Depot.
And then switching over to Bloomingdale's, how it affected Bloomingdale's, where at Bloomingdale's
in an upscale environment, we were very, very cautious to put in public view monitors because
it was upscale and what the perception would be and how it would affect the design. And using the knowledge base of the Loss Prevention Research
Council, we identified that using, in our environment, the smaller monitors at eye level
were more effective than these big, giant, ugly monitors. So we were able to implement them in
high-risk stores. And really, using the studies and information at the LPRC,
introduced it to senior management and explained that this isn't just, you know, we're not just
shooting from the hip. This is something that's been tried and tested. And if we do this, it will
have a positive impact on shrink and shortage and ultimately have a positive impact on customer
service because the product will be here to sell. That's the one that I think is the best example,
mainly because I remember what it was like before the LPRC was involved and us trying all these
things and having good results, but not necessarily knowing what worked, what actually worked. Did the
monitor work? Was it placement?
Was it the attention that we were giving to the store? We really didn't have an idea. And
I think in general, LP/AP practitioners are really good at solving complex problems. But I
think when you use the menu of everything that's available and you don't know exactly what helped
the problem, you spend extra
money and extra time. So that ePVM one was really the one that resonates. I mean, there are so many
that I can think of, but that's probably the one that is the most realistic and the most tangible
for me. And again, like I said, remembering what it was like before and after and the difference
of going, okay, well, this worked here. This is what the study showed.
This is what the offender said.
Let's do it.
Fantastic.
I appreciate that, Tom.
And that's a big part of what we do, right? You know this.
We come up with a kernel of an idea as Stuart's been talking us through.
And then what we're going to do is we'll test that and see what happens.
But we're not going to give up if we don't see a result or a significant result or a good result. So in that
case, what we're trying to find out, of course, is how do we get the result we're looking for?
So that's where we can start to dial things in a little bit. So let me go back over to you,
Stu, and let me ask you, you know, based on some of that initial discussion, you know, what are some common research methods?
You kind of touched on experimental design.
That's where we're going to decide what we're going to do, where we're going to do it, and when we're going to do it, and even a little bit about how we're going to do it and measure it.
But what are other methods as well?
Well, so, you know, there's a few, as you mentioned.
So the first, I'll start with, I guess, what can generally be considered the most rigorous one.
But, again, I always want to state that you choose the research design that's appropriate to the question you're asking.
And that is sometimes an art rather than a science in and of itself.
But what we aim for is the most rigorous research design. And that's the research design that can isolate that causal factor as much as possible and eliminate the effect of all the other factors that could be
causing that dependent variable. So for example, if we want to look at, you know, do product
protection measures reduce theft? Well, we want to eliminate all the other possible things that
could cause theft, that could sort of influence theft,
right? Maybe there are differences between different stores in terms of management or
employee engagement. Maybe there are differences in terms of the overall crime rate of the stores
themselves. Are they in a more crime-heavy area, right? Maybe there are socioeconomic differences
between the stores. So the best design to eliminate all these factors
and just isolate that one factor, in this case product protection measures, would be what's
called a randomized controlled trial. And a randomized controlled trial, basically what it
does, it isolates that one factor by randomly assigning the treatment, the product protection measure in this case,
to the subjects, the different groups, right?
And what that does, it eliminates, by basically randomizing it,
it eliminates all the other confounding factors.
So if it's done right, at least theoretically,
it should be able to account for all those other factors that may influence theft and just leave the effect of that one individual factor, the product protection measure, to be analyzed.
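As a rough illustration of the random-assignment step described here, the sketch below randomly splits a list of stores into treatment and control groups. The store IDs, the 60-store count, and the 50/50 split are hypothetical.

```python
# A minimal sketch of randomly assigning a treatment (e.g., a product protection
# measure) across stores. Store IDs and the 50/50 split are hypothetical.
import random

store_ids = [f"store_{i:03d}" for i in range(1, 61)]  # 60 hypothetical stores

rng = random.Random(42)          # fixed seed so the assignment is reproducible
shuffled = store_ids[:]
rng.shuffle(shuffled)

half = len(shuffled) // 2
treatment_stores = sorted(shuffled[:half])   # receive the protection measure
control_stores   = sorted(shuffled[half:])   # operate as usual

print(len(treatment_stores), "treatment stores;", len(control_stores), "control stores")
```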
Now, a randomized controlled trial, sometimes called a full experimental design,
these are great. They're also very expensive sometimes. They also often require what's called
a large sample size, several different stores, so you can basically eliminate all the other
confounding factors. The idea is the groups that you choose should only differ on one thing,
and that's the assignment of the treatment. But sometimes we can't do full experimental designs
because they're too expensive, they're too time consuming, and sometimes they're fraught with problems of their own, but we'll
get to that in a moment.
So sometimes we'll choose what's called quasi-experimental designs.
And basically this is something similar to an experimental design, although usually it's
a non-random selection of participants or somehow non-random assignment of treatment.
And what we'll try to do with quasi-experimental designs is through the selection of our subjects,
in this case, let's say stores, we try to account for all the other confounding factors.
And again, when I say confounding factors, I mean things other than the treatment,
in this case, product protection measures,
that could be causing variation, causing that dependent variable to vary.
So quasi-experimental designs are a really good tool in our toolbox because they can be done
more affordably. They can often be done quicker with fewer resources.
Oftentimes, you know, retailers don't want to trial a solution randomly in 300 different stores.
It's just, it's too much, right? It's the money. Exactly. It's the money. It's the risk sometimes, too, right? It's the risk. So quasi-experimental designs are a way to get around that, right?
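One common way to build a quasi-experimental comparison when random assignment isn't possible is to match each treatment store to the most similar control store on known confounders. The sketch below is a hypothetical nearest-neighbor match on store size and a local crime index; the feature names and numbers are assumptions for illustration, not a specific LPRC design.

```python
# A hypothetical sketch of matching treatment stores to similar control stores
# on observed confounders (store size, local crime index). Data are made up.
import numpy as np

# Rows: stores. Columns: [square footage (thousands), local crime index].
treatment = {"T1": [45, 0.8], "T2": [80, 0.3], "T3": [60, 0.6]}
controls  = {"C1": [50, 0.7], "C2": [85, 0.2], "C3": [30, 0.9], "C4": [62, 0.55]}

ctrl_names = list(controls)
ctrl_matrix = np.array([controls[n] for n in ctrl_names], dtype=float)

# Standardize each feature so size and crime index contribute comparably.
mean, std = ctrl_matrix.mean(axis=0), ctrl_matrix.std(axis=0)
ctrl_z = (ctrl_matrix - mean) / std

for name, feats in treatment.items():
    z = (np.array(feats, dtype=float) - mean) / std
    distances = np.linalg.norm(ctrl_z - z, axis=1)      # Euclidean distance
    match = ctrl_names[int(distances.argmin())]
    print(f"{name} matched to control {match}")
```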
Oftentimes, this is a type of quasi-experimental design.
You have what's called natural experiments.
Natural experiments are kind of cool because sometimes, just purely by chance,
you will have a factor that you want to sort of study randomly vary, right?
So let's just say, for example, we get these a lot in political science, and it's applicable to
criminology as well. Let's just say you have a town that sits on the border between two states,
and those two states have different laws. Well, now you basically
have something that varies only arbitrarily because there's a line that arbitrarily goes
through it that says it's Florida versus Georgia. And if the laws differ in those two states,
you can basically assess the independent effect of those laws while controlling for everything else.
So those are the kind of experimental designs.
Sometimes we want to know a little more about things like motivations of people. And that's
where stuff like survey research comes into play. Survey research is good at basically,
you know, identifying why people do what they do. They can sometimes be just as difficult to implement as experiments.
But they're good for studying humans and studying things like motivations.
But they can also be applicable to, you know, the same sorts of issues
as experimental designs as well. So there's all sorts of different designs that you can choose. Sometimes if we want
to basically undergo a more what's called exploratory analysis, we want to maybe find
out what the problem is itself. Sometimes we don't know that when we're doing scientific research,
we're trying to better understand the problem. We'll do things like focus groups or more what's called qualitative research or even interviews. This can be useful when, you know,
the phenomenon isn't amenable to sort of what's called large-N survey research, where you
want more in-depth answers to questions. And you really want to sort of, you know, explore an issue by basically asking follow-up questions.
So we can do a lot of things like, sometimes we call them elite interviews, in-depth interviews,
or sometimes it's called focus groups. I know a lot of retailers are probably very familiar
with focus groups, especially the marketing departments. They're used quite often. So,
you know, I just want to conclude by saying sometimes you'll hear things like, well, the randomized controlled trial is the best thing.
It's the most rigorous research design. And that is true in a lot of cases. But we also want to
be careful to make sure that we choose the research design that's appropriate to the
questions that we're answering. And therein
lies the rub. So that's the important part. Excellent. Tom, any questions you've got right
now or anything to add? There's different ways to do research. What are some of the ways
in your background that retailers have used on their own to find out a little more about a problem or a little more about a solution?
Yeah, I think when you don't know about the LPRC or you don't have exposure to
any of the methods of it, a lot of it is trial and error and trying to create a return on
investment. And then I think the other piece is, you know, going with your gut or your emotion.
So taking your experience and, you know, as I always used to say, coming up with probably
a relatively good hypothesis that has a lot to do with environment, but then not really being able to
run it through. I know in our past, whether involved with the Loss Prevention Research Council or not, most big departments have folks that are data guys.
What I would say is there's probably a good split: data folks who are the really smart guy on the LP team, who becomes a quasi-data expert by researching and reading and Googling.
And then you have some departments today that employ data scientists or mathematicians in their department because they realize that there's a benefit there.
I think both of those, whether you do it on your own or hire a professional, I think both of those
are great. But I think one of the disadvantages is your sample size is limited. You only know about your environment.
You only know about your customer.
And you really – you lack some of the depth.
And when I think about the Loss Prevention Research Council, one of the things I always go back to is whether you do a full-fledged research project with the Loss Prevention Research Council, or you just go through the studies that
are out there, you generally can find some real, real data and fact-based research to help make
your decision better. And I found in my environment, I was fortunate. At one point, we did
have a data scientist and we did have a mathematician. And all that did was help us work more closely with the
LPRC to say, okay, this is what we're seeing in our environment. What's happening everywhere else?
And what are some of the tools, tricks, or behavioral methods that can help curb the theft?
The other thing I would say is while, you know, when you think of the base of
our membership or just retail in general, while some retailers come from different planets, in my opinion, in my experience, the way people steal doesn't change much.
So if you have an upscale environment or a convenience store, the theft itself, the behavior behind it, and the deterrent methods don't change a lot.
Obviously, you know,
there are some physical security things that are different, but I think that that's something that I continuously go back to. So let me kind of build on that, if I could, guys. And in this case,
talk a little bit about, you know, Stu, we start somewhere. We think something's happening for a reason.
And many people call that a framework.
That's what we generally do.
But also, obviously, in the research world, the science world, we call it a theory.
And I know it's supposed to be in the way we use it, the way things are happening.
And so we're, all of us, all around the world, trying to understand how the
world really works. That's a theory. But could you go into a little bit more about what a theory is
and then why would we want to use a theory to solve a problem instead of just trying to solve
a problem? Yeah. So the first thing I'll say is I don't think it's possible to think of a problem
absent a theory.
A theory is something that gives us a framework on how to think.
Even if we assume we're thinking without a theory,
in the background we have some basic assumptions about how the world operates.
And really all I could say about a theory is there's no standardized definition.
It's perhaps impossible to come up with one.
But what theories do is they simplify
a very, very, very messy world
by telling us what things we should focus on,
what things we should foreground,
what things we should put in the background.
Because if we take something as complex
as a retail environment, right,
there's so many things to pay attention to.
There's so many things that you have going on.
Even with something as relatively narrow as retail theft, there's so many things going on.
What are individuals' motivations?
Why do they steal?
Are there environmental factors that go into it?
Those motivations themselves, are they rational? The would-be thieves, are they all similar or are
they different, right? Is it nature or is it nurture, right? How do they think about it? There's
so many questions that you have to ask. A theory just gives us a basic, basic framework how to
think about these things. So, for example, we'll take Tom.
You said that for the most part, all would-be offenders are pretty much the same.
So if we thought about that question, we'd ask why.
Do they have the same motivations?
Are those motivations, let's say, economic motivations, right?
And that would be something like what's called rational choice theory, where basically rational choice theory says, look, we're all kind of economic actors. We all make these cost-benefit
analyses, right? And if you want to stop would-be offenders, you just have to alter that. You have
to alter that cost-benefit analysis, right? Now, to be clear, there are competing
theories to rational choice theory.
The more sort of recent behavioral psychology theories challenge rational choice theories
by saying, we don't always operate according to this cost-benefit analysis.
Sometimes we have trouble calculating this cost-benefit analysis.
And the thing is, we do it in predictable ways.
That in and of itself is a whole different theory. It comes
with a whole different set of assumptions about how we operate, how our brains work. So theory
is just something that gives us a baseline. In this case, a baseline of human behavior, right?
A few assumptions that we, I don't want to say necessarily take for granted, but a few assumptions
that we have to start with about human behavior. Now, of course, those assumptions are always
subject to revision. If you have a scientific study that places those into question, that's
how rational choice theory got placed into question. You know, a few guys in the 70s started saying, it seems to me that people don't always
operate according to this cost-benefit analysis. We assume they do,
but is that assumption warranted? So theory, again, just sort of foregrounds a few things,
puts a few things in the background about our human behavior. And, you know, the theory by which you operate is super
important. And oftentimes we just don't consider it. We don't interrogate ourselves and ask what
theories operate in our background, theories of human behavior. Excellent. So let me go back to you, Tom. Think about, if you would, how you in your past have formulated some kind of hypothesis.
It may not have been a full-on, full-blown theory, much less a rigorously supported theory with all kinds of evidence from different researchers and places and times.
But you're trying to make sense of the
world. So, you know, you looked at it and like Stuart said, first thing I want to do is I want
to kind of think about, well, what's going on? Why? You know, who, what, when, where, why, and how?
That's a theory. So that I can do something about this rather than I'm just going to start doing
stuff or I'm just going to do what the guy down the street's doing and then be done with it.
Can you kind of throw out a time, Tom, that you've done that?
Yeah, I don't know that I'm cut from the same cloth just because I've worked with a lot of really smart people. And I was really lucky to, over the years, have just phenomenal leadership everywhere I went.
But I think it starts with a problem, you know,
and then having an actual problem and identifying.
One of the things I always said, is it actually a problem
or is there emotion behind it?
So even in my earlier years,
there was a perception of a problem that wasn't.
So once, you know, identifying that there was an actual problem,
going out and gathering that information.
And for lack of a better word, sometimes it was getting into the weeds of what's causing that problem.
So going out and collecting information, information from a number standpoint, inventory information, talking to people,
and then looking at that info or data.
And when this started, Excel wasn't fancy.
When I started, the collection of data was very different. It really was go out with a clipboard and do an audit and then write it down
and then formulate that data and communicate, okay, this is what we're seeing over there.
The observation piece, you're always looking for something first.
So that's where looking at the problem.
Then after getting all of it, talking to everybody and then coming up with a conclusion of what you think causes the problem.
I mean, that's the hypothesis, period.
So it's really, is there really a problem, looking at it,
and then collecting the data, and then drawing a conclusion of, okay,
this is what I think is causing it.
So if it's refund fraud, which is a big one,
I can remember early on when I worked for Home Depot,
we still gave cash back without a receipt.
So, when I started, I remember
us looking at that and it was pretty, you know, okay, we have a problem here. And, you know,
my thought is if we stop giving cash, that'll solve some of that problem because,
you know, we're breeding an environment. But over the years, I think adopting methods like the SARA method, or if you want to use a Six Sigma method,
or methods that you have a standardized approach, really changed the way I did things in my daily
work life, even before I was in a senior leadership role of really having a method or a process. And
whether you decide that you want to, you know, go with
a more advanced method or not. You know, I took Six Sigma because I had to.
But if you wanted to take a simplified scientific method of observation, question, formulate
hypothesis, do an experiment, analyze your results and then build a model.
That was something we did at Bloomingdale's without anything really scientific, okay?
Just basically answering all those questions. And, you know, the model was not
really a model. It was more of, okay, this is what we're going to try. And that's really where
the Loss Prevention Research Council changed kind of the way I thought about things.
And, you know, for years, I kind of mixed data with evidence until I really started getting ingrained with the Loss Prevention Research Council, that just because you had data that said it was something didn't actually mean that it was something.
So I think in the first 10 years of my career, I would have said this data means this is what's occurring. And really, now
I look at it completely different. And I would say that the biggest changeover was regardless of what
method, we used a method. So we used a method, and that allowed us to be consistent, it allowed us to
document things, and it allowed us to measure our results. I think one
of the biggest things early on in my career and even later on when there was necessity is we did
something, we solved the problem, but we weren't ever really able to measure what actually worked.
And a good example is, you know, I worked in an area that had an extremely high shrink in a
department and we threw the kitchen sink at it.
I mean, invested in technology, invested in people, and we virtually eliminated the shrink.
But when we went back after the fact, having saved a million dollars, and said, huh,
which of the 35 things we did actually contributed to the shrink reduction? Was it 20 of them? Was
it five of them? Was it one of them?
And that was kind of the awakening of going back to regardless of what method you choose,
you have to have a consistent method so that you can measure, run through, and make sure that,
hey, I'm running through. Because I could create an ROI model easily on that. I could say we
eliminated the shortage, we did all these things. But the reality is it wasn't very efficient. So going back and I'm not a scientist. I work with you guys regularly. So I used that methodology with everything. And today, I kind of mix things together.
But the school of thought today is whatever I do, I want to make sure that I have an adequate amount of information to collect so that if I'm making a decision, it runs through.
And I would argue that this works in every environment, you know, and now that I'm
out of retail and in manufacturing, it's exactly the same way where, you know, we're taking the
same approach. We don't go out and develop something because, you know, we think it's a
good idea. We go out and develop it by gathering information, by looking at it, experimenting with
it, looking at what our results are, and then, you know, creating a model
and deploying it. So I think long-winded answer, but that was, you know, 20 years in five
minutes. All right, good. I appreciate that feedback. So we've got a litany of theft and
fraud and violence issues that are here. They continue to change and evolve.
And I think that's, if not a cry for help,
at least a reach-out by the 70-plus major retail chains
that are part of the LPRC community now.
And that the rigor, as Stu's been talking about,
what we're doing is we're trying to come up with a little bit of a systematic way to think about things, as we talked about, and to find out things.
And even to report them, we're working a lot on that.
How do we get that information to you all in a much, much more usable, easy-to-go-to-work-with format, that science-to-practice, S2P, if you will.
But what I'm going to do is kind of, we'll talk about evidence-based practice. We talk about that a lot on this podcast and in our
practice and work that we do with the retailers and others. You know, Stu, I'm gonna go back over
to you. Can you give me a quick look-see at a project that we've done? Actually, this project was one of several that were done
in a cluster. It was an experimental design, a randomized controlled trial in this case.
But we've done a lot of looking at it and are now writing it up, writing up the study to submit
to a peer-reviewed journal. That article has now been accepted with some changes that they wanted made,
which is very typical to get it accepted.
You never get a full acceptance.
That's right.
They just won't do it.
Getting accepted is huge because that means the editor was good with it,
and that means that three reviewers, three peer reviewers were good with it,
each with some comments about, hey, I'd like to see
this or a little of that, or change this, and things like that. So let's kind of go through the project,
Stu, and tell us a little bit about that project and what we did here. Absolutely. Well, yeah, so the
origin of this project, and some of the data is a few years old, but several of our retailers were interested in sort of identifying whether or not these product protection wraps worked.
And a variety of products, particularly ones that were high theft or hot products.
So think to yourself, okay, put them in stores, see if they work.
Well, it's a little more complex than that.
There's all sorts of threats to what we call threats to validity.
So we had to make sure that first off, we randomly chose stores, and we had three different
retailer types, a home improvement store, a big box store, and a grocery store. And what we did was we said,
okay, we want to understand whether or not these product protection wraps work,
not only for these different stores, but different product types that the stores carried.
So the first thing we did was we took 20 stores within each of those verticals, each of those sort of types of stores, right?
So big box stores took 20 of those, 20 grocery stores, and 20 home improvement stores.
And basically what we said was we're going to actually further
cut these groups up into high theft stores, medium theft stores, low theft stores.
Now, the reason we did this was we wanted to make sure that our research didn't suffer from what's called regression to the mean.
Regression to the mean is when, let's just say, you choose only the highest theft stores
to conduct your experiment.
And then you put your treatment in those high theft stores, and you notice that, oh, we put the treatment in,
theft went away. Well, could have been the fact that your treatment worked. It also could have
been the fact that, you know, sometimes stores are just going through a bad period. Maybe they're
just having a problem with management. Maybe they're just having a problem with turnover.
Certainly never in any of Tom's stores, by the way. We just want to get that clear.
But we want to make sure that we're not suffering from what's called regression to the mean,
where you may have a temporary spike, basically sort of outliers that revert back to their mean state.
So we wanted to make sure to have high theft stores, medium theft stores, low theft stores,
and to see if they operated the same way among all those different types of stores.
So then from within those, so, you know, within those 20 stores from each vertical,
we further split them into like seven or six each, high theft, medium theft, low theft.
And then within those, what are called strata,
we randomly chose the stores to receive the treatment, in this case, the product protection
wraps. Now, we wanted to be very careful, as you always do with experimental designs, to make sure
every store that receives the treatment receives it in the same way. Do the folks know how to use
the product to properly put the product protection wraps on?
Are they doing it all in the same way?
Are they doing it for the same products?
Are they actually keeping the product protection wraps on, you know, through the entirety of the experiment?
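A rough sketch of the stratified assignment described above: split one vertical's 20 stores into high-, medium-, and low-theft strata, then randomize to treatment or control within each stratum. The store counts and baseline theft figures are hypothetical placeholders, not the study's actual data.

```python
# Hypothetical sketch of stratified random assignment: stores are grouped into
# high/medium/low theft strata, then randomized within each stratum so that
# treatment and control groups are balanced on baseline theft.
import random

rng = random.Random(7)

# Made-up baseline theft rates for 20 stores in one vertical.
stores = {f"store_{i:02d}": rng.uniform(1.0, 10.0) for i in range(1, 21)}

# Rank by theft rate and cut into three roughly equal strata.
ranked = sorted(stores, key=stores.get, reverse=True)
strata = {"high": ranked[:7], "medium": ranked[7:14], "low": ranked[14:]}

assignment = {}
for level, members in strata.items():
    members = members[:]
    rng.shuffle(members)
    half = len(members) // 2
    for s in members[:half]:
        assignment[s] = ("treatment", level)   # receives the protection wraps
    for s in members[half:]:
        assignment[s] = ("control", level)     # operates as usual

for store, (group, level) in sorted(assignment.items()):
    print(f"{store}: {group:9s} ({level}-theft stratum)")
```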
The other thing that's very important, that we wanted to guard against, was something called the Hawthorne effect. The Hawthorne effect is when your subjects act
differently because they know they're being measured, right? So let's just say we went to
all these stores and said, you are the subject of an experiment and we're trying to reduce theft by
putting these product protection measures on your products. And then it just so
happens that the employees started paying more attention. They maybe started being more vigilant,
right? They started, you know, approaching folks asking, can I help you? Instead of just, you know,
letting them do what they wanted to do. Well, in that case, you would see a reduction in theft in
those stores, but it wouldn't be the result of the product protection measures. It would be the result of that increased employee engagement. So we wanted
to guard against all these things. Now, to be clear, it's well nigh impossible to do so because,
you know, you can't be in, you know, altogether we use 60 stores, 60, excuse me, 30 treatment stores,
30 control stores. But we wanted to make sure at the very least to implement the design
in the same way for every store and try to keep as quiet as possible about the fact that this was an
actual experimental design. And from then on, you know, a lot of it was the analysis without
wanting to bore you. What we wanted to do was we wanted to, even with all the measures that we put in, we wanted
to guard against regression to the mean.
So we used a certain type of statistical analysis that was able to detect the effect of the
treatment as well as the effect of possible regression to the mean.
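The specific statistical model isn't named here, so as one hedged illustration only: a common way to estimate a treatment effect while accounting for regression to the mean is to regress post-period theft on a treatment indicator plus baseline (pre-period) theft. The variable names and numbers below are hypothetical, not the study's actual data or model.

```python
# Hypothetical sketch (not the study's actual model): regressing post-period
# theft on a treatment indicator while controlling for baseline theft, which
# helps separate a real treatment effect from regression to the mean.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "store":      ["s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8"],
    "treatment":  [1, 1, 1, 1, 0, 0, 0, 0],        # 1 = received the wraps
    "pre_theft":  [9.0, 7.5, 4.0, 6.2, 8.8, 7.1, 4.3, 6.0],
    "post_theft": [6.1, 5.0, 3.1, 4.4, 8.2, 6.8, 4.5, 5.7],
})

# The coefficient on `treatment` estimates the effect of the wraps; including
# `pre_theft` as a covariate adjusts for stores drifting back toward their mean.
model = smf.ols("post_theft ~ treatment + pre_theft", data=df).fit()
print(model.summary())
```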
And after all that, after all of that, we still, of course, had several reviewers that identified issues.
And that's great.
I mean, the great thing about peer-reviewed articles is you have a lot of really smart people that are helping you really refine your study and think about things.
And some of the things that, you know, Read, you were super helpful on was we noticed that some products witnessed
better results than others. For example, I think it was drills, electric drills,
and the reviewers wanted to know why that was. And there's several possible reasons. And this is
something for another study probably, but Read, I know you did some extra work. What were some of
those possible reasons that you identified? Yeah. So, I mean, it's the type of product, as you said, the form factor, the size,
the desirability. And so, again, going to a theory we use that's come from situational
crime prevention and rational choice theory. Those are two operating frameworks or theories
that we use from Dr. Clarke. And that just talks about why something is more frequently stolen
or more desirable to steal or take than another item.
And so we're going to look at, you know, CRAVED is what we call it,
the concealability of it relative to other products,
and then the removability, but also the attractiveness,
how desirable it is, how readily converted to cash it is,
or how desirable to own, to
show off, or to stand out, or to impress, or whatever I'm trying to do here. So we looked at some of
those factors that come from, or that are explained in part by, the CRAVED model. So again,
a practical use of theory to help us understand why. Why did it work better here? I think the other part
of it, though, is the technology might interface better with that product. And so it's not only
why, well, this is a very highly stolen item and another thing we protected really wasn't,
because all of them were really readily stolen. But this one, those products were better for a
wrap, you know, because the wrap would go around, would stay on it, was less readily removed by trickery or some tactic that they've developed out there and then share on Reddit or elsewhere.
And so those are just some examples we go in and we're trying to understand.
It's always psychology and it's so much situational.
But that's where our job is, to sit here like Dr. Strome is talking about and really think these things through, draw them up, talk to real-world offenders,
talk to you all, the practitioners.
Back to you.
Absolutely, yeah.
And again, Dr. Hayes brings up some great points.
And I should have included this.
So you mentioned that they were more effective on drills.
One of the other products on which they weren't as effective were Crest white strips.
And those aren't, you know, the packaging isn't as rigid.
You can just sort of, you know, you could open the package.
You could sort of take it out.
You take them out. It's something you can just slip in your pocket.
You don't have to worry about the product protection wrap.
So absolutely, yeah.
And that was, and you know, again, it was really,
the reviewers made us think.
Right.
And that was really helpful.
And it helps our retailers too.
Because when we start to think about these things,
it's definitely helpful for them. If you think about interactions,
that's one thing that Stuart's been talking about.
That's that interaction of the product
and the psychology around why people
steal that more frequently than others, and then there's the psychology
around the tactic or that technology, but then there's that interface, that
interaction, of the two. And so, as you're talking about, the packaging is more
malleable, the product's more readily removed and concealed, whereas to
walk down the aisle with a cordless combo tool kit is pretty tough. It's tough to walk with five or six pieces of technology or of, you know, those construction items, compared to sticking some white strips in your pocket.
So that's part of the interaction.
Yeah.
All right.
So what we'd like to do today is sort of draw to a close. I'm hoping that there was something out there for everybody. What we're not trying to do is confuse or overwhelm, but rather help people understand why the LPRC was founded, and has, for now over 25 years with the National Retail Security Survey and other research, supported this mission-critical group called AP and LP.
And that is because by all of us working together, better understanding what we're up against,
better understanding what we might cost-effectively, ethically, and more readily do about what our problems are,
we can probably get along faster, better
than otherwise.
So that's what the LPRC's mandate and mission are, is to solve real world problems, but
do it in a little more rigorous way.
And with the LPRC community, do it in a very collaborative way.
So I want to thank Dr. Strome for his time and expertise and the projects that he's been
working on in the supply chain, in the store, in the community. So he's been working in zone three,
four, five, even though I always call him the zone five guy, because he's always thinking about
on a more macro level, what's happening out there in the community. And can we measure that better
and better? And can we do something to affect what's going on out there before something happens here on that spot? So on behalf of my colleague, Tom Meehan, on behalf of our producer, Kevin Tran, I want to thank everybody for dialing in today again to another episode of Crime Science.
Thank you so much for having me.
Thanks for listening to the Crime Science Podcast presented by the Loss Prevention Research Council and sponsored by Bosch Security. If you enjoyed today's episode, you can find more Crime Science
episodes and valuable information at lpresearch.org. The content provided in the Crime Science Podcast
is for informational purposes only and is not a substitute for legal, financial, or other advice.
Views expressed by guests of the Crime Science Podcast are those of the authors
and do not reflect the opinions or positions
of the Loss Prevention Research Council.