LPRC - CrimeScience Episode 65 – Science & Evidence-Based Policing featuring Dr. David Weisburd (George Mason University) Part 1
Episode Date: February 17, 2021
Dr. David Weisburd, Distinguished Professor at George Mason University and Executive Director of the Center for Evidence-Based Crime Policy, joins Dr. Read Hayes on LPRC CrimeScience to discuss science in evidence-based policing, the value and challenges of real-world research, place and crime event clustering, and much more. This is part one of two of the discussion.
Transcript
Hi everyone, welcome to Crime Science. In this podcast, we aim to explore the science of crime and the practical application of the science for loss prevention and asset protection practitioners, as well as other professionals.
We would like to thank Bosch for making this episode possible.
Take advantage of the advanced video capabilities offered by Bosch to help reduce your shrink risk.
Integrate video recordings with point-of-sale data for visual verification of transactions and exception reporting. Use video analytics for immediate notification of important AP-related events and leverage analytics metadata for fast
forensic searches for evidence and to improve merchandising and operations. Learn more about
extending your video system beyond simple surveillance in zones one through four of
LPRC's zones of influence by visiting Bosch online at boschsecurity.com.

I want to welcome everybody to another episode of Crime Science the Podcast. Today I'm with Dr. David Weisburd. Now, Dr. Weisburd has a few different appointments, but particularly is a Distinguished Professor at George Mason University, and also the executive director there of the Center for Evidence-Based Crime Policy, which is really going to lead us into where we want to go. But interestingly enough,
before we started recording, I asked a couple of questions, but he's also the Walter E. Meyer
Professor of Law and Criminal Justice at the Hebrew University Faculty of Law in Jerusalem.
So really neat and exciting in that way. He's also a chief science advisor at the Police Foundation. And so there's some nice ties in there. And of course, a fellow at the ASC,
the American Society of Criminology, and the Academy of Experimental Criminology, which again,
is a particular focus here today. But there are some really neat and inspiring awards and prizes that are very meaningful to those of us in criminology. And of course, they always start with the Stockholm Prize in Criminology. But I also like the Israel Prize, which is such a high civilian honor. In fact, I understand, the highest. And then we could go on and on, but there are also some awards around the Campbell Collaboration, which is critical to bringing together experimental research and making sure that we've got high-quality information for people
out there to act on. So what I'd like to do, David, if I might, is go first to this description of
science or evidence-based policing. You may or may not know that my father and grandfather were physicians. I started reading their journals, believe it or not, and hearing the term evidence-based practice as I was growing up, and seeing that my dad, being in primary care, every time I'd go into his office, he had all the journals open with paperclips, making notes. And of course, he had to read every journal. So I had an appreciation, but didn't link it to the area we're involved in now. Can I go over to you, David, if I might, for your take, your description of evidence-based policing as it stands now at the end of this bizarre year called 2020?

The idea of evidence-based policing, as you noted, is drawn in part from medicine. Larry Sherman first raised the idea of evidence-based policing in a series at the Police Foundation that I was actually developing at the time, called Ideas in Policing.
So it comes from the medical idea in part, and the idea simply was that we ought to be making
decisions about practice based on evidence, not just on gut feelings, not just on what happened
before in tradition, but we should be making informed decisions using science.
And I want to emphasize that idea of using science. And the reason is because essentially
evidence-based policing is about the integration of science into policing. I wrote a piece with Peter Neyroud, and we called it science and policing, because we think that the essential ingredient
of the evidence-based policing movement is the idea of bringing science into policing.
And science includes evaluations of what the police do.
It includes basic research to understand the elements underlying the crime problem,
it includes evidence about, or studies of, the services the police bring
and how they bring those services. And I should note that recently, Peter and I have written an
article in which we've said that to add to that, it also brings the science of ethics. For example,
medical ethics is a very important aspect of science and medicine. That's fantastic. And I want to ask you a
specific question if I could, David. With evidence-based, I wouldn't say it's thrown
around. It's used carefully and probably correctly. But to me, and just going back to the basics in,
say, grad school, the scientific model includes logic and evidence. And so I think in my talking with practitioners and working with them,
that's a part of what we talk a lot about: hey, you need a framework, you need a logic model,
and you need evidence, not just evidence or not just a framework. What are your thoughts on that?
Am I on the right track or how would you advise and what are your thoughts around that?
Look, one thing that happens when people talk about evidence-based policing or evidence-based
medicine is they tend to focus on the importance of experimental evidence in drawing conclusions
about treatments or programs or strategies or practices. But science involves a lot of other
activities. As I noted, basic science to understand the mechanisms underlying problems so you can develop solutions that work.
It includes a whole series of different types of methodologies as well. And I think this is where some of the confusion comes from.
I was once talking with someone about evidence-based criminal justice more generally, and they said, David, what do you think, we weren't doing evidence-based work before this idea came along? We think we have evidence as well.
And this is one of the confusions.
What do you mean by evidence?
When I talk about science,
there's a fairly well-understood idea
about what constitutes good evidence.
And in that context, we bring these different approaches
because different approaches are used to understand problems and do something about them.
But we also recognize that there's, if you like, a hierarchy of evidence in science.
Some evidence is stronger than other evidence, and we accept that as we accept the general idea of science and policing.
No, good. I love it. We're not focused only on experimental designs. We're focused on better understanding why something's happening and then, okay, what can we do about it? And then let's trial that. But there's this logic: the underlying mechanisms that create the problem, and then the mechanisms of action of what we're trying to accomplish.
You can't do good experimental research without good basic science. Because essentially,
experiments are the best way to learn something. If I want to know whether one treatment is better
than another, whether a treatment works,
field experiments, randomized experiments are the best method to do that.
There's no argument statistically, if you like.
That is the best method.
But experiments are narrow. You have to know what you're studying so you can carry out your experiment in such a way
to see whether that works.
Experiments are not open.
They're not broad. They're usually very narrow. That's the way they develop. They're narrow,
but they give you very good answers to the problems that you're looking at. Well, that
means you have to have a very informed effort on what the treatments, practices, programs,
policies that you want to test experimentally. So I see these things as interacting. The problems
come when people start saying, well, you know, maybe experiments are not the best way to do it.
You know, why are experiments better? In other words, in a sense, science provides answers to
those problems. Experiments are better because of underlying statistical theory that tells us when
you use randomized design, you get
a more solid answer. There are fewer threats to the validity of that answer, fewer threats to
its believability. So in that context, experiments are the best way to get the answer about whether
this or that program works. It's not the only way to get information and learn. And people often get
these things mixed up. There's no battle when
it comes to statistics, if you like, no battle. The randomized experiments provide a stronger
method for reaching conclusions. However, sometimes we're looking for different types of knowledge.
Sometimes we can't use that method and we have to use others. And then we have to think about
how close those other methods get to providing believable conclusions.
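To make that point concrete, here is a minimal Python sketch of the randomization logic Weisburd describes: units are assigned to treatment or control by chance alone, so, in expectation, a difference in group means reflects only the treatment. The site counts and outcome numbers are illustrative assumptions, not data from any study discussed in this episode.

```python
import random
import statistics

# A minimal sketch: assign units (e.g., stores or hot spots) to
# treatment or control at random, then compare group means on the
# outcome. All numbers here are hypothetical.

random.seed(42)

units = list(range(40))                      # 40 hypothetical sites
random.shuffle(units)
treatment, control = set(units[:20]), set(units[20:])

# Hypothetical outcomes: incident counts after the intervention,
# with a built-in effect of -3 incidents for treated sites.
outcome = {u: random.gauss(10, 2) - (3 if u in treatment else 0) for u in units}

effect = (statistics.mean(outcome[u] for u in treatment)
          - statistics.mean(outcome[u] for u in control))
print(f"Estimated treatment effect: {effect:.2f} incidents")
# Because assignment was random, any systematic difference between
# the groups other than the treatment itself is ruled out in expectation.
```

I like it. And learning from you and Larry and many others,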
but particularly reading some of your research and your books, that's what we've tried to adapt
here. And we've tried to understand, okay, let's zoom in and zoom out. I think that's a big part of this. All right,
we're having this loss issue or this intimidation or whatever serious violence issue. Let's
understand where is it happening on a macro level. Let's move through into the micro level. Let's
interview and talk to the people involved on both sides, the red and the green, we call them,
and the victim and the victimizer. Let's understand. Let's look at video footage. Let's look at what's going on. Let's try and make sense of the world. Use observational studies and other
things to make sense. Now we can start to prescribe where we want to go. All right,
what really makes sense here to affect the issue, the problem,
the mechanisms that we're observing?
Yeah.
And then, like you say, maybe we've got two or three options we want to trial.
You know, 2020 is such a horrific time with the pandemic, but I'm wondering and hoping that one of the positive things is people understanding, or thinking a little more, about science, and then looking at how all these things that you're describing, David, are taking place: all this science around the underlying mechanisms, and how and why each environment, in other words, each body, is a little different, and whether you get the disease or not and how serious it is and so forth. And then, what are we going to do about it? And then how are people differentially responding? So back to you.
You've done some experiments yourself in terms of protecting products and issues of this sort. And underlying those experiments, I'm sure, was perhaps some qualitative work, as you note, to understand why people steal, how they choose, you know, the kinds of vulnerabilities that they see in stores and other places. And I'm sure you've also looked at statistics about where in stores you have sales, where in stores things get stolen more often. And then you say, well, look, I'd like to see if I can prevent this, and you use all that knowledge you've gained using observational studies, using qualitative studies, using statistical studies. You take all that and you build an experiment based on a treatment that you put together from that. So all these different methods provide an integration with each other.

I love it. And I couldn't have said it
better. And I think that's the big part of what we're trying to get at here today, David, just this discussion now initially. And look, science is a process. It's not a thing. It's not a religion. It's a process. But it involves a lot of different components.
They're all complementary to each other, hopefully, as you said.
Let's understand now.
Okay, let's come up with how we want to treat.
All right, but wait, there could be different dosing options.
All right, let's trial that.
But there's a logical reason we've got what we're doing and why we're going to do it this way.
And then let's see.
No, okay, that didn't seem to work, or work as well. But because we've identified these things, as you said, David, before the experiment, as mad scientists we've got some dials here.
Okay, well, I think now I can see what we want to dial, instead of having no idea what happened or why it didn't work.
Excellent.
I think another thing that I wanted to ask you about, and I'll never forget, in one of my first articles that you were involved with at the Journal of Experimental Criminology, you said, Read, can you put a little more about the real world in here?
It's so important to understand the cost, the cooperation, the participation, all the
dirty underside of research, the real world of research.
Can you talk a little bit about that?
Because you've been involved in so many types of experiments at different scales, some at the micro scale, some of your observations I thought would be really, really neat.

Well, Read, are you asking me about the kinds of problems you encounter when you try to carry out experiments?

Yes. Yes. The types of problems and how we handle those, so that those out there trying to conduct experiments, or participating like an agency, know this is okay.
It's okay.
Let's work through this.
Read, I'll tell you, sometimes when I talk about this issue with experiments, I start out with a story.
It's actually an Israeli joke, if you like. And it goes this way: this fellow, he dies, and he goes up to God,
and God says to him, well, I'm sorry, you're not going to come to heaven. You're going to have to go to hell. But there are two choices or two alternatives. And the fellow says, well, how can
I choose? And God says, well, I'll let you visit each place. And then you'll tell me where you want
to spend eternity.
And the fellow says, what do I do?
And God says, go to sleep, and when you wake up, you'll be in regular hell.
And so he goes to sleep.
He wakes up, and it's fire, brimstone, devils, you know, Dante's Inferno, terrible.
And he says, God, God, and God says, what? And he says, I'd like to see the other place, Israeli hell.
Okay.
God says, okay.
So go to sleep.
You wake up in Israeli hell.
And he wakes up and it's all green rolling hills.
Israel is a very beautiful place.
People are mingling.
They're having barbecues.
And children are dancing the hora, you know, the traditional
Israeli dance. And the fellow says, God, God. And God says, what? He says, well, great. You know,
this is where I want to be, in Israeli hell for eternity. And God says, okay, go to sleep, and you'll wake up and be in Israeli hell for eternity. So he wakes up, and it's worse than the previous place, regular hell: more demons, more fire, more Dante's Inferno. And he says, God, God. God says, what? He says, this isn't where I was yesterday. And God says, yesterday you were a tourist.

So I think experiments are a little like that. In other words, in statistical theory,
experiments are amazing because they give you, if the treatment group does better than the control
group, there's no reason for that other than the treatment itself, because the treatment has been
randomized. So everything else is random, but the application of treatment. But the reality is that experiments suffer from something else, which is they're inflexible to problems. In other words,
if you run an experiment, you got to think of everything beforehand. That's different than the
way traditional research is done, where you get data and then you use statistics and add new data
and you correct for the problems you had. In randomized experiments,
you can't do that. Everything has got to be set up well at the beginning so you can carry out
the experiment with integrity and so that you have relatively few violations, as few as possible,
of the experimental regimen. If you do have violations, you have a problem. There are very
few ways to correct for that. So experiments have this great advantage of
providing a really solid, believable answer to the question. But they only provide that answer
when you cover things all throughout. That means you have to be careful about the number of cases
that come into an experiment. A lot of experiments fail because they thought they'd get a lot of
cases for randomization, and then they don't.
Experiments, you have to be careful that there isn't an overlap of treatment, a contamination,
that the treatment group gets treatment and the control group doesn't get treatment.
And you have to make sure those two things don't overlap. That sounds easy, but often in human relations, people hear about something, they know something, people get put in the wrong group, etc.
You have to be really careful about those sorts of issues.
There's also questions about the dosage involved.
I mentioned before that experiments, you have to be well prepared for it.
Because let's say that you've given the experiment a certain dosage.
Let's say it's one day a week of police presence at hot spots. Now, let's say the reality is that you need two days a week to know whether police presence is going to have an effect. But you haven't tested that. You've selected a treatment, usually one treatment, one day a week. So you better make sure the
treatment is of high enough dosage that you're going to get the kind of response you want.
Indeed, one of the things we don't do enough of is we don't experiment across dosages, often because we don't have enough
potential opportunities for experiments to identify the correct dosage. But anyway,
so I always say that running experiments is a little like the Israeli joke, Israelis sort of teasing themselves, if you like, about what it's like to live there, with the tourists having such a great time. I should note Israel is a nice place more generally, but nonetheless, I think of this joke about Israeli hell: you think it's going to be great, and then boom, you find reality. The task is to be well prepared for problems that come along the way.
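One concrete piece of that preparation, sketched here rather than taken from the episode, is the pre-experiment power calculation behind Weisburd's warning that studies fail when they don't get enough cases. A minimal Python sketch using the standard two-sample normal approximation; the effect sizes, alpha, and power are illustrative assumptions only.

```python
import math
from statistics import NormalDist

# Rough sample-size planning for a two-arm experiment. Effect size is
# a standardized difference (Cohen's d); all inputs are hypothetical.

def cases_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate cases needed per arm to detect a standardized effect."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2)

# A weak dose yielding a small effect needs many more cases than a
# stronger dose yielding a medium effect:
print(cases_per_arm(0.2))   # small effect  -> 393 per arm
print(cases_per_arm(0.5))   # medium effect -> 63 per arm
```

I love it. Couldn't be more relevant, and I think we've now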
done, conducted over 32. I know we haven't published them all yet. That's on me. But, you know,
on the one hand, you feel like a grizzled veteran, or I do at this point. On the other hand, man,
you know, every experiment, okay, I mean, there was no way I saw that coming. But we had a really nice one laid out in this drugstore chain, a major one you would know of,
randomized selection, randomized to treatment and control.
But we actually had three different treatment versions, and one was this display fixture that would provide protection, you know, it was a protective device.
Well, so here we go and pre-test and then boom,
they implement the treatment in those arms and not in the other arm.
So here we go. And then we do fidelity checks. We do everything the same, as you said,
other than the treatment. And we've hopefully carefully dosed it. And we did it in three
different ways here. We had the luxury of a larger sample size. But two stores in the, I would say placebo, but in the control arm, during the telechecks, we found they had this protective fixture.
I mean, these are brand new. It turns out they had heard about them, got online, got some guy
in Canada to fabricate these things, and they put them in. Because of our checks, we knew about it,
and as you know, then we had the dilemma: do you analyze by intention to treat? Do we report as is, because now they're diluting the contrast, or do we move them over to the treatment arm? Of
course, we report it both ways. So any thoughts on that? But that's maybe a real-world example,
like you got to be kidding me.
But first of all, one thing I'd say is that maybe, Read, after this interview, we could talk for a moment, because I have an idea about something else for you.
But I think with experiments, you might say, the extent to which there's fidelity to the experimental design is the extent to which the results can be believed.
The closer it gets to the perfect situation, the more believable things get. The real world is full of problems. Look at
COVID. I have two experiments going on in the field, and boy, have those changed as a result of
COVID, both in terms of data collection and in terms of the treatments involved. So the real
world, when you're doing field experiments, as you know, the real world can create problems. By the way, there are some statistical solutions,
like in the case you have, I don't think you have enough of a crossover to create really
serious problems. But there is a fellow named Angrist, an economist,
who developed an idea using instrumental variables to deal with the problems you have. So
there was an article in
the Journal of Experimental Criminology about that when I was editor. We can talk about it
another time. But you can often try to use some statistical solutions. But experiments are a bit
inflexible. What do you do? The example you gave, you have two people cross over to another group.
What do you do now? You can't just control out for that; it's not very easy. Angrist's is just one method. You're kind of stuck. Either you treat them as treatment cases or control cases. And of course, the general consensus is you treat them as they were randomized, irrespective of what happened to them. That's the more conservative approach.
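To make the three options just discussed concrete, intention-to-treat, as-treated, and the Angrist-style instrumental-variables adjustment, here is a minimal Python sketch. The store counts and loss figures are hypothetical, not the drugstore study's data; the IV line is the classic Wald estimator, the ITT effect rescaled by the difference in treatment uptake between arms.

```python
# Hypothetical two-arm trial with two control stores that adopted the
# fixture on their own (crossover). All numbers are made up.

stores = (
    # (assigned_to_treatment, actually_treated, post_period_losses)
    [(1, 1, 80.0)] * 20          # treatment arm, compliant
    + [(0, 0, 100.0)] * 18       # control arm, compliant
    + [(0, 1, 82.0)] * 2         # control stores that installed the fixture
)

def mean_loss(rows):
    return sum(r[2] for r in rows) / len(rows)

assigned_t = [r for r in stores if r[0] == 1]
assigned_c = [r for r in stores if r[0] == 0]

# Intention-to-treat: analyze everyone as randomized.
itt = mean_loss(assigned_t) - mean_loss(assigned_c)

# As-treated: group by what stores actually did (risks bias).
treated = [r for r in stores if r[1] == 1]
untreated = [r for r in stores if r[1] == 0]
as_treated = mean_loss(treated) - mean_loss(untreated)

# Wald/IV estimator: ITT effect divided by the difference in the
# share actually treated in each assignment arm ("compliance").
p_t = sum(r[1] for r in assigned_t) / len(assigned_t)
p_c = sum(r[1] for r in assigned_c) / len(assigned_c)
iv = itt / (p_t - p_c)

print(f"ITT: {itt:.1f}  as-treated: {as_treated:.1f}  IV: {iv:.1f}")
```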
And we took a gamble, and we got it through by reporting it both ways. And I can only imagine, yeah, during the pandemic, we've had to do things virtually or get people to point out things on life-size boards while we're behind plexiglass in the field,
but you keep moving and adjusting and adapting. So this is a great conversation. And I really wanted to touch
on that real world aspect. But I want to ask you too, a little bit about say law enforcement
agencies. We have a little more cooperation, if not a lot more, from these companies that we work with, if you will, than from an agency. And we're, again, working in a micro environment, so we have somewhat less conflict than if we're ranging across precincts or different geospatial metrics or whatever areas.
But how do you deal with agencies to get some of those results that they really need, and we all need as citizens?
Yeah, I think that I'll start with policing and then go beyond.
But I think in policing, you know, a number of us have suggested the importance of police getting training and education that makes them understand why experimentation and why evidence is so important to what they do.
We mentioned medicine before.
One of the advantages of medicine is that all the medical professionals have to go to medical school, and those schools are where the research is being done.
So the idea of evidence gets a great deal of support in their
education. And it's also the case that they learn why, for example, experiments are better than
other sorts of approaches for answering questions about what works. So there's a long way to go compared to medicine. Jonathan Shepherd, who received the Stockholm Prize for his work on developing bottles that would cause fewer facial injuries, has been developing programs in the UK over the years for integrating science, if you like, into the education of police officers. It's something that I think is slowly happening.
There's a lot of resistance in the U.S. Some of that comes from
unions who want to say that, you know, we didn't need a college degree when we became police
officers. And so there's resistance to educational requirements. But educational requirements
themselves won't necessarily do it because that education has to be an education that emphasizes
the importance of science and evidence-based policing.
So it's not just getting them into college, it's getting them into programs in which those things are taught and learned.
In other areas, I think a lot depends on the area and who you're dealing with.
When you're dealing in areas where people have graduate school educations, et cetera, and some contact with research, that can make it easier. Unfortunately, many people in business and other areas may or
may not have those sorts of backgrounds. I would hope in many of the programs they went to, they
would. Part of it, I think, is that in the policing area, when I ran the first experiment with Larry Sherman back in 1995, a randomized experiment in policing was a crazy, radical idea. And we had Tony Bouza, who was a police chief who was crazy and radical. I don't mean that in a bad way. I mean, he thought outside the box. I think now there are more and more police who are understanding the importance of evidence and the importance of using science in what they do.
So things are changing. There's still a long way to go. It sounds like you're also having some pushback.
Yeah, maybe it's human nature: I like science when it's something I agree with, and when it's the opposite, not so much. But by and large, there is some agreement. The problem we run into also is Six Sigma and some of these efficiency exercises or processes that are out there that really aren't very scientific. You know, they do force you to do close examination and try to better understand mechanisms and the dynamics that are occurring. But when it comes to better sampling and, as you know, measurement and analytics, not so much. And so sometimes we'll have that, well, here you go, here are your test scores.
Wait, wait a minute, what? So it's that kind of thing. It's not necessarily battling the people we work with, the equivalent of the police chiefs, but rather people embedded in the organization whose role it is to assess and analyze, you know, changes that you're going to make.

I think this is key. A famous economist once said,
the reason why policymakers don't like evidence is because it makes reaching conclusions more
difficult. In other words, if you can just reach conclusions on the basis of your stomach feelings,
that's pretty easy. You can move pretty quickly and evidence might hold you up.
I think here we have some elements that relate both to the researchers and practitioners. You know, we have to develop
evidence in ways that are useful and timely, right? That doesn't always happen. And we have to
provide a good case for why evidence is important. You know, one case I've always thought of is that
many police practices that are very exciting in the beginning and brought out by specific chiefs, you know,
get a lot of publicity and then a lot of people adopt them. And then scholars look at it and then
they find it really didn't work. And so it's sort of a circular type of thing. So I always say, wouldn't you rather know earlier on what some of those outcomes are
rather than sort of go all in and find out that it really doesn't do very much?
This takes a commitment from practitioners because, you know, another story,
Hubert Williams, who passed away recently, was a great guy.
And he and I went to New York City when I was working with the Police Foundation.
And it was during the beginning of the Compstat era, after Bratton.
And we went to the commissioner, and we tried to convince him to allow us to do a study of Compstat, of the effectiveness of Compstat.
And I explained to him that the data they were using so far was not very convincing.
For example, the murder data, because other states were having similar declines.
And at that point, the commissioner turned around to me and he said, David, you can only bring me bad news.
I mean, Compstat was on the cover of Time magazine, the New York Times, the Wall Street Journal. Everybody knew that Compstat was a great success.
So from his view, why would he do research on it?
It doesn't help him, right?
Everybody loves him, right?
So and I thought about that afterwards.
And I thought that part of what we have to do is make practitioners understand they have, if you like, a moral responsibility, especially in policing.
And business may be a little different, but they have a moral responsibility.
We wouldn't like it if a hospital that was giving treatment for breast cancer refused to do research to see whether those treatments actually worked, right?
We would think that was wrong.
That would really worry us, right?
I think it's the same thing in policing.
That kind of attitude, if you can only bring me bad news, is something we really want to avoid.
Now, it's also the case, besides the moral element, there is an economic and an efficiency
element. If you use evaluation science, you will be in a better position to make the case that the
treatments work if they do work.
And, you know, in the end, I've seen this in different countries, that the ability to bring evidence matters. Also, today, many policymakers, many elected officials, they understand the importance of evaluation science, of knowing whether something would work, of seeing evidence.
So if you take the lead in that as a practitioner,
it gives you an advantage, I think, in the long run.
Now, those are all great insights. And I like that, you know, that some people get wedded to these ideas; for them, things are fixed in place and time. But, you know, I think we all know, too, about the news cycle. Okay, you've got some good pub out there, some publicity, but I don't think anybody's going to really remember it. Let's keep going.
But I think some of the things we've been doing is, hey, we actually really are here
to work with you to help you improve the outcomes.
We want you, not us, to be the hero if we can help do that.
And we also try and identify and groom and form, if you will, a champion,
an inside champion for the idea and include them in planning, help them understand the logic.
Here are some other options. And they'll do these small tests with a handful of places, those with extreme values, with regression to the mean or whatever issues and confounding all around.
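As an aside, here is a minimal simulation, with entirely made-up numbers, of the regression-to-the-mean trap just mentioned: sites chosen for a pilot because last period's losses were extreme tend to look better the next period even with no treatment at all, which is exactly why a randomized comparison group matters.

```python
import random

# Simulate 500 hypothetical sites with two independent periods of
# loss data (mean 50) and no intervention anywhere.
random.seed(7)
sites = [[random.gauss(50, 10) for _ in range(2)] for _ in range(500)]

# A naive pilot picks the 25 worst sites from period 1.
worst = sorted(sites, key=lambda s: s[0], reverse=True)[:25]

p1 = sum(s[0] for s in worst) / len(worst)
p2 = sum(s[1] for s in worst) / len(worst)
print(f"Worst sites, period 1: {p1:.1f}   period 2 (no treatment): {p2:.1f}")
# Period-2 losses fall back toward the overall mean of 50; without a
# control group, that natural drop can masquerade as a treatment effect.
```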
But normally, you can win them over.
They're more and more interested.
I think the other thing is they're more interested if they're not paying for the exercise itself.
In our case, we're blessed in that the solution providers, the treatment developers and implementers, they have budgets now.
And more and more, the decision makers are saying, you know, I kind of want this Good Housekeeping seal of approval. I'd like to do something differently. It's not just me. So, interesting dynamics. Very helpful.
Thank you for listening to our podcast discussion with David Weisburd.
We will continue our conversation in a second part with our featured guest next week. Please
stay tuned. Thanks for listening to the Crime Science Podcast presented by the Loss Prevention
Research Council and sponsored by Bosch Security.
If you enjoyed today's episode, you can find more crime science episodes and valuable information at lpresearch.org.
The content provided in the Crime Science Podcast is for informational purposes only and is not a substitute for legal, financial, or other advice.
Views expressed by guests of the Crime Science Podcast are those of the authors and do not reflect the opinions or positions of the Loss Prevention Research Council.