Research: The operationalization of bodycam data
Washington State University researchers are working with police departments to objectively review videos to benchmark officer performance and inform training
This article appeared in Police1’s 2022 guide to body-worn cameras.
An agency’s body-worn camera video contains multiple data points that can be operationalized to benchmark officer performance and inform training. Tapping into that wealth of knowledge is the mission of David A. Makin, Ph.D., an associate professor in criminal justice and criminology at Washington State University and director of WSU’s Complex Social Interactions Lab.
Through data analytics and machine learning, Makin and his team code and catalog key variables in bodycam videos associated with a range of outcomes specified by the agencies participating in the research. Importantly, the work undertaken in the lab captures situational and environmental factors, such as geographic location, ambient noise level, time of day, and the presence of and actions taken by bystanders, to better contextualize and therefore better understand interactions between police and the community.
Recently, WSU’s research team passed a significant milestone of 20,000 hours (nearly 120 weeks’ worth) of analyzed footage. I sat down with Dr. Makin to discuss how this research can contribute toward improving police-community interactions and create data-driven solutions for enhancing situational awareness, officer safety and de-escalation.
Can you describe the goals of your research?
Our work is truly about helping agencies see body-worn cameras less as an expense or unfunded mandate and more as an investment to move training beyond the classroom, mitigate risk and improve performance.
We are fielding a few different studies and are in Phase I of a project developing a training repository to support the field training officer (FTO) process and crisis intervention team (CIT) training. For the latter, we screened samples of BWC footage for observable signs of cognitive impairment.
The goals of the project are to examine how well officers identify the behavioral cues and explore the factors associated with the detection of these cues. For example, do CIT-trained officers perform better at identifying those behavioral cues?
We are also fielding a multi-agency project examining differential treatment in traffic stops, which is an extension of work originally supported by the WA State Traffic Safety Commission. In this study, we are benchmarking the core components of procedural justice; importantly, these are objective measures. For example, how frequently do officers state the reason for the stop? We spent several weeks working with each agency on developing an instrument that best represented procedural justice at the objective level.
There is a range of other projects, though at the center of them all is assisting agencies in maximizing their BWC program.
When an agency approaches you, what is the first step in the process?
A lot of what agencies want has to do with benchmarking, so the first thing that takes place is establishing an agency’s benchmarks. So, for example, when it comes to analyzing officer use of force, our goal is not to add a label to say whether an incident is right or wrong but to objectively model an agency's use of force. We look at every interaction they have recorded to analyze things like:
- What is the first point of contact?
- How quickly is force used?
- What is the duration of that applied force?
Once you have all that data, you start to see patterns where you can say, “OK, officers appear to be quicker here or slower here to use force.” Again, we are not putting labels on it to say whether an interaction is good or bad but providing data so agencies are informed and can ask questions such as: “What is too fast when using force?” or “What is too slow when using force?” It allows agencies to look at any patterns and identify problematic areas, as well as what officers are doing right, as dictated by policy.
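The benchmarking idea described above can be sketched as a small computation over coded incidents. This is an illustrative sketch only; the field names and numbers are hypothetical, not the lab's actual coding schema.

```python
# Hypothetical sketch: given incidents coded from bodycam footage, summarize
# how quickly force is applied after first contact and for how long it lasts.
from statistics import mean, median

incidents = [
    # seconds from first point of contact to force, and duration of applied force
    {"id": "A1", "time_to_force_s": 45, "force_duration_s": 12},
    {"id": "A2", "time_to_force_s": 210, "force_duration_s": 8},
    {"id": "A3", "time_to_force_s": 95, "force_duration_s": 20},
]

times = [i["time_to_force_s"] for i in incidents]
durations = [i["force_duration_s"] for i in incidents]

# These summaries are the benchmark; whether a value is "too fast" or
# "too slow" is a question the agency answers against its own policy.
print(f"median time to force: {median(times)} s")
print(f"mean force duration: {mean(durations):.1f} s")
```

With enough coded incidents, an agency can compare these distributions across shifts, units or call types rather than judging any single interaction in isolation.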
Another example would be the use of directed profanity. If you were to hear officers use a phrase such as “Like, ain't that some shit,” some people might have a problem with that, but others might say it's kind of how we communicate. However, if an officer says, “Why are you being an asshole?” that's directed profanity. We have coding for that, so if we review bodycam footage of traffic stops or other types of interactions, we can give data back to an agency that shows a random sample of those interactions and what percent of the time officers are using directed profanity.
If the agencies are willing, then they can start to learn from each other and say, “Your rates of directed profanity are zero but ours are 8%. What are you doing differently to achieve this?”
We also code for what the person of interest or bystanders are doing because policing is complex and human interactions can be complex. So, then we can give context to when an officer may be using directed profanity.
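The rate reported back to agencies is a simple proportion over a random sample of coded interactions. A minimal sketch, with hypothetical coded data:

```python
# Hypothetical coded sample: True means directed profanity was observed in
# that interaction's footage. Real coding would also capture context, such
# as what the person of interest or bystanders were doing.
sample = [False, True, False, False, False, False, False, True,
          False, False, False, False, False, False, False, False,
          False, False, False, False, False, False, False, False, False]

rate = 100 * sum(sample) / len(sample)
print(f"directed profanity rate: {rate:.0f}%")  # 2 of 25 interactions = 8%
```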
Can you talk about the work you are doing to use bodycam video data to improve field officer training?
We're building a repository for a smaller agency that doesn't get a lot of certain types of calls. They would like their trainees, when going through the FTO process, to be exposed to certain interactions, such as crisis contacts, domestic violence calls, or interactions with a certain level of intensity. The easy part is that we have this great database we can mine; the more challenging part, of course, is identifying and understanding what makes a police-citizen interaction good. To do that, you must sit down with trainers, which is what we're going to do in the fall. What I've found is that the hardest thing for an agency is to quantify what makes an interaction “good.” The long-term goal is to work with police trainers and experts who can analyze the interactions.
We've spent so much time talking about accountability in the context of bodycam video as a record when something goes bad, but there’s another way to look at this technology: It is also a record of exemplary behavior. We really should be learning from those exemplars as well.
Is the data available only to the agencies that signed up with you, or can any agency view it?
For right now, only the agencies who partner with us can see the data as we adhere to strict confidentiality guidelines as a research lab. But if agencies want our code books, we are happy to share them.
Could the data you gather be used alongside early intervention programs where the review process could occur in real time?
We have a provisional patent for software that could accomplish that. It is called QPI, which stands for “quantifying police interactions.” It's built on a semi-automated machine learning platform, and the goal of the software is to do exactly what we do in the lab but to allow agencies to do it themselves. They would be able to use the software to go through and objectively identify what officers should be doing, or in some cases not doing. But our software isn’t designed to work as a “gotcha”; it is built on the foundations of evidence-based practices, so you could identify when interventions were needed. Importantly, we are offering QPI as software-as-a-service, which ensures the price point covers maintenance of the software while keeping the cost as low as possible.
At the early intervention level, here’s an example. We did a project for an agency’s domestic violence unit where the unit’s sergeant wanted to analyze how, or if, officers were using trauma-informed practices. We reviewed the video and provided an objective analysis of how often officers were referring victims for services, explaining the next steps, etc. Law enforcement spends a lot of money on training, so the goal is to find out if it is working.
How would agencies use that data for training?
Let’s take directed profanity. You can have a random review and then sit down and talk to officers. Maybe you just have a conversation at every shift briefing around being mindful of how officers speak to people.
Or if you train officers on procedural justice, where there are x number of things an agency expects officers to do, you can screen those interactions (say, on traffic stops) and get back data on how often your officers are performing what we would call key performance indicators (KPIs). Say you are measuring eight things: you could find the “hit rate” on those and then look at how those KPIs vary by the race or gender of the driver. You can even do it based on location to see if there is a differential effect based on the neighborhood.
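The KPI "hit rate" and its disaggregation by driver group can be sketched as follows. All indicator and group names here are hypothetical; an agency would define its own set of up to eight procedural-justice indicators.

```python
# Sketch of a KPI "hit rate": each stop is coded 1/0 on each indicator,
# and the rate is hits divided by total opportunities (stops x indicators).
from collections import defaultdict

KPIS = ["stated_reason", "asked_questions", "explained_next_steps"]  # up to eight

stops = [
    {"driver_group": "A", "stated_reason": 1, "asked_questions": 1, "explained_next_steps": 0},
    {"driver_group": "A", "stated_reason": 1, "asked_questions": 0, "explained_next_steps": 1},
    {"driver_group": "B", "stated_reason": 0, "asked_questions": 1, "explained_next_steps": 1},
]

def hit_rate(rows):
    hits = sum(r[k] for r in rows for k in KPIS)
    return hits / (len(rows) * len(KPIS))

# Disaggregate by driver group to look for differential treatment;
# the same grouping could be done by neighborhood or gender.
by_group = defaultdict(list)
for s in stops:
    by_group[s["driver_group"]].append(s)

print(f"overall hit rate: {hit_rate(stops):.0%}")
for g, rows in sorted(by_group.items()):
    print(f"group {g}: {hit_rate(rows):.0%}")
```

A large gap between groups would not by itself prove differential treatment, but it flags where an agency should look more closely.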
Any closing thoughts?
We’ve essentially already done 20,000 hours of ride-alongs, so we can look at interactions to see all the different ways they've been handled and start to isolate what works and what doesn't.
The big one for me is de-escalation. What does de-escalation objectively look like? You probably would get 100 ways that different trainers train de-escalation, but what we can start identifying is whether those techniques are effective. From the video reviews we have conducted, I would say it is not so much the de-escalation, as it's the non-escalation. Like, don't do these things. Like, don't ever say “Because I told you so.” Don't threaten people with arrest if they don't listen to you.
The data can show officers: we've been telling you not to do these things, and here is what happens when you do them. Then it's not just anecdotes. It's not just the trainer saying this or that. It's using objective data to say don't do these things or do these things, and the trainer is able to engage with the video footage. Agencies need to be learning from this footage.
How can agencies reach you to find out more about your work?
Our website is the best way to reach us. It has our contact information and a FAQ page for agencies.