5 steps to true AI in law enforcement

Systems that can justify their results are crucial for policing purposes

By Bob Griffin

One thing I believe is extremely important in educating people about the capabilities of artificial intelligence (AI), not just in law enforcement but in the wider community, is understanding that five things need to happen for a system to be truly artificially intelligent. These are what I call the five steps to true AI.

While all these steps are important, from my perspective, one step is most critical: justification. Specifically, how did the AI reach its conclusion, and what specific steps did it take to get there? While I believe the justification layer is important in every application of AI, it becomes more critical when it comes to law enforcement, the Department of Defense and military intelligence.



A good way to understand and to put justification into perspective is to share a real societal example. An area that’s been getting a lot of bad publicity lately is the use of facial recognition technology and the concern that many facial recognition capabilities on the market come with inherent bias.

This was further reinforced by a 2018 study by Joy Buolamwini, a Massachusetts Institute of Technology grad student at the time, in which she found gender and skin-type biases in commercial artificial-intelligence systems, citing error rates of 0.8% for light-skinned men and 34.7% for dark-skinned women as examples.

Buolamwini went on to found the Algorithmic Justice League to raise public awareness about the impacts of AI. In an opinion piece published in The New York Times, she wrote, “Everyday people should support lawmakers, activists and public-interest technologists in demanding transparency, equity and accountability in the use of artificial intelligence that governs our lives. Facial recognition is increasingly penetrating our lives, but there is still time to prevent it from worsening social inequalities. To do that, we must face the coded gaze.”

I agree with many of the points Buolamwini makes, but what I worry about is that we will end up throwing the baby out with the bathwater and stop utilizing important technology when there are ways to solve, for example, the bias concern. Much of the bias associated with facial recognition technology comes from the learning or training model environment. If you train the AI on a set of images that are primarily made up of a white demographic, the system will likely reflect that bias. The same holds true if you train it on a population that has a preponderance of non-white individuals. 
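The training-set effect can be sketched in a few lines. Below is a minimal illustration with entirely invented numbers: a single decision threshold is fit to the pooled training data, and because one group dominates the pool, the threshold lands near that group's optimum while the underrepresented group absorbs most of the error.

```python
# Hypothetical match-confidence values per group and class.
majority_match    = [0.80, 0.90, 1.00, 0.85, 0.95]
majority_nonmatch = [0.10, 0.20, 0.15, 0.25, 0.10]
minority_match    = [0.50]    # far fewer minority training examples
minority_nonmatch = [0.45]

def fit_threshold(pos, neg):
    """Midpoint between the pooled class means -- the learned boundary."""
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def error_rate(threshold, pos, neg):
    """Fraction of samples on the wrong side of the boundary."""
    wrong = sum(1 for v in pos if v < threshold)
    wrong += sum(1 for v in neg if v >= threshold)
    return wrong / (len(pos) + len(neg))

t = fit_threshold(majority_match + minority_match,
                  majority_nonmatch + minority_nonmatch)
print(round(error_rate(t, majority_match, majority_nonmatch), 2))  # 0.0
print(round(error_rate(t, minority_match, minority_nonmatch), 2))  # 0.5
```

The numbers are contrived, but the mechanism is the one described above: the model is not malicious, it simply reflects whatever population it was trained on.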

This is why justification is such a must-have. If you can demonstrate and document the steps the AI engine went through to explain how it made the decision, reached its conclusions and narrowed its choices to a subset, the process becomes fully transparent. That means an individual can stand up in court or during the discovery process and confidently outline all the steps the engine went through to identify, for example, an individual in a particular incident.

For AI to be widely accepted and adopted, you need to be able to document the decision cycle. Gone are the days of black-box architectures and non-auditable functions that just spit out a “trust me” result. With more scrutiny being put on policing and increased community demands for transparency, this requirement must be a priority.

But to get to the justification layer, several things need to happen:


1. Discovery

In the AI/machine learning (ML) world, we hear a lot about supervised, semi-supervised, unsupervised and reinforcement learning; discovery is the process that underlies all of that learning. True discovery is very much like how a child learns.

We don’t have to explain to a child what a table is or what a glass is. We may have to teach them that this object is called “glass” or “table,” but instinctively they understand that these are separate objects. As they explore their environment, they discover more; a glass stays when you place it on top of a table; a glass falls when you place it against the wall. That’s discovery, and the more accurate an application is in discovery the more effective it is.
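Discovery in this sense can be sketched as unsupervised grouping: the system is never told the categories exist, it finds them. Below is a minimal toy example, a tiny two-means clustering over made-up measurements, standing in for the child noticing that glasses and tables are different kinds of things.

```python
def two_means(values, iterations=10):
    """Split a list of numbers into two clusters by iterating means."""
    c1, c2 = min(values), max(values)          # initial cluster centers
    for _ in range(iterations):
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        c1 = sum(g1) / len(g1)                 # re-center on each group
        c2 = sum(g2) / len(g2)
    return sorted(g1), sorted(g2)

# Heights (cm) of "glasses" and "tables" mixed together, unlabeled.
objects = [9, 11, 10, 74, 76, 75, 8, 73]
small, large = two_means(objects)
print(small)  # [8, 9, 10, 11]   -- glass-sized objects
print(large)  # [73, 74, 75, 76] -- table-sized objects
```

Nothing told the algorithm that "glass" or "table" exists; the separation emerged from the data, which is the essence of discovery.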


2. Prediction

You can think of prediction, forecasting what is likely to come next, as enhanced pattern analysis. Typically, the more robust the discovery, the more accurate the prediction.

Imagine if discovery tells you that the number one stolen vehicle is a Ford F-150, more car thefts occur from mall parking lots, incidents increase toward the latter half of the month with more incidents reported between 8 – 11 p.m., thefts increase on moonless nights and there is a significant increase during light to moderate rain activity. Then, the prediction layer can correlate and connect disparate data and likely produce an accurate or, at minimum, a highly probable event calendar.
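A very simple way to picture that correlation step: each discovered factor carries a weight, and the prediction layer combines whichever factors are present into a relative risk score. The factors and weights below are invented for illustration only.

```python
# Risk factors and weights, as if produced by the discovery step above.
FACTORS = {
    "mall_parking_lot": 3.0,
    "late_month": 1.5,
    "evening_8_to_11": 2.0,
    "moonless_night": 1.0,
    "light_rain": 1.5,
}

def risk_score(conditions):
    """Sum the weights of whichever discovered factors are present."""
    return sum(w for f, w in FACTORS.items() if f in conditions)

# A moonless evening in a mall parking lot scores higher than a quiet one.
print(risk_score({"mall_parking_lot", "evening_8_to_11", "moonless_night"}))  # 6.0
print(risk_score({"late_month"}))                                             # 1.5
```

A real engine would learn weights and interactions from data rather than use a fixed table, but the principle is the same: disparate discovered signals combine into a ranked, probable event calendar.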


3. Justification

As previously stated, this to me is the single most important layer in any AI application. If the application can’t demonstrate exactly how it reached its conclusions, it’s not really artificial intelligence.

In the policing world, this becomes essential; it is the AI version of the chain of custody: these are the steps, and this is the conclusion. It must also be auditable and expressed in language that is human-readable.
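One way such a chain of custody could look in practice: every step the engine takes is appended to an auditable trail that renders in plain language. The class, step names and case details below are illustrative, not a real system.

```python
import datetime

class AuditTrail:
    """Records each decision step with a timestamp, for later review."""

    def __init__(self):
        self.steps = []

    def record(self, step, detail):
        self.steps.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "step": step,
            "detail": detail,
        })

    def report(self):
        """Render the full decision path as numbered, human-readable lines."""
        return "\n".join(
            f"{i + 1}. [{s['step']}] {s['detail']}"
            for i, s in enumerate(self.steps)
        )

trail = AuditTrail()
trail.record("input", "received probe image (placeholder case reference)")
trail.record("candidates", "narrowed gallery from 10,000 to 12 by quality filters")
trail.record("match", "top candidate scored 0.91 against threshold 0.85")
print(trail.report())
```

A record like this is what would let an analyst stand up in court or in discovery and walk through exactly how the engine narrowed its choices and reached its conclusion.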


4. Action

The value of an AI application is that it can take action from what it has discovered, predicted and justified, even if the action taken is a decision not to act. In many cases, that action could be a notification (e.g., an alarm or tripwire); in some cases, the action could be human intervention in the decision cycle (e.g., kinetic action); in other cases, it might be some form of predetermined action (e.g., based on this, do this). As with any network science discipline (relationships of relationships, cause and effect), each action has a reaction, which leads to the final requirement.
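Those tiers of response can be sketched as a simple mapping from a prediction score to a predetermined action, including the explicit decision not to act. The thresholds and action names here are invented for illustration.

```python
def choose_action(risk_score):
    """Map a predicted risk score to a predetermined response tier."""
    if risk_score >= 6.0:
        return "notify"       # automatic notification: alarm, tripwire, etc.
    if risk_score >= 3.0:
        return "escalate"     # bring a human into the decision cycle
    return "no_action"        # deciding not to act is still a recorded action

print(choose_action(7.5))  # notify
print(choose_action(4.0))  # escalate
print(choose_action(1.0))  # no_action
```

Note that even the "no_action" branch returns a named result, so the decision not to act is itself justifiable and auditable.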


5. Learning

This is a lifecycle. As the engine goes through the DPJA (discovery, prediction, justification, action) phases, it must learn from each outcome and, based on those learnings, determine what adjustments or tunings to make.

Because this is a continuous approach, each cycle uncovers new outcomes and feeds those back into the engine to help amplify new signals or discover new insights, which in turn produces another round of each phase mentioned above, including learning.
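The continuous loop described above can be sketched as follows. Every function body here is a stand-in: each pass runs discover, predict, justify and act, then feeds the outcome back to tune the next pass.

```python
def run_cycle(weight):
    """One DPJA pass with placeholder logic for each phase."""
    signals = ["factor_a", "factor_b"]          # discover: find signals
    score = weight * len(signals)               # predict: score from signals
    trail = [f"score={score} from {signals}"]   # justify: record the steps
    acted = score >= 2.0                        # act (or decide not to)
    return acted, trail

weight = 0.8
for cycle in range(3):
    acted, trail = run_cycle(weight)
    # learn: nudge the model based on this cycle's outcome
    weight += 0.2 if not acted else -0.05
    print(f"cycle {cycle}: acted={acted}, weight={weight:.2f}, trail={trail}")
```

Each iteration produces a new outcome, feeds it back and re-enters the loop, which is what makes this a lifecycle rather than a one-shot pipeline.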


The application of AI in the policing community is ground-breaking. AI can do incredible things. As more organizations embrace the technology and as more developers focus on applying AI to this community, the advances are going to be exponential, and the results will be game-changing. Whether that’s utilizing the technology in the areas of vision AI, deepfake analysis, identity consolidation/deconfliction, deep-dive analytics, skills augmentation, relationship analytics or data enrichment (as examples), the use cases for AI are ubiquitous.

About the author

Robert Griffin is currently the managing partner of DVI Equity Partners, a private equity investment arm of Diamond Ventures, and a board director at Siren. As managing partner, he focuses on technology investments concentrated on delivering disruptive or disintermediating technology in B2B, critical infrastructure, national security and emerging trends. Mr. Griffin has been a key player and successful serial entrepreneur in the software and services industry for more than 40 years. In October 2011 he facilitated the sale of his company, i2, to IBM’s Industry Solutions Software Product Group, where he remained as the general manager for the Safer Planet and Smarter Cities brand until February 2017.


Copyright © 2023 Police1. All rights reserved.