Demystifying artificial intelligence: The fundamental technologies and how they can impact modern policing

Police leaders must have a clear strategy that includes stakeholder engagement, training and communication before integrating AI into their operations

AI ethics or AI Law concept. Developing AI codes of ethics. Compliance, regulation, standard, business policy and responsibility for guarding against unintended bias in machine learning algorithms.

Suriya Phosri/Getty Images

Artificial intelligence (AI) has the power to transform police organizations by enhancing their efficiency and effectiveness. But failing to set expectations and understand how this technology will work within the context of policing could easily turn a valuable asset into a hindrance. Failure to address ethical concerns such as privacy, potential bias and accountability can also exacerbate tensions and fuel mistrust among both the rank and file and the community, creating obstacles to implementation. It is therefore imperative that police leaders have a clear strategy that includes stakeholder engagement, training and communication throughout the research, development and acquisition process and before integrating AI into their operations.

What is artificial intelligence?

Artificial intelligence encompasses a broad range of technologies and techniques that enable machines to perform tasks that traditionally require human intelligence. AI has evolved over several decades, marked recently by significant advances and paradigm shifts in its capabilities. Before acquiring an AI-assisted product, tool or platform for a police agency, however, it is important to have a basic understanding of the fundamental technologies that underlie most AI-related programs, and how they can impact police organizations.

Machine learning (ML)

Machine learning (ML) is a powerful tool for analyzing vast amounts of data and extracting actionable insights that enhance policing. A key advantage of ML is its ability to automate repetitive tasks and augment the human decision-making process. ML algorithms enable computers to learn from data and make predictions based on a comprehensive review of historical data collected over long periods of time.
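As a toy illustration of "learning from data," the sketch below labels a new case with the label of the most similar past case (a one-nearest-neighbor classifier). The feature vectors and labels are invented for illustration; production ML systems use far richer features and trained statistical models.

```python
# Minimal supervised learning: a 1-nearest-neighbor classifier that "learns"
# from labeled historical examples. Features and labels are illustrative only.
def predict(history, features):
    """Label a new case with the label of the most similar past case."""
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(history, key=lambda example: dist(example[0], features))
    return label

# (feature vector, label): e.g. [hour of day, day of week] -> call type
history = [([22, 5], "disturbance"), ([9, 1], "traffic"), ([23, 6], "disturbance")]
print(predict(history, [21, 5]))  # -> disturbance (nearest past case decides)
```

The point of the sketch is only the mechanism: the model's "knowledge" is nothing more than the historical records it was given, which is also why biased records produce biased predictions.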

Predictive policing and crime pattern recognition

One prominent ML use case is predictive policing, in which advanced analytics and ML techniques analyze historical crime data to identify patterns and trends, helping police agencies optimize resource allocation and strategic decision-making. This can enable agencies to forecast crime hotspots, allocate personnel, plan patrol routes and prioritize response efforts based on data-driven insights. By leveraging these AI-driven predictive models, it is believed police agencies might even prevent some crimes before they occur. [1,2]
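To make the hotspot-forecasting idea concrete, the sketch below ranks map grid cells by historical incident frequency. The incident records and grid coordinates are invented; real predictive-policing models add temporal decay, spatial smoothing and many covariates on top of this naive counting baseline.

```python
from collections import Counter

# Hypothetical incident records: (grid_x, grid_y) map cells where past
# incidents occurred. In practice these would come from agency records.
incidents = [
    (2, 3), (2, 3), (2, 3), (5, 1), (5, 1), (0, 0), (2, 3), (5, 1), (7, 7),
]

def forecast_hotspots(incident_cells, top_n=2):
    """Rank grid cells by historical incident count (a naive baseline)."""
    counts = Counter(incident_cells)
    return [cell for cell, _ in counts.most_common(top_n)]

print(forecast_hotspots(incidents))  # most frequent cells first
```

Even this trivial version makes the core limitation visible: the forecast can only reflect where incidents were recorded, so any bias in historical enforcement data flows straight into the "hotspots."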

Predictive policing and crime pattern recognition have also come under scrutiny over their effectiveness, fairness and potential to reinforce existing biases within law enforcement practices. Critics argue that these algorithms do not always produce accurate or reliable predictions, and that they may perpetuate or exacerbate racial and socioeconomic disparities in policing outcomes because they often rely on historical crime data that can reflect biased policing practices and systemic inequalities. This has fueled skepticism about their effectiveness in reducing crime. [2,3]

Suspect identification

ML also plays a pivotal role in suspect identification, offering powerful tools for analyzing and processing vast amounts of data from diverse sources, including surveillance footage, witness testimonies and criminal databases, to identify patterns and correlations that may point to a suspect. [4]

Optical character recognition (OCR)

OCR involves the conversion of scanned images or handwritten documents into editable and searchable text. OCR algorithms analyze the visual patterns of characters and symbols in images, recognize them and convert them into machine-readable text. This technology is widely used in document digitization, automated data entry and text extraction from images. In policing, OCR can facilitate the digitization and analysis of paper-based documents, such as police reports, witness statements and identification documents, improving efficiency and information accessibility.
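The pattern-recognition core of OCR can be sketched with a toy example: matching tiny pixel grids against known character templates. Real OCR engines such as Tesseract use far richer features, segmentation and language models; the glyph templates below are invented purely to show the idea of mapping visual patterns to machine-readable text.

```python
# Toy OCR: match fixed 3x3 "pixel" glyphs (strings of 0s and 1s) against
# known character templates. Illustrative only; real engines are far richer.
TEMPLATES = {
    "I": ("010",
          "010",
          "010"),
    "L": ("100",
          "100",
          "111"),
    "O": ("111",
          "101",
          "111"),
}

def recognize_glyph(glyph):
    """Return the character whose template exactly matches the pixel block."""
    for char, template in TEMPLATES.items():
        if tuple(glyph) == template:
            return char
    return "?"  # unrecognized pattern

page = [("100", "100", "111"), ("111", "101", "111")]
print("".join(recognize_glyph(g) for g in page))  # -> LO
```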

Natural language processing (NLP)

NLP encompasses a broad set of techniques for processing and analyzing natural language data, including language models (LMs) and large language models (LLMs). NLP algorithms can perform tasks such as sentiment analysis, named entity recognition, language translation and text summarization.
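Of those tasks, sentiment analysis is the easiest to sketch: score a text by counting positive and negative words. The tiny lexicon below is invented for illustration; production NLP systems use trained models rather than hand-built word lists, but the word-scoring intuition is the same.

```python
# Minimal lexicon-based sentiment scorer. The word lists are illustrative;
# real systems learn sentiment from labeled training data.
POSITIVE = {"safe", "helpful", "thank", "good"}
NEGATIVE = {"threat", "attack", "angry", "bad"}

def sentiment(text):
    """Classify text as positive, negative or neutral by word counts."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("thank you this is good"))   # -> positive
print(sentiment("this is a threat"))         # -> negative
```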

Language models (LMs) and large language models (LLMs)

LMs are statistical models that predict the probability of a sequence of words or characters in a given context. These models are trained on large datasets of text and learn to generate coherent and contextually relevant output based on patterns and structures present in the training data. LMs can be employed by police organizations to streamline and enhance various aspects of their operations, including investigative processes and intelligence analysis. One example is the ability to summarize large volumes of police reports, witness statements and legal documents, extracting key information and identifying relevant details so the reader can quickly grasp the essential points without reading entire documents. [5]
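The "predict the probability of the next word" idea can be shown with a minimal bigram model: count which word follows which in a training corpus, then turn those counts into probabilities. The two-sentence corpus is invented; production LMs use neural networks trained on vastly larger datasets, but the statistical principle is the same.

```python
from collections import defaultdict, Counter

def train_bigram_model(corpus):
    """Count word-pair frequencies to estimate P(next word | current word)."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            model[current][nxt] += 1
    return model

def next_word_probability(model, word, candidate):
    """Maximum-likelihood estimate of P(candidate | word) from the counts."""
    counts = model[word]
    total = sum(counts.values())
    return counts[candidate] / total if total else 0.0

# Tiny illustrative corpus.
corpus = ["the suspect fled the scene", "the suspect was detained"]
model = train_bigram_model(corpus)
print(next_word_probability(model, "the", "suspect"))  # 2 of 3 follows -> 2/3
```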

LLMs are large-scale language models that learn to understand human language through training on massive datasets and feedback mechanisms. They typically involve more sophisticated algorithms and techniques for learning from linguistic input, drawing on a wide range of approaches, including deep learning, reinforcement learning and unsupervised learning, to understand, generate and process natural language in more complex ways.

For example, LLMs can assist police agencies with sentiment analysis and threat detection in online communication channels by analyzing the sentiments expressed in social media posts, forums and online discussions to identify and flag potential threats or indicators of criminal activity and help law enforcement agencies prioritize and investigate more efficiently. LLMs also empower police agencies to leverage this data for early detection and response to emerging threats, enhancing their overall situational awareness and ability to mitigate risks effectively.

Computer vision (CV)

CV encompasses techniques and algorithms designed to extract meaningful insights from images and videos, enabling machines to perceive, analyze and interpret visual data in a manner akin to human vision. By leveraging various image processing techniques, ML algorithms and deep neural networks, CV can perform a wide range of tasks, including facial and object recognition, image classification and augmented reality. [6,7]

Facial recognition (FR)

AI-powered FR systems have become increasingly prevalent in policing for suspect identification and public safety purposes. These systems typically utilize CV algorithms to analyze facial features and match them against databases of known individuals, aiding in the apprehension of suspects and enhancing surveillance capabilities. [8] FR can also be integrated into real-time surveillance systems to automatically alert authorities when a specific facial match is detected, enhancing proactive policing efforts and public safety measures.
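The matching step can be sketched as a similarity search over numeric "embeddings." Real FR systems map a face image to a vector with a deep network; the tiny hand-made vectors and record names below are purely hypothetical, and serve only to show how a probe is compared against a database with a confidence threshold.

```python
import math

# Hypothetical face embeddings; real systems derive these with a deep network.
DATABASE = {
    "record_001": [0.9, 0.1, 0.3],
    "record_002": [0.2, 0.8, 0.5],
}

def cosine(a, b):
    """Cosine similarity between two vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(probe, database, threshold=0.9):
    """Return the most similar database record, or None below the threshold."""
    name, score = max(((n, cosine(probe, v)) for n, v in database.items()),
                      key=lambda item: item[1])
    return name if score >= threshold else None

print(best_match([0.88, 0.12, 0.28], DATABASE))  # close to record_001
```

The threshold is the operationally important knob: setting it too low produces false matches, which is one reason agencies are urged to treat FR output as an investigative lead rather than a positive identification.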

FR in policing has also raised concerns regarding privacy, civil liberties and the potential for misuse and abuse. One prominent example is Clearview AI, a FR company that garnered widespread controversy for its expansive database of facial images scraped from social media platforms and other online sources. [9] Critics argue that the indiscriminate use of FR, exemplified by Clearview AI, poses serious risks to individual privacy and fundamental rights because the collection and storage of vast amounts of facial data without individuals’ consent could lead to unauthorized surveillance, tracking and profiling. [10] Moreover, early FR algorithms exhibited racial and gender biases, leading to disproportionately high error rates for people of color and women, [11] which triggered fears that other perceived disparities in policing would be exacerbated and undermine community trust, ultimately jeopardizing well-intentioned public safety objectives. [12]

However, since these initial missteps, responsible FR companies have trained their algorithms on demographically diverse datasets and routinely run multiple algorithms on each identification simultaneously, dramatically increasing the accuracy of positive matches. Most companies have also stopped collecting and ingesting photos from open sources, relying solely on CJIS-compliant databases provided by the client police agency to conduct FR comparisons.

The National Institute of Standards and Technology (NIST) also conducts free testing of facial recognition algorithms and publishes the results on its website. Before buying this technology, agencies considering facial recognition can and should require vendors either to have their algorithms tested by NIST or to provide evidence of existing published test results.

Object recognition (OR)

CV also uses a wide range of techniques and algorithms to detect and recognize objects of interest, regardless of their scale, orientation or background context. These algorithms typically leverage features such as color, texture, shape and spatial relationships to distinguish objects and assign them to predefined categories or classes. [6]

Police agencies can use OR for surveillance and forensic analysis by automatically detecting and identifying objects of interest in video footage or crime scene imagery. In a forensic investigation, for instance, OR can assist investigators in identifying and analyzing weapons, vehicles or other pertinent objects captured in surveillance videos or images, enabling police to quickly identify relevant evidence, track suspects and reconstruct crime scenes, thereby expediting investigations and aiding the apprehension of perpetrators. Like FR, OR can also be integrated into real-time surveillance systems to automatically alert authorities when specific objects, such as firearms or stolen vehicles, are detected.
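The alerting layer on top of a detector can be sketched in a few lines: filter the detector's output against a watchlist and a confidence threshold. The (label, confidence) pairs below are hypothetical stand-ins for what a trained object detector would emit per video frame.

```python
# Sketch of real-time alerting on object-detector output. Detections are
# hypothetical (label, confidence) pairs; a real system would get them
# from a trained neural-network detector running on each video frame.
WATCHLIST = {"firearm", "stolen_vehicle"}

def alerts(detections, watchlist=WATCHLIST, min_confidence=0.8):
    """Return watchlisted labels detected above the confidence threshold."""
    return [label for label, conf in detections
            if label in watchlist and conf >= min_confidence]

frame_detections = [("pedestrian", 0.95), ("firearm", 0.91), ("dog", 0.6)]
print(alerts(frame_detections))  # -> ['firearm']
```

The confidence threshold again embodies a policy choice: lower it and officers chase more false alarms; raise it and genuine detections are missed.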

What AI is not

AI can serve as a powerful tool to assist officers in their decision-making processes, but despite its vast capabilities, AI cannot replace human judgment and decision-making and should only be used to augment and enhance human expertise. [13] It’s important to recognize that AI by itself, while powerful, lacks the nuanced understanding and awareness inherent in human judgment, such as empathy, moral reasoning and situational awareness. Therefore, the use of AI should only be viewed as a complementary tool rather than a replacement for human judgment in policing.

While AI algorithms strive for objectivity, they are also susceptible to inheriting biases present in the data used for training. This can manifest in various forms, including racial, gender or socioeconomic biases that could be introduced into policing practices. [14] Thus, it is crucial for police agencies to implement rigorous processes for data collection, curation and validation as well as ongoing monitoring of AI algorithms to minimize bias and ensure accountability and fairness in decision-making processes. [3]

AI education in police agencies

Proper education and training within police organizations are essential to ensure the effective and responsible deployment of AI technologies in policing practices. AI education should cover the capabilities and limitations of AI, ethical considerations and a solid understanding of how AI technologies will be used. [13] Training programs should also provide insights into how AI learns from data, makes decisions and interacts with human users, enabling officers to comprehend the mechanisms driving AI-driven analytics tools and applications and to keep AI-informed decisions grounded in human judgment.

Lastly, education on the ethical implications of AI is crucial to ensure its responsible and accountable use in policing. Officers should be equipped with knowledge about potential biases, fairness considerations and privacy concerns that are associated with AI algorithms and systems as well as the importance of upholding legal standards, avoiding discriminatory practices and safeguarding individual rights and liberties when deploying AI within law enforcement contexts. [15]

Conclusion

AI holds immense potential to improve the delivery of police services, but a great deal of misinformation about it is circulating. Police leaders must therefore educate themselves to recognize its limitations and ensure AI is used to complement rather than replace human judgment in police agencies. It is critical that all stakeholders are adequately trained so that commonsense expectations can be set, laying the groundwork for clear, easily followed policies and practices that govern AI's use in an ethical, fair and just manner.

In the end, AI is here to stay, and policymakers, police agencies and technology companies must work collaboratively to establish clear guidelines, standards and safeguards that mitigate bias and protect individual rights. By fostering a culture of collaboration, ethical awareness and responsibility, police organizations can mitigate these risks, build trust with the communities they serve and deploy AI-driven strategies that are fair, equitable and enhance public safety.

References

1. Perry WL, McInnis B, Price CC, et al. Predictive policing: The role of crime forecasting in law enforcement operations. RAND Corporation. 2013.

2. Mohler GO, Short MB, Malinowski S, et al. Randomized controlled field trials of predictive policing. J Am Stat Assoc. 2016.

3. Lum K, Isaac W. To predict and serve? Significance. 2016.

4. Sinha A, Balasubramanian V, Sebe N. An overview of biometric recognition: Modalities, features, and challenges. ACM Comput Surv. 2019.

5. Allahyari M, Pouriyeh S, Assefi M, et al. Text summarization techniques: A brief survey. arXiv. 2017.

6. Russell S, Norvig P. Artificial intelligence: A modern approach. 3rd ed. Pearson; 2021.

7. Szeliski R. Computer vision: Algorithms and applications. Springer; 2010.

8. Introna LD, Nissenbaum H. Facial recognition technology: A survey of policy and implementation issues. Lancaster University Management School. 2010.

9. Cimpanu C. Clearview AI, the company that’s scraped billions of photos from social media, is now working with ICE. The Record. 2020.

10. Wired. Clearview AI says its facial recognition software has seen a spike in use. 2020.

11. Buolamwini J, Gebru T. Gender shades: Intersectional accuracy disparities in commercial gender classification. Conference on Fairness, Accountability, and Transparency. 2018.

12. Garvie C, Bedoya A, Frankle J. The perpetual line-up: Unregulated police face recognition in America. Georgetown Law Center on Privacy & Technology. 2016. Available from: www.perpetuallineup.org

13. Ferguson AG, Matheson DJ. Policing, technology, and civil rights. Annu Rev Criminol. 2021.

14. Corbett-Davies S, Pierson E, Feller A, et al. Algorithmic decision making and the cost of fairness. ACM SIGKDD Int Conf Knowl Discov Data Min. 2017.

15. Ensign D, Friedler SA, Neville S, et al. Runaway feedback loops in predictive policing. arXiv. 2017.

Dr. Lestrange is the Executive Vice President and Chief Strategy and Innovation Officer for METIS Intelligence, North America. METIS is a leading provider of AI-driven intelligence solutions to law enforcement, public safety and security agencies. He is also a founding Research Fellow at the newly launched Future Policing Institute's Center on Policing and Artificial Intelligence (COP-AI).



Dr. Joseph J. Lestrange served over three decades as a commissioned federal law enforcement officer, serving in multiple international, national, regional, and local leadership roles. In his last year of government service, Dr. Lestrange was appointed as Senior Agency Official to the U.S. Council on Transnational Organized Crime - Strategic Division, created by President Biden via Executive Order to develop “whole of government” solutions to complex public safety and national security challenges.



He retired in June 2022 as the Division Chief of Homeland Security Investigations (HSI) National Headquarters, Public Safety & National Security Division, where he provided executive oversight over all budget formulation, stakeholder engagement, resource development, strategic planning, and case coordination for multiple law enforcement interdiction, investigation and intelligence units, agency programs, federal task forces and inter-agency operations initiatives.



To prepare future leaders, Dr. Lestrange is a Course Developer and Adjunct Professor in Leadership, Organizational Theory and Design for Tiffin University's Ph.D. program in Global Leadership and Change, and an Adjunct Professor at Indiana Institute of Technology's College of Business and Continuing Professional Studies for MBA and undergraduate courses in Leading Strategy, Sustainability, Homeland Security and Emergency Management. He has supervised Ph.D. dissertations in the areas of police recruitment and retention, adaptive leadership and leading multigenerational workforces.