Key takeaways
- How AI helps police departments avoid overpolicing and underpolicing: Artificial intelligence enables smarter patrol by analyzing real-time crime data to guide officer deployment where it’s most needed.
- Why police still need human judgment in predictive policing: Even with advanced analytics, law enforcement officers must interpret AI insights to avoid blind enforcement and maintain community trust.
- The importance of transparency in AI-driven law enforcement: Public confidence in AI-assisted policing depends on clear policies, open communication and accountability for how technology is used.
- Using AI for crime prevention, not just enforcement: AI should guide holistic strategies that include mental health support, community outreach and early intervention—not just arrests.
- Achieving the “Goldilocks zone” in patrol planning: The right balance of police presence can reduce crime and build trust — AI combined with officer experience makes that possible.
By Lieutenant Joe Bailey
Policing in the United States is at a crossroads in how it deploys resources and responds to crime. Past practices have been criticized for both underpolicing and overpolicing, creating demand for smarter, data-driven solutions for crime prevention and reduction. Traditional approaches like area saturation and zero-tolerance enforcement have produced arrests and seizures of weapons and contraband, but at the cost of eroding public trust. Influenced by political and social pressure, some police departments have responded by reducing the level of proactive policing. This has contributed to rising crime rates and the perception that crime is rampant everywhere.
Whether or not this perception reflects reality, balancing proactive policing with community trust is key to ensuring that short-term victories in arrests aren’t outweighed by a loss of confidence in policing. Artificial intelligence (AI) has the potential to achieve this balance by integrating real-time data and predictive analytics to optimize police deployments. [1] AI can integrate data across multiple sources with geospatial mapping and real-time intelligence to improve how officers are deployed to prevent crime or respond more rapidly when it occurs. AI tools like facial recognition and crime pattern analysis are already being used to improve efficiency. [2] For this approach to be successful, agencies must ensure transparency in its implementation. They must address concerns about a disproportionate algorithmic impact beyond mere bias and maintain human oversight over police actions. The challenge is not just about adopting new technology — it’s ensuring it builds public trust and supports, rather than replaces, good policing judgment.
How we got here
To address contemporary policing challenges, we need to examine the successes and failures of past crime reduction strategies. In 1982, the broken windows theory was introduced, emphasizing that addressing small crimes helps prevent larger ones. [3] This thinking led to other proactive policing efforts like the Kansas City Gun Experiment, which took place over a 29-week period from July 1992 to January 1993 and reduced gun crime by 49% in a high-crime target area. [4] This marked a shift from traditional policing to data-validated strategies, reflecting the beginning of location-based crime mapping and intervention. [5]
However, applying these strategies indiscriminately led to unintended consequences. In New York City, police intervention was necessary to address rising crime, leading to the implementation of NYPD’s stop-and-frisk policy. [6] This was effective in uncovering contraband and illegal drugs but was widely criticized for violating civil rights and disproportionately targeting minority communities. Stop-and-frisk wasn’t data-driven, and while there’s a push to use data to justify police interventions, data-driven policing still has unresolved challenges.
Critics argue that data-driven policing can reinforce racial and economic disparities if bias is present in the enforcement data. [7] The Kansas City Gun Experiment’s saturation model was successful in reducing gun crime, but it did so by increasing police encounters with citizens who weren’t breaking the law — these contacts can create community hostility.
Public dissatisfaction with the police intensified during the COVID-19 pandemic and the nationwide protests following George Floyd’s death in 2020. In response, officer resignation rates rose, and officers shifted from proactive stops to primarily addressing reported crimes amid growing scrutiny of use-of-force incidents [8] — the telltale signs of de-policing. This hands-off approach led to higher crime rates and a sense that lawlessness was being ignored, contributing to a national surge in violent crime in 2020 and 2021. [9] By 2024, however, public sentiment had begun to shift again. A 2024 Harris Poll showed that 75% of Americans hold a favorable view of police, with demands for more intervention — but with clear conditions for fairness and accountability. [10] As these attitudes shift, the challenge is to deploy officers effectively while maintaining public trust.
| WATCH: Peter Moskos shares lessons from a policing revolution that cut murders by more than 1,500 in a decade
Where we are now and what’s missing
AI-assisted policing is emerging as a potential solution, with technology companies developing geospatial mapping and predictive analytics to identify crime patterns. Predictive policing, which uses data and statistical models to anticipate where crimes might occur or who might commit them, aims to allocate resources proactively. This differs from evidence-based policing, which analyzes past practices to apply proven strategies reactively.
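To make the distinction concrete, here is a minimal sketch of the place-based forecasting idea: score map grid cells by recency-weighted counts of past incidents, so recent crime pulls attention toward an area while older crime fades. The field names, grid size and decay half-life below are illustrative assumptions, not a description of any vendor's product.

```python
# Toy place-based forecast: rank grid cells by recency-weighted incident counts.
# All parameters (cell size, half-life) are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime
from collections import defaultdict

@dataclass
class Incident:
    lat: float
    lon: float
    when: datetime

def cell_of(lat: float, lon: float, cell_deg: float = 0.005) -> tuple:
    """Snap a coordinate to a grid cell roughly half a kilometer on a side."""
    return (round(lat / cell_deg), round(lon / cell_deg))

def hotspot_scores(incidents, now, half_life_days=30.0, cell_deg=0.005):
    """Recency-weighted incident count per cell: recent crimes count more."""
    scores = defaultdict(float)
    for inc in incidents:
        age_days = (now - inc.when).total_seconds() / 86400.0
        weight = 0.5 ** (age_days / half_life_days)  # exponential decay with age
        scores[cell_of(inc.lat, inc.lon, cell_deg)] += weight
    return dict(scores)

# Usage: rank cells and hand the top few to patrol planners for human review.
incidents = [Incident(38.5816, -121.4944, datetime(2024, 6, 1)),
             Incident(38.5820, -121.4950, datetime(2024, 6, 20))]
ranked = sorted(hotspot_scores(incidents, datetime(2024, 7, 1)).items(),
                key=lambda kv: kv[1], reverse=True)
print(ranked[:5])
```

A real system would add many more signals and validation, but even this toy version shows why the output is a forecast to be interpreted, not an instruction to act.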
In 2011, the LAPD implemented LASER, a program that used predictive analytics to pinpoint hot spots and chronic offenders. Though it relied on human analysts rather than fully autonomous AI, the program showed promising results. Even so, LASER was discontinued amid public backlash. [11]
While gathering data is a start, the real challenge lies in how police interpret and apply it. Some studies suggest predictive policing may merely shift crime to other areas rather than reduce it [12], requiring adaptable analytics to anticipate post-intervention crime patterns. Without specialized training or expert collaboration, police risk misinterpreting these patterns, leading to enforcement imbalances that weaken public trust. The public demands transparency and accountability. People want to know why officers are deployed in certain areas and how those decisions are made. [11]
The key challenge is achieving a “Goldilocks effect” in officer deployments — not too much, not too little, but just enough. Recent studies show that patrol planning based on AI analytics can enhance crime prevention by improving spatial coverage in areas most afflicted by crime. [13]
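One way to picture that balance is a toy allocation rule: spread a fixed budget of patrol hours in proportion to forecast risk, but cap any single beat to avoid saturation and set a floor so no beat is ignored. The beat names, cap and floor values below are assumptions for illustration only.

```python
# Hedged "Goldilocks" sketch: patrol hours proportional to forecast risk,
# with a cap (avoid overpolicing one beat) and a floor (avoid underpolicing).
def allocate_patrol_hours(risk_by_beat, total_hours, floor=1.0, cap=8.0):
    total_risk = sum(risk_by_beat.values()) or 1.0
    hours = {}
    for beat, risk in risk_by_beat.items():
        proportional = total_hours * risk / total_risk
        hours[beat] = min(cap, max(floor, proportional))
    # Note: hours cut by the cap are not redistributed here; a real planner
    # would reassign that leftover time and review the plan with supervisors.
    return hours

print(allocate_patrol_hours({"Beat 1": 12.0, "Beat 2": 3.0, "Beat 3": 0.5},
                            total_hours=24))
```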
Where we want to go
Police managers often specialize in traffic enforcement, investigations or tactical units — but not necessarily in data analysis or predictive policing. This raises an important question: how do we ensure police leaders make informed decisions based on AI insights? The key is integrating AI data in a way that tells the true story of what’s happening in an area to guide problem-solving, not just to increase enforcement.
As AI evolves and public trust grows, law enforcement must move beyond basic crime prediction and focus on understanding why crime occurs in the first place. The goal is to identify and break the patterns that lead to criminal activity before it happens.
Imagine Officer Curry, a contract-based officer, starting his shift guided by real-time data. AI doesn’t replace his decision-making — it enhances it. His patrol vehicle integrates crime data, weather conditions and socio-ecological factors to predict hotspots, ensuring he’s deployed where crime is likely. The vehicle scans license plates, surveillance footage and databases, refreshing continuously. At one point, AI presents him with two patrol routes. He chooses Route A over Route B based on his beat knowledge. Along the way, he spots a BOLO vehicle, makes the stop and arrests the suspect.
The goal isn’t to replace officers like Curry. It’s to give them tools to work smarter, prevent crime and deploy resources strategically and fairly.
Tools like Risk Terrain Modeling (RTM) forecast future crimes in high-risk areas. [14] In 2019, Kansas City PD implemented RTM, using factors like business type, occupancy and lighting to identify risks and mitigate them proactively. This led to a 22% reduction in violent gun crime. [15] By integrating AI with methods like RTM, law enforcement can detect crime triggers from varied sources — economic conditions, housing access and community dynamics. AI helps distinguish correlation from causation, enabling more effective intervention strategies.
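For readers unfamiliar with RTM, the underlying computation can be pictured as a weighted overlay of environmental risk-factor layers on a map grid. The sketch below is a toy version with invented factors and weights; it is not the validated model Kansas City used.

```python
# Illustrative risk-terrain-style scoring (not KCPD's actual model): each
# cell's risk is a weighted sum of indicator layers. Factors and weights
# here are assumptions; real RTM weights are estimated from local data.
RISK_WEIGHTS = {
    "poor_lighting": 2.0,
    "vacant_building": 3.0,
    "late_night_business": 1.5,
}

def risk_surface(cells):
    """cells: {cell_id: {factor_name: 0 or 1}} -> {cell_id: risk score}"""
    return {
        cell: sum(RISK_WEIGHTS.get(factor, 0.0) * present
                  for factor, present in factors.items())
        for cell, factors in cells.items()
    }

cells = {
    "A1": {"poor_lighting": 1, "vacant_building": 1},
    "A2": {"late_night_business": 1},
}
print(sorted(risk_surface(cells).items(), key=lambda kv: kv[1], reverse=True))
```

The value of this framing is that the highest-scoring cells point to conditions that can be fixed, such as lighting or vacant properties, not only to places where enforcement should increase.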
AI enhances policing by integrating vast real-time data, identifying patterns faster than humans and predicting where crime is likely. Future strategies should integrate law enforcement with social services, mental health professionals and community organizations to ensure enforcement isn’t the only response. [16] By using data-driven insights, agencies can place the right resources in the right place at the right time to prevent crime — not just respond to it.
This proactive approach mirrors how wildfires or contagions are managed: stop the spread before it escalates. By addressing root causes, AI-assisted policing can reduce crime without relying solely on traditional enforcement, creating a fairer, more sustainable model for public safety.
🗣️ Start a discussion
How should your agency define the “just right” level of police presence? Share examples of when proactive patrols helped or hurt trust in your community — and how AI could have changed the outcome.
How do we get there?
How do we strike the right balance between overpolicing and underpolicing? The answer lies in using AI-enhanced mapping systems that help law enforcement deploy resources strategically while avoiding excessive or insufficient police presence. AI technologies like facial recognition and predictive analytics offer promising gains in efficiency. [17]
Challenges remain, including algorithmic bias, lack of transparency, over-surveillance and overreliance on AI. Left unchecked, biased algorithms could worsen racial and socio-economic disparities. Transparency is critical: communities need to understand how AI influences police deployment and decision-making. Agencies will need to develop clear policies that minimize these risks and protect civil liberties while maximizing public safety.
As AI systems become more integrated, agencies must remain transparent and accountable, with strict ethical and legal safeguards to prevent misuse and maintain public trust. [18] This requires collaboration between law enforcement, policymakers, legal experts and civil rights organizations to develop standardized policies that ensure technology is used fairly and responsibly. Concerns about AI-guided enforcement — such as privacy and bias — must be addressed. [19] Publicly accessible guidelines should define how AI-driven tools are deployed, ensuring they enhance policing without leading to overreach or discrimination.
A balanced approach is crucial. AI should support — not replace — an officer’s discretion. Human oversight is essential to interpret data, generate insights and apply judgment in complex policing decisions. AI should serve as a tool to enhance situational awareness and resource allocation, not as an automated decision-maker.
💡 Training tip
Use the “Goldilocks principle” in scenario training: present officers with deployment plans that are too aggressive, too passive and just right. Debrief how AI-driven insights can support—not override—officer discretion in each case.
How to ensure responsible AI use
Implemented responsibly, AI-assisted policing could revolutionize law enforcement by optimizing officer deployment and crime prevention. To ensure responsible implementation, agencies must establish safeguards to detect and correct bias in deployment algorithms [20]. Communities should have access to public dashboards showing deployment patterns and their rationale. Officers must be trained annually on responsible AI use, including when to override recommendations to avoid blind reliance on technology.
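As one hedged example of such a safeguard, an agency could routinely compare each neighborhood's share of patrol hours with its share of demand, such as calls for service, and flag areas where deployment is far out of proportion for human review. The area names, counts and threshold below are illustrative assumptions.

```python
# Toy disparity check: flag areas whose share of patrol hours is far out of
# step with their share of demand (calls for service). The 1.5x threshold
# and the data are assumptions for illustration only.
def deployment_disparity(deploy_hours, calls_for_service, threshold=1.5):
    total_hours = sum(deploy_hours.values())
    total_calls = sum(calls_for_service.values())
    flags = {}
    for area in deploy_hours:
        hours_share = deploy_hours[area] / total_hours
        calls_share = calls_for_service.get(area, 0) / total_calls
        ratio = hours_share / calls_share if calls_share else float("inf")
        if ratio > threshold or ratio < 1 / threshold:
            flags[area] = round(ratio, 2)  # deployment out of step with demand
    return flags

print(deployment_disparity({"North": 40, "South": 10, "East": 50},
                           {"North": 30, "South": 35, "East": 35}))
```

Metrics like this could feed the public dashboards described above, giving community members a concrete view of where officers are sent and why.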
AI should not only be a tool for enforcement — it should support prevention. When AI identifies high-risk areas, the response must include community outreach, job programs and mental health support alongside police. A balanced AI-enhanced strategy can bridge the gap between data-driven policing and community solutions.
⚙️ Implementation checklist
- Establish clear, transparent policies for AI-assisted deployments
- Train all officers annually on interpreting and overriding AI recommendations
- Collaborate with legal and civil rights experts to evaluate bias risk
- Deploy AI alongside — not instead of — officer discretion and community input
- Monitor and publicly report performance metrics and community feedback
Conclusion
As we move forward, we must balance public safety with civil liberties. A key challenge lies in convincing the public that data-driven policing powered by machines can respect privacy — though guarantees may be difficult. These systems must be transparent, accountable and respectful of privacy.
The goal is to place officers where they are truly needed to prevent criminals from victimizing community members — without crossing into government overreach.
AI has the potential to shift policing from reactive enforcement to proactive crime prevention. This can be achieved if it is used responsibly, ethically and with full human oversight. Confidence in police has increased, but concerns about fairness and accountability remain. [21] Transparency and accountability must be at the core of AI-assisted policing to build public trust and ensure fair application.
By integrating AI as a supportive tool — not a decision-maker — law enforcement can create a future where policing is smarter, fairer and more effective. The cost of social unrest can have long-term economic consequences, making it essential that AI is implemented responsibly. [22] The key to success lies in striking the right balance between technology and human judgment — always ensuring AI enhances, not replaces, officer discretion and community engagement.
| WATCH: AI questions for police chiefs
References
- Adams IT, Mourtgos SM, Nix J. Turnover in large US policing agencies following the George Floyd protests. J Crim Justice. 2023;88:102105.
- Barrett P, Chen S. The economics of social unrest. International Monetary Fund. August 2021.
- Wilson JQ, Kelling GL. Broken windows: The police and neighborhood safety. The Atlantic Monthly. 1982;249(3):29–38.
- Sherman LW, Shaw JW, Rogan DP. The Kansas City Gun Experiment. National Institute of Justice. 1995.
- Hunt J. From crime mapping to crime forecasting: The evolution of place-based policing. NIJ J. 2019;281.
- La Vigne NG, Lachman P, Rao S, Matthews A. Stop and frisk: Balancing crime control with community relations. Office of Community Oriented Policing Services. 2014.
- Knox D, Mummolo J. Toward a general causal framework for the study of racial bias in policing. J Political Inst Polit Econ. 2020;1(1):1–38.
- Nix J, Huff J, Wolfe SE, Pyrooz DC, Mourtgos SM. When police pull back: Neighborhood-level effects of de-policing on violent and property crime, a research note. Criminology. 2024;62(1):156–171.
- Bhuiyan J. LAPD ended predictive policing programs amid public outcry. The Guardian. November 7, 2021.
- Lexipol. Recent polls on policing show positive trends for U.S. law enforcement. Lexipol. June 17, 2024.
- Editorial Team. How will artificial intelligence affect policing and law enforcement? Artificial Intelligence +. July 15, 2023.
- Eck JE, Linning SJ, Bowers K. Does crime in places stay in places? Evidence for crime radiation from three narrative reviews. Aggress Violent Behav. 2024;78:101955. doi:10.1016/j.avb.2024.101955
- Chainey SP, Matias JAS, Nunes Junior FCF, et al. Improving the creation of hot spot policing patrol routes: Comparing cognitive heuristic performance to an automated spatial computation approach. ISPRS Int J Geo-Inf. 2021;10(8):560. doi:10.3390/ijgi10080560
- Marchment Z, Gill P. Systematic review and meta-analysis of risk terrain modeling (RTM) as a spatial forecasting method. Crime Sci. 2021;10(12):1–12.
- Gutierrez A. New software to help KCPD identify crime-risk areas. KSHB. March 31, 2019.
- Lanni A. Community-based and restorative-justice interventions to reduce over-policing. Am J Law Equality. 2022;2(1). doi:10.1162/ajle_a_00040
- Rigano C. Using artificial intelligence to address criminal justice needs. NIJ J. 2019;280.
- Jenkins M, Shields C. Taking a principled approach to AI in policing. Police Chief Magazine. April 10, 2024.
- Ezzeddine Y, Bayerl PS, Gibson H. Citizen perspectives on necessary safeguards to the use of AI by law enforcement agencies. CENTRIC, Sheffield Hallam University. 2023.
- Jayawardana V, Wu C. Learning eco-driving strategies at signalized intersections. Massachusetts Institute of Technology. 2022.
- Ray J. Confidence in police rises, but world doesn’t feel safer. Gallup. October 31, 2023.
- Lukens P. AI helps reveal hidden police bias and restore trust. Future Policing Institute. February 12, 2024.
About the author
Lieutenant Joe Bailey began his law enforcement career with the Sacramento Police Department in 2002 and currently serves in the Office of Specialized Services – Homeland Security Division. Over his career, he has held assignments in Major Crimes, the Training Division, Sacramento Police Academy, Field Training Unit, Patrol, Detectives and the Professional Standards Unit.
He holds a Bachelor of Science in Organizational Behavior from the University of San Francisco and a Master of Arts in Criminal Justice from American Military University. He is a graduate of the Sherman Block Supervisory Leadership Institute and has completed numerous advanced training programs in police leadership and management. He is married and the proud father of two sons.
This article is based on research conducted as part of the CA POST Command College. It is a futures study of a particular emerging issue of relevance to law enforcement. Its purpose is not to predict the future but rather to project a variety of possible scenarios useful for planning and action in anticipation of the emerging landscape facing policing organizations.
The article was created using the futures forecasting process of Command College and its outcomes. Managing the future means influencing it — creating, constraining and adapting to emerging trends and events in a way that optimizes the opportunities and minimizes the threats of relevance to the profession.