Trending Topics

Policing with a digital partner: Preparing law enforcement for the age of AI

From scene response to report writing, AI has the potential to assist patrol officers — if agencies build the right guardrails


This article is part of the Police1 Leadership Institute, an initiative focused on the challenges facing law enforcement leaders tasked with guiding their agencies through rapid operational and technological change. Throughout this series, Police1 will explore what AI adoption means for police leaders — not just in terms of tools, but in leadership responsibility. That includes evaluating emerging technologies, managing legal and ethical risk, leading organizational change and ensuring innovation strengthens public trust rather than undermines it.

Examples in this article are illustrative. Agencies should ensure any use of artificial intelligence aligns with their policies, training standards and legal requirements.

By Commander Edward Caliento

Responding to the scene of a warehouse burglary, a police officer with two years on patrol takes a deep breath. It has been a while since he dealt with a burglary, and he quickly thinks about points of entry and exit while trying to remember how to dust for fingerprints and swab for DNA. His memory is murky. Is it swab for DNA first? If he can remember how to dust for prints, how does he collect them? It has been a few years since he went through the academy and was taught the finer art of crime scene investigations.

Instead of panicking and calling a senior partner or his sergeant, he confidently relies on his digital partner, iPARTNER, a wearable assistant for law enforcement powered by artificial intelligence. The young officer wears a pair of smart glasses equipped with video and audio capabilities, as well as a wearable device that monitors his biometrics. He prompts the system by saying, “Hey, Partner,” and is answered by a lifelike voice asking him what assistance he needs.

The officer explains that he is investigating a commercial burglary with a possible entry point through a broken window. iPARTNER asks the officer to walk to the location where the suspect made entry and examine the area. The cameras on the smart glasses scan the scene. In a polite voice, iPARTNER tells him he should dust for fingerprints and swab for DNA evidence. The system also asks the officer if he needs help dusting for prints. He admits to his assistant that he has never tried to locate fingerprints at a crime scene outside of training.

iPARTNER prompts a quick tutorial video on dusting for fingerprints that appears on the left portion of the eyeglasses. Then, under the supervision of iPARTNER, the officer dusts for prints, scans the area and is told there are two usable prints. The system then guides the officer through the process of photographing and lifting the prints and politely reminds him to swab for DNA.

At the end of his shift, with his report mostly finished by his assistant, the officer feels relieved, knowing that his AI assistant has significantly reduced his workload and improved the quality of his work. He quickly reviews the report using his smart glasses and then instructs the system to submit the report directly to his supervisor’s AI assistant, SARG.

Is this a science fiction novel or a new movie being released? Is this even far-fetched in 2025? With companies rolling out translation devices and AI-assisted report writing on body-worn cameras, an easy-to-use “Siri for cops” may be right around the corner. [1] Police have long turned to technology to enhance their effectiveness. [2] The introduction of fingerprinting in the early 1900s and the establishment of crime laboratories in the 1920s significantly expanded investigative capacity. Similarly, the development of two-way radios and the widespread adoption of automobiles in the 1930s increased police productivity and improved response times. Nevertheless, technological progress in policing has often been slow and uneven. [2]

Law enforcement agencies face increasing demands, requiring officers to retain extensive knowledge of policies, procedures, case law and local ordinances. Nationwide struggles with recruitment and retention only intensify the challenge of maintaining a well-trained, responsive workforce. As public safety expectations grow, virtual law enforcement partners powered by artificial intelligence, wearable technologies and cloud-based systems offer a transformative solution to enhance officer efficiency and community safety.

AI-powered products to transform policing

Companies are already offering products based on generative artificial intelligence models, including dispatch assistance, voice transcription, translation and report-writing software. Newly released AI-powered report writing, described by agencies using the technology as a “game changer,” has the potential to save hours usually spent on paperwork, allowing officers to focus more on policing. [3]

Axon, now marketing an AI-powered report-writing system named Draft One, claims that the technology has “contributed to over 100,000 incident reports, saving officers 2.2 million minutes.” [4] According to the San Mateo Police Department, which is testing the system, “Draft One can save about 40% of the time used for writing reports.” [4] However, the implementation of AI and interconnected systems will not only require upfront investment but also create long-term financial obligations related to licensing, data storage, software updates and cybersecurity. The size of a municipality’s budget may make the divide between technology haves and have-nots a deciding factor in community safety in the age of AI.
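To put a reported figure like the 40% reduction in rough perspective, a back-of-the-envelope estimate can translate it into staff hours. Every input below is hypothetical for illustration only; none comes from the article or any agency, and real planning should substitute local data:

```python
# Hypothetical back-of-the-envelope estimate of report-writing time savings.
# All input values are illustrative assumptions, not agency data.
officers = 100                 # sworn officers writing reports
reports_per_officer_week = 10  # average reports per officer per week
minutes_per_report = 45        # average unassisted writing time per report
savings_rate = 0.40            # the ~40% reduction cited for Draft One

weekly_minutes_saved = (officers * reports_per_officer_week
                        * minutes_per_report * savings_rate)
annual_hours_saved = weekly_minutes_saved * 52 / 60

print(f"Weekly minutes saved: {weekly_minutes_saved:,.0f}")
print(f"Annual hours saved:   {annual_hours_saved:,.0f}")
```

Under these assumed numbers, the arithmetic yields roughly 15,600 hours a year, which an agency could weigh against licensing, storage and cybersecurity costs when building a business case.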

Science fiction author William Gibson said it best: “The future is already here — it’s just unevenly distributed.” For law enforcement, this has long been the reality. Across the country, some departments have the resources for new technologies while others lack the budget. New technologies, such as body-worn cameras, can be costly, as municipalities must budget not only for the hardware but also for data storage and additional staff to manage video data. [5] All this audio and video footage is public record and has driven an increase in public records requests, a trend expected to continue as more agencies add body-worn video. [6] These requests take time and can be costly for an agency, and under a 2020 California Supreme Court opinion, departments can recoup only the cost of duplicating the requested record; the public cannot be charged for other expenses, such as redaction. [7]

Although AI could help relieve some of this burden, an agency that cannot afford a robust system, or any system at all, may be left behind. Artificial intelligence can also add to the burden: AI systems can generate far more records requests than any person could in a comparable amount of time. [8] Moreover, although AI systems that help agencies process public records requests do exist, very few public agencies “have the funding, training or infrastructure to deploy it responsibly.” [8] These challenges underscore the importance of meticulous planning and strategic resource allocation in the adoption of AI in law enforcement.

Challenges in the cloud

The rapid evolution of evidence-gathering technology, from analog film to digital cameras to ubiquitous body-worn video, underscores the pace of change in policing. With the advent of new technologies such as wearable smart devices, artificial intelligence, robotics, electronic monitoring systems, cloud computing, dash and body cameras and software solutions, agencies must adapt hiring and training strategies to ensure new officers are equipped to navigate an increasingly digital environment.

However, these benefits depend on the ability to manage complex cloud infrastructures securely, a challenge that will intensify as the volume and value of data grow. [9] A further challenge concerns adaptation and implementation. Agencies that fail to acquire and integrate emerging technologies risk being marginalized as the field advances.

The adoption of body-worn cameras illustrates this trajectory. Initially regarded as peripheral, body-worn cameras have become an industry standard, with public expectations shifting to the extent that video evidence is now often viewed as indispensable. Significantly, however, the costs of implementation extended far beyond the initial purchase. The long-term requirements of data storage and management became significant and ongoing burdens. [10] The adoption of artificial intelligence is expected to follow a similar path. Early skepticism will likely yield to widespread reliance while simultaneously generating new and sustained demands on agency resources.

Without sustainable funding strategies, agencies risk adopting technologies they cannot maintain, leading to uneven service delivery and potential loss of community trust. Therefore, it is crucial for law enforcement agencies to adopt AI technologies responsibly, ensuring that they are used to enhance public safety and not to infringe on individual rights.

While these innovations offer efficiency gains, concerns remain about the accuracy and reliability of their use. Are officers and deputies currently utilizing AI to assist them in their duties? Are they using systems like ChatGPT to help author their reports? Are they using Google Gemini to determine which laws to enforce or to answer questions? The answer might surprise you. According to Microsoft’s 2024 Work Trend Index, “three out of four employees were using AI, and 78% of them were ‘bringing it from home,’ without waiting for corporate tools.” [11]

Shadow AI

People are utilizing what has been dubbed “shadow AI,” which IBM defines as “the unsanctioned use of any artificial intelligence (AI) tool or application by employees or end users without the formal approval or oversight of the information technology department.” [12] An example of shadow AI is an employee using tools like OpenAI’s ChatGPT to enhance productivity and streamline processes by automating tasks such as text editing or data analysis.

The use of these tools, often without the IT department’s knowledge, exposes a department or organization not only to data security risks but also to reputational risk. [12] An example of this issue has already surfaced in Washington state, where the King County Prosecutor’s Office sent an email to police chiefs in the county stating the prosecutor’s office “will not accept any police report narratives that have been produced with the assistance of AI,” after discovering that an AI report-writing system had added the name of an officer who was not at the scene. [13]

The prosecutor’s office also expressed concern about other systems, such as ChatGPT, which uses information entered to “learn and disseminate” and “are not Criminal Justice Information Services compliant,” as many portions of law enforcement work must remain private. [13] A recent survey also identified an “AI readiness gap,” finding that employees using AI most often are frequently those with the least training or guidance. [14] Within law enforcement, this gap raises questions about accuracy, admissibility of evidence and accountability in courtroom testimony. [3]

Policy and governance, which are crucial in addressing this gap, currently lag behind a workforce in which “over half of all employees admit to using unapproved tools, and they wouldn’t stop even if you banned them.” [11] Although law enforcement professionals experimenting independently with AI tools highlight their interest, it also reveals the risks associated with using unvetted or open-source platforms without proper oversight.

AI adding information that is untrue would not only be “devastating for the case,” but could also cause a law enforcement professional to be placed on the Brady list, [15] a database that tracks officers or deputies who make false statements or whose word cannot be trusted in court proceedings. [16]

Policy

Understanding the potential of artificial intelligence technology in law enforcement is important. While the cost-benefit analysis is challenging, the potential for increased efficiency and resource allocation is considerable. However, to maximize the return on investment, it is essential to establish clear policies and provide proper training. [10] An effective AI policy begins by clearly defining what qualifies as artificial intelligence and specifying which tools are covered, ensuring that all users understand the scope and limitations of its use. [17]

While AI can enhance efficiency and support public safety goals, its success depends on thoughtful planning, adequate resources and a realistic awareness of limitations. Importantly, some organizations may need to delay adoption until they have the capacity, training and safeguards necessary to use AI responsibly. [18] Agencies should maintain an approved list of vetted AI platforms to mitigate the risk of employees leveraging any AI platform they come across, while providing a transparent process for requesting review of new tools. [17]
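The approved-list process described above can be surprisingly lightweight in practice: a maintained allowlist that workflows consult before routing any data to an AI service, plus a transparent queue for proposing new tools. The sketch below is purely illustrative; the tool names and the review-queue mechanism are assumptions, not a real product or policy:

```python
# Minimal sketch of an approved-AI-tools check. Tool names and the
# review-request queue are illustrative placeholders, not real policy.
APPROVED_TOOLS = {"agency-report-assistant", "cjis-transcription"}
REVIEW_QUEUE: list[str] = []

def may_use(tool: str) -> bool:
    """Return True only if the tool has been vetted and approved."""
    return tool.lower() in APPROVED_TOOLS

def request_review(tool: str) -> None:
    """Transparent path for employees to propose a new tool for vetting."""
    name = tool.lower()
    if name not in APPROVED_TOOLS and name not in REVIEW_QUEUE:
        REVIEW_QUEUE.append(name)
```

The point of the design is that an unvetted tool is blocked by default but never a dead end: any employee can submit it for review, which keeps experimentation visible to IT rather than pushing it into shadow AI.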

This process should involve a thorough evaluation of the benefits and capabilities of AI systems while protecting individual rights, aligning with the agency’s goals and ensuring procedural fairness. [19] Policies should also give guidance not only on where an AI system can be used, but just as importantly, when systems should not be used. [20]

The policy should emphasize respect for privacy rights, ongoing monitoring for bias in employment-related uses and structured governance to oversee tool approval, compliance and legal updates. Ongoing collaboration with stakeholders, ensuring open conversations and information sharing, works toward identifying new opportunities while reducing risk. [19] Importantly, since AI is constantly evolving, the commitment to frequent updates reassures users of the policy’s adaptability. This requirement ensures that users review the most current version of the policy before using AI on any project, thereby maintaining its relevance and effectiveness. [17]

Training

When rolling out any new technology, thorough training is key to ensuring that procedures, laws and policies are followed. The use of AI without clear guidelines can introduce unintended consequences. [21] Once proper policies are in place, personnel should understand the guidelines and recognize where AI can be beneficial.

AI-powered technology can assist officers in improving workflows, reducing errors and lowering administrative burdens, allowing more time for community service. [22] There are many examples of how AI can help alleviate the burden on employees, such as analyzing CAD and RMS data, providing transcription and translation assistance and delivering real-time information to help law enforcement personnel make better-informed decisions. [22]

Each organization must assess its individual department and community needs and ensure that regular training in AI systems covers not just utilization, but understanding limitations, recognizing biases and protecting against overreliance on these programs. [20] Training employees on the appropriate use of AI, specifically when, where and how to apply it in daily work, is critical to successful integration. Providing clear guidance and ongoing instruction ensures that staff can utilize AI tools effectively and ethically, while involving end users in the implementation process can improve system efficiency and user trust. [20]

Funding

While looking to fund technological advancements, agencies should begin by creating a formal technology strategic plan that takes into account agency needs, user needs, community perception, funding and a plan for implementation and change management. [23] Given the challenges of cost, one solution is for law enforcement agencies to collaborate regionally and nationally to establish common standards and a larger pool of resources for artificial intelligence systems.

Utilizing joint efforts can reduce duplication with nearby agencies, demonstrate community support and strengthen purchasing power. [24] There are also state and federal grants, such as the State Homeland Security Grant Program through the Federal Emergency Management Agency, which can be used to address public safety risks for state, tribal and local law enforcement programs. [24] Although finding the right grant can be challenging, the federal government allocates billions of dollars each year through formula-based and competitive grants that can support law enforcement agencies. [24]

Partnering with the community, such as through a nonprofit foundation, can significantly aid in funding upgrades needed for new technologies. Community foundations can play a crucial role by facilitating donations and corporate sponsorships. They can also serve as a bridge between the community and the police, helping achieve stakeholder buy-in as well as financial support. This engagement benefits not only funding efforts but also trust and understanding between law enforcement agencies and the communities they serve.

Conclusion

With the increased use of artificial intelligence, organizations are entering a new normal. Microsoft’s 2025 Work Trend Index discusses a “frontier firm,” describing an organization designed around on-demand intelligence and powered by hybrid teams of humans and agents, allowing those companies to scale rapidly, operate with agility and generate value faster. [25] The future of policing may follow a similar path, with organizations built on seamless collaboration between people and machines.

Instead of replacing officers, AI partners will serve as constant sources of on-demand intelligence, while human judgment and leadership remain central. In this hybrid model, wearable assistants and cloud-based systems handle information at machine speed, freeing officers to focus on community engagement, decision-making and ethical leadership.

The result of adopting AI-augmented systems could be a policing blueprint that is more agile, adaptive and capable of delivering public value in ways traditional models cannot match. Establishing clear policies, designated funding streams, structured training and consistent use of new technologies is essential to ensure that implementation produces benefits for both the department and the community rather than undermining stakeholder trust.

References

  1. Jacobson N. How can voice assistant benefit law enforcement? CPI OpenFox. February 20, 2023.
  2. Burkhalter E. The evolution and development of police technology. National Institute of Justice. July 1, 1998.
  3. Elkins FC. Using AI to write police reports. Community Policing Dispatch. January 2025.
  4. Daniels C, Johnson T, Nejman A. Time saved or justice threatened? Police departments are using AI to write initial reports. KATV. July 11, 2025.
  5. Chauhan Y, Rodriguez E, Young G. Body cameras. Encyclopædia Britannica. May 8, 2025.
  6. Granicus. Five ways that body-worn camera footage is impacting public records requests. July 10, 2023.
  7. Kruger LR. National Lawyers Guild v. City of Hayward (S252445). Supreme Court of California. May 28, 2020.
  8. Staples D. Public records laws aren’t ready for the age of AI — and that’s a problem. Medium. June 27, 2025.
  9. Ashkenazi E. Observo AI, real-time data pipelines, and the future of the autonomous SOC. SentinelOne. September 12, 2025.
  10. Jin H. Analyzing the cost-effectiveness of police body cameras: A comprehensive review. BOBLOV. November 23, 2024.
  11. Dans E. What is “BYOAI” and why it’s a serious threat to your company. Fast Company. September 10, 2025.
  12. Krantz T, Jonker A, McGrath A. What is shadow AI? IBM. April 17, 2025.
  13. Clark D. Email to chiefs of police regarding AI-assisted report writing. King County Prosecuting Attorney’s Office. September 2024.
  14. Lichtenberg N. AI shame is running rampant in the corporate sector. Fortune. August 29, 2025.
  15. Harris M. The Brady list and its implications for law enforcement officers. Police1. 2024.
  16. Baumann C. Understanding Brady disclosures in policing. Lexipol. 2023.
  17. LexisNexis. 8 tips for creating a comprehensive “AI in the workplace” policy. February 21, 2025.
  18. Mello-Klein C. Law enforcement is learning how to use AI more ethically thanks to a Northeastern expert. Northeastern Global News. July 15, 2025.
  19. Council on Criminal Justice. The implications of AI for criminal justice. November 12, 2024.
  20. Bhadkamkar M. Ethical use of AI in policing: Balancing innovation and accountability. SoundThinking. May 21, 2025.
  21. PowerDMS. AI for public safety: 7 practical steps to get started today. August 8, 2025.
  22. Sims K. AI as a force multiplier for public safety. Wired. March 28, 2025.
  23. COPS Office. Law enforcement best practices. US Department of Justice. 2019.
  24. Wood A. Leveraging law enforcement grant funding. SoundThinking. February 13, 2025.
  25. Microsoft. 2025: The year the frontier firm is born. April 23, 2025.

About the author

Commander Edward Caliento began his career in law enforcement with the Ventura (California) Police Department in 2006. He has held a variety of assignments over the course of his career, including Major Crimes Detective, Crisis Negotiator on the SWAT Team, Academy Instructor, and Peer Support Team member. Before being promoted to Commander, he served as a Detective Sergeant in the Street Crimes Unit and as the Team Leader of the Crisis Negotiation Team.

Edward served in the United States Marine Corps and is a veteran of Desert Shield/Desert Storm. He has a bachelor’s degree in history from Southern Illinois University and a master’s degree in Law Enforcement and Public Safety Leadership from the University of San Diego. He is a graduate of the Sherman Block Institute Class 477 and is currently enrolled in Command College Class 75. He is assigned as a Watch Commander and oversees the Mounted Patrol, Department Fleet, Employee Recognition Committee, and the Peer Support Team.

This article is based on research conducted as a part of the CA POST Command College. It is a futures study of a particular emerging issue of relevance to law enforcement. Its purpose is not to predict the future; rather, to project a variety of possible scenarios useful for planning and action in anticipation of the emerging landscape facing policing organizations.



(Note: The contents of personal or first person essays reflect the views of the author and do not necessarily reflect the opinions of Police1 or its staff.)
