What Trump’s AI Action Plan means for policing and public safety

From deepfake detection to predictive analytics, Trump’s AI strategy outlines new challenges and capabilities for modern policing


Key takeaways

  • Prepare for synthetic media threats: AI-generated deepfakes pose real dangers to evidence integrity and judicial processes. Agencies must invest in detection tools, train personnel in digital forensics and advocate for updated rules of evidence to address authentication of AI-generated content.
  • Leverage AI for proactive, transparent policing: AI-powered crime prediction and real-time analytics offer valuable tools for preventing crime and optimizing resource deployment. However, transparency, fairness and privacy safeguards must be built into all AI-driven policing efforts.
  • Build infrastructure for secure digital evidence management: As synthetic media becomes more prevalent, departments need resilient systems for managing, storing and authenticating digital evidence — especially in cases involving manipulated or AI-generated content.
  • Develop internal AI and forensic expertise: Law enforcement agencies should invest in officer training focused on AI literacy and forensic science. This dual competency will be essential in identifying synthetic evidence, managing digital investigations and testifying in court.
  • Promote community trust through data-informed engagement: AI can enhance public safety by helping identify areas in need of outreach and intervention. Used responsibly, it enables departments to strengthen community engagement, improve transparency and co-create solutions with residents.

Artificial intelligence (AI) is rapidly redefining the technological and security landscape, giving rise to both unprecedented opportunities and complex risks. In response to these evolving dynamics, President Trump’s AI Action Plan takes a multifaceted approach to keeping the United States at the forefront of AI innovation while safeguarding national security interests and upholding the principles of justice and public trust. The plan also recognizes that AI — particularly the emergence of synthetic media — has transformative implications for policing, community safety and the legal system.

Pillars of President Trump’s AI Action Plan and their impact

At the core of the AI Action Plan are strategies designed to identify and mitigate risks associated with advanced AI technology, particularly those that threaten the nation’s critical infrastructure, economic stability and security environment. The plan calls for rigorous evaluation and monitoring of frontier AI systems, with special attention paid to vulnerabilities that could be exploited by adversarial actors. This includes the potential for backdoors in adversary-developed AI systems, malicious foreign influence campaigns and the broader state of international AI competition.

| RELATED: How deepfakes will challenge the future of digital evidence in law enforcement

Collaboration is central to the plan’s success. Federal agencies — including the National Institute of Standards and Technology (NIST), the Department of Commerce’s Center for AI Standards and Innovation (CAISI), the Department of Energy (DOE), the Department of Defense (DOD) and the Intelligence Community (IC) — are tasked with recruiting top-tier AI researchers and experts. By leveraging federal talent and fostering partnerships with research institutions and the private sector, the government aims to maintain a cutting-edge capacity to analyze, assess and respond to emerging AI risks.

A robust evaluation infrastructure is also envisioned — one that is continuously updated to reflect the latest developments and threats. CAISI, in collaboration with national security agencies and research institutions, will lead the effort to ensure ongoing, comprehensive national security-related AI evaluations.

Investing in biosecurity: Harnessing and managing AI’s power in biology

AI’s potential in biology is described as nearly limitless, with the promise of groundbreaking discoveries ranging from new medical cures to innovative industrial applications. However, the plan is acutely aware that these same technologies may open new avenues for malicious actors. In particular, AI could facilitate the synthesis of dangerous pathogens or harmful biomolecules, posing grave biosecurity threats.

To address these challenges, the plan calls for a layered and proactive approach:

  • Mandatory screening and verification: All institutions receiving federal funding for scientific research must use nucleic acid synthesis tools and providers that implement robust sequence screening and customer verification. This requirement will be enforced through formal mechanisms rather than voluntary compliance, leaving no room for lapses or exploitation by ill-intentioned entities.
  • Facilitating data sharing for security: Led by the Office of Science and Technology Policy (OSTP), the government will convene stakeholders from both public and private sectors to develop secure, effective methods for sharing data between synthesis providers. The goal is improved detection of fraudulent or malicious customers, further enhancing the safety net against biological threats.
  • Continued vigilance and adaptation: As tools, policies and enforcement mechanisms evolve, collaboration with international allies and partners is essential to encourage widespread adoption and strengthen global biosecurity.
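To make the screening requirement above concrete: at its simplest, sequence screening means checking an ordered nucleic acid sequence against a curated list of sequences of concern before fulfilling the order. The sketch below is purely illustrative and is not drawn from the plan — real providers use curated government-referenced databases and alignment tools, not the naive substring check and hypothetical motif shown here.

```python
# Illustrative sketch only: real sequence-of-concern screening relies on
# curated databases and sequence-alignment tools, not substring matching.
SEQUENCES_OF_CONCERN = {"ATGCGTACGT"}  # hypothetical flagged motif


def screen_order(sequence: str, flagged=SEQUENCES_OF_CONCERN) -> bool:
    """Return True if the ordered sequence contains any flagged motif."""
    seq = sequence.upper()  # normalize case before comparing
    return any(motif in seq for motif in flagged)
```

A provider would block or escalate any order for which a check like this returns True, pairing it with the customer-verification step the plan describes.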

Synthetic media threats: Safeguarding the legal and judicial system

A significant focus of President Trump’s AI Action Plan is the danger posed by synthetic media — especially sophisticated AI-generated “deepfakes.” These can take the form of audio, images or video that mimic real individuals and events so convincingly that they blur the line between truth and fabrication. In the legal system, such deepfakes pose substantial risks of misinformation, evidence tampering and erosion of judicial integrity.

To counteract these threats, the plan emphasizes the urgent need for:

  • Specialized detection tools and standards: The plan proposes advancing NIST’s “Guardians of Forensic Evidence” deepfake evaluation program into a national guideline. Establishing a voluntary forensic benchmark would provide agencies and courts with reliable tools to authenticate digital evidence and distinguish genuine material from synthetic forgeries.
  • Policy guidance and rule enhancement: Agencies involved in adjudications should adopt standards akin to the proposed Federal Rules of Evidence Rule 901(c), specifically addressing the authentication of digital and synthetic evidence. This ensures the legal process is equipped to handle the unique challenges of AI-driven deception.
  • Active participation in legal standard-setting: The plan calls for submitting formal comments and recommendations to any proposed amendments to the Federal Rules of Evidence, ensuring new standards keep pace with technological advancements and evolving forensic practices.
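One building block underneath any digital-evidence authentication standard is cryptographic hashing: recording a file’s digest at intake so that later tampering (including substitution with synthetic media) is detectable. The sketch below is a minimal illustration of that general practice, not a tool or procedure named in the plan; the function names are hypothetical.

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_integrity(path: Path, recorded_digest: str) -> bool:
    """True only if the file still matches the digest recorded at intake."""
    return sha256_of(path) == recorded_digest
```

A matching digest shows the file is unchanged since intake; it does not, by itself, prove the recording is genuine rather than AI-generated — that is where detection tools and Rule 901(c)-style authentication standards come in.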

Implications for policing and community safety

AI is also reshaping law enforcement and public safety. President Trump’s plan supports police forces and community safety initiatives through:

  • Predictive analytics and crime prevention: Advanced AI systems enable real-time crime analysis and predictive policing. When implemented transparently and responsibly, these tools can help law enforcement anticipate and prevent criminal activity while safeguarding against potential abuses or privacy violations.
  • Infrastructure for secure evidence handling: The plan stresses the need for secure and resilient infrastructure capable of managing sensitive citizen data and digital evidence — especially in cases involving synthetic media manipulation or deepfakes.
  • Skill building and resource optimization: Law enforcement agencies are encouraged to develop expertise in both AI technologies and forensic science. This dual focus ensures officers and investigators are prepared to identify synthetic media, authenticate evidence and respond effectively in both investigative and courtroom contexts.
  • Enhanced community engagement: By harnessing AI-driven insights, police can allocate resources more efficiently and engage proactively with communities, building trust and addressing new risks posed by advances in synthetic media.
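At their core, many real-time crime analysis tools start from something simple: aggregating incident locations into grid cells and ranking the cells to guide resource deployment. The toy sketch below illustrates that idea only — the coordinates are fabricated, and production systems add time windows, statistical testing and the transparency safeguards discussed above.

```python
from collections import Counter

# Toy incident records as (latitude, longitude) pairs -- fabricated data
# for illustration, not real crime statistics.
incidents = [
    (41.88, -87.63), (41.88, -87.63), (41.89, -87.62),
    (41.88, -87.63), (41.90, -87.65), (41.89, -87.62),
]


def hotspot_counts(points, cell=0.01):
    """Bucket points into a coarse grid and count incidents per cell."""
    counts = Counter(
        (round(lat / cell) * cell, round(lon / cell) * cell)
        for lat, lon in points
    )
    return counts.most_common()  # cells ranked by incident count


ranking = hotspot_counts(incidents)
```

A ranking like this might inform where to direct patrols or outreach programs; responsible deployments pair it with human review and community input rather than treating the counts as directives.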

AI readiness checklist for police leaders

  • Implement protocols for detecting and handling synthetic media (audio, video, image)
  • Train personnel in digital forensics and authentication of AI-generated content
  • Audit current digital evidence storage systems for resilience and security gaps
  • Provide foundational AI and data literacy training for command staff and investigators
  • Assign personnel to monitor developments in AI, synthetic media and legal standards
  • Establish an internal working group or liaison for AI policy and technology review
  • Align evidence handling practices with evolving Federal Rules of Evidence
  • Engage with prosecutors and courts on standards for AI-influenced evidence
  • Prepare sworn staff to testify on digital evidence authentication
  • Evaluate current or potential use of predictive analytics or real-time crime tools
  • Ensure all deployments include transparency safeguards and community input
  • Monitor for bias or unintended consequences in AI-based decision-making
  • Use AI tools to identify areas needing outreach or intervention
  • Communicate clearly about how technology supports safety and fairness
  • Invite public input on policies governing AI use in law enforcement
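The bias-monitoring item in the checklist can start with a very simple audit: comparing the rate at which an AI tool flags individuals across demographic groups. The sketch below shows one common first-pass metric (a ratio of positive-flag rates, sometimes called a demographic parity ratio); it is an illustrative example, not a method prescribed by the plan, and real audits use richer fairness metrics and statistical tests.

```python
def flag_rate(decisions):
    """Fraction of cases flagged positive (1) in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)


def parity_ratio(group_a, group_b):
    """Ratio of positive-flag rates between two groups; 1.0 means parity.

    Values far from 1.0 suggest the tool flags one group
    disproportionately and warrant closer review.
    """
    return flag_rate(group_a) / flag_rate(group_b)
```

An agency working group could run a check like this on periodic samples of a tool’s outputs and escalate for review whenever the ratio drifts well away from 1.0.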

Challenges and opportunities ahead

While the Action Plan lays a solid foundation for AI-driven transformation, it recognizes the need to balance innovation with the protection of individual rights and civil liberties. Vigilance is required to prevent AI systems from reinforcing bias, and policies must evolve alongside technology to remain effective against ever-adapting threats.

| WATCH: Generative AI in law enforcement: Questions police chiefs need to answer

Conclusion

President Trump’s AI Action Plan represents a comprehensive strategy for leveraging AI’s benefits while protecting American society from its risks. With dedicated efforts to counter synthetic media threats, strengthen biosecurity, and equip the legal and policing systems for the digital era, the plan lays the groundwork for a future where the promise of AI is realized safely, justly and responsibly — ensuring national security, public safety and the enduring integrity of the justice system.

These forward-thinking strategies pave the way for new and innovative approaches to community policing. By integrating AI-driven analytics with traditional methods, law enforcement agencies can move beyond reactive responses and embrace proactive partnerships with the communities they serve. For example, predictive analytics can help identify areas that may benefit from additional outreach or intervention, allowing agencies to deploy resources and proactive programs where they are needed most — addressing issues before they escalate.

Moreover, AI-powered platforms can facilitate transparent communication between police and residents, offering real-time updates on incidents, safety initiatives and opportunities for public input. This fosters a collaborative environment where community members are empowered to participate in safety efforts, co-creating solutions that reflect local needs and values.

The fusion of technological innovation and community engagement not only boosts public trust but also opens doors to creative problem-solving. As law enforcement officers become adept in both digital forensics and AI, they can address emerging threats — such as synthetic media manipulation — with greater agility, while ensuring that policing remains fair, accountable and inclusive. In this new era, community policing evolves from a reactive model to a dynamic, data-informed partnership — one better equipped to meet the complexities of modern society.

Philip Lukens served as the Chief of Police in Alliance, Nebraska from December 2020 until his resignation in September 2023. He began his law enforcement career in Colorado in 1995. He is known for his innovative approach to policing. As a leading expert in AI, he has been instrumental in pioneering the use of artificial intelligence in tandem with community policing, significantly enhancing police operations and optimizing patrol methods.

His focus on data-driven strategies and community safety has led to significant reductions in crime rates and use of force. Under Lukens’ leadership, his agency received the Victims Services Award in 2022 from the International Association of Chiefs of Police. He is a member of the IACP-PPSEAI and Human Trafficking committees, PERF and NIJ LEADS, and is a Future Policing Institute Fellow. He holds a Bachelor of Science in Criminology from Colorado Technical University. He has also earned multiple certifications, including Northwestern School of Police Staff and Command, PERF’s Senior Management Institute for Police, FBI-LEEDA’s Supervisor Leadership Institute, and IACP’s Leadership in Police Organizations.

Connect on LinkedIn.