The following is excerpted from “26 on 2026: A police leadership playbook.” Download the complete playbook here.
As policing enters 2026, artificial intelligence will no longer be an emerging issue; it will be a defining leadership test. The question for police executives is no longer whether AI will shape the profession, but whether it will be shaped by law enforcement or for law enforcement. AI demands a deliberate, values-driven strategy built upon innovation, accountability and people-centered leadership rather than ad hoc adoption.
Drawing from my four-part series, “AI Leadership in Policing: Moving from Skepticism to Stewardship,” I urge police leaders to adopt four integrated pillars.
1. The summons to leadership
Begin by acknowledging skepticism and leading through it. Public concern and internal anxiety around AI are real and justified. Leaders must resist two extremes: the “wait-and-see” approach that cedes influence to others and the rush to deploy untested tools. Transparent communication, ethical framing and early engagement with officers and communities are essential to building trust before AI technology is deployed.
If police leaders fail to answer the summons to leadership, AI adoption will be shaped by vendors, courts, and public pressure, rather than by professional judgment and democratic policing values.
2. Beyond the black box: How police can co-design AI that works
Move beyond vendor promises and the “black box” through co-design partnerships. AI systems are most effective and defensible when police leaders, practitioners, technologists, legal advisors and community stakeholders help shape them from the outset. Agencies that partner with vendors early, through live pilots and proofs of concept (POCs), help define standards, safeguards and use cases.
Without co-design, agencies risk inheriting opaque systems, designed by others, that misalign with operational realities and community expectations, undermining both effectiveness and legitimacy.
3. Governing the machine: Building accountability into AI in policing
Take ownership of governance and accountability. AI governance cannot be delegated to vendors or buried in IT units. Chiefs must lead formal internal oversight structures, insist on human-in-the-loop review and kill switches, and require continuous validation through red-teaming, analytic audit cycles and bias testing so outputs can withstand legal scrutiny, operational realities and public oversight.
Absent strong governance, AI tools drift beyond policy and oversight, exposing agencies to legal risk, ethical failure, and irreversible loss of public trust.
4. The human element: Training and leading in the age of algorithmic policing
Invest in people, training and readiness. Every technological shift creates anxiety about time, workload and sustainability. Leaders must plan for training beyond initial vendor onboarding, embed AI into routine workflows and conduct AI readiness audits that assess adoption culture, policy, training capacity and long-term funding.
When training and readiness are neglected, AI becomes either unused or misused, amplifying officer frustration, operational inefficiency and community skepticism.
Conclusion
In 2026, effective AI leadership will not be measured by how advanced an agency’s technology is, but by how responsibly it is governed, how sustainably it is maintained, how well its people are prepared, and how legitimately it is received by the community it serves. The future of AI in policing belongs to leaders willing to answer the summons, moving from skepticism to stewardship, and taking responsibility for shaping what comes next.