The always-on video era and the new demands it places on police leadership

As video becomes constant and often external, police leadership is increasingly judged on how visibility, oversight and accountability are managed


Editor’s note: Police1’s Always on: Video Technology Week examines how constant, connected video is reshaping modern policing. Policing now operates in an always-on video environment where encounters are recorded from multiple angles and shared in near real time. This article introduces the always-on video reality and explains why managing expectations, timing and perception has become a core leadership responsibility. Thanks to our Video Technology Week sponsor, Motorola.

Policing has entered the era of the always-on video environment. That reality is bigger than body-worn cameras (BWCs). It includes dash cameras, fixed CCTV and business cameras, doorbells, bystander smartphones, livestreams, jail cameras and, increasingly, agency-released video clips on social media that become part of the public narrative in near real time. As a result, video now exists before, during and after incidents, and that changes how legitimacy is earned, how oversight works and how leaders make decisions under pressure. [1]

The shift isn’t just about managing the deployment, collection and release of camera footage; it’s about managing expectations. Communities increasingly expect encounters to be recorded, reviewed and explained. Officers increasingly expect that their actions will be evaluated frame by frame.

Supervisors increasingly need to balance performance management, accountability, labor protections and officer wellness in a recorded environment. PERF’s “a decade later” assessment captures the central tension: BWCs can improve accountability and evidence quality, but the outcomes depend heavily on policy, training, supervision and organizational culture, not the device itself. [1]

At the same time, leaders are now confronting additional proactive challenges that expand the always-on environment beyond BWCs to Real Time Crime Centers (RTCCs) and Drone as First Responder (DFR) programs.

Real Time Crime Centers (RTCCs): The operational “fusion layer” for 24/7 visibility
RTCCs are often designed as centralized operational hubs that integrate multiple technologies and data sources to deliver time-sensitive situational awareness and investigative support. [2,3] In practice, this commonly includes coordinated monitoring and retrieval of video feeds, integration of license plate reader (LPR/ALPR) data, and the ability to push intelligence quickly back to field units. [2] Over time, RTCCs can shift an agency from “video as evidence” to “video as live operational awareness,” which changes not just what leaders can know, but what others expect leaders to know. [2]

DFR: The live aerial layer that can arrive before officers

Separate from RTCC operations (even when supported by them), DFR programs add a live overhead perspective that can reach the scene first and shape tactical decision-making in real time. In DFR models, drones can stream video to support dispatch and responding officers, creating new benefits and new expectations about how quickly leaders can “know what happened” in the earliest stages of an incident. [4,5] At the same time, this capability can also create heightened public expectations that footage will be released quickly when an incident occurs.

AI-assisted Video Management Systems (VMS)

In other words, leaders are now managing technology, culture and public trust simultaneously. Increasingly, that leadership challenge includes governing how agencies use AI-assisted Video Management Systems (VMS), the platforms that store, organize, search and help triage large volumes of video across BWCs, fixed cameras, RTCC feeds and DFR footage.

As video volume grows, AI-enabled search, automated tagging, transcription, rapid redaction and event “flagging” can improve speed and consistency, but they also shape what supervisors and the public see first, and therefore what they believe happened. As with any new technology, today’s police leaders must treat these AI outputs as decision aids, not decisions, and must define and administer clear guardrails for how AI-generated results are to be reviewed, validated, documented and used.

How video, supported by connected data and automated tools, is shaping police response, reporting and post-incident review

Always-on video is an operational reality

Despite the many cameras under the police agency’s control, in many critical events, the first widely shared footage is not the agency’s. A clip from a bystander, a doorbell camera, or a livestream on social media can set public expectations long before investigators have collected some evidence, let alone all the evidence in a case.

The agency’s BWC footage might arrive later, in multiple angles, with different audio quality and gaps created by activation rules, buffering, distance and physiological effects on perception. That sequencing matters: leaders are not only managing incidents, but they are also managing the timeline of visibility.

RTCC capability changes this sequencing further by shaping what agencies can see and use in real time. The National Institute of Justice (NIJ) describes RTCCs as units that integrate multiple data sources and technologies to provide actionable intelligence and situational awareness in real time, which can influence both tactical response and subsequent review. [2] DFR capability changes the sequencing again by adding an aerial viewpoint that can arrive first. Chula Vista’s DFR model, for example, has been publicly described as dispatching drones to high-priority calls for service to provide an aerial view that can guide responding units. [4,5]

With real-time monitoring, agencies may have relevant video before the first unit arrives: an LPR hit tied to a hotlist, a fixed camera view of a scene, or a drone feed that provides an overhead perspective in the first moments of a call. [4]

The evidentiary upside is real. National research reviews and randomized trials have found that BWCs and other video collection have the potential to reduce complaints in some contexts and improve the fact-finding value of interactions, though results are mixed across jurisdictions and study designs. [6,7] The best-supported conclusion from systematic reviews is not that cameras alone will solve it, but that cameras paired with a strong implementation, training and audit plan can provide additional information and context that influences decision-making. [8]

Multiple sources, multiple stakeholders

In the always-on environment, footage is consumed by:

  • Prosecutors and defense counsel (discovery and trial strategy),
  • Internal affairs and administrative investigators (policy compliance),
  • Supervisors (coaching and performance management),
  • Media and community groups (legitimacy and transparency demands),
  • Unions and officers (officer safety, fair process and context),
  • City leaders and risk managers (liability exposure and reputational risk).

RTCC operations add another layer of stakeholders and interpretive pressure, because real-time monitoring often involves analysts, dispatch coordination, partnering agencies, and sometimes shared camera or LPR ecosystems. When RTCC workflows rely on integrated ALPR/LPR networks, for example, leaders must ensure clear governance about what triggers an alert, how long data is retained, who can query it, and how results are documented and audited, because these choices directly shape both effectiveness and perceived legitimacy. [2,9,10]

DFR programs add additional stakeholders and obligations as well: aviation oversight requirements, flight logging, video retention rules and public-records expectations that increasingly attach to drone video. [4,11]

That creates a leadership challenge where the same video can be interpreted differently depending on the viewer’s role, expertise and incentives. Leaders must design systems so that review is consistent, transparent, defensible, and fair, especially when the public expects quick answers despite the reality that investigations take time. [1]

AI-assisted VMS adds another layer because it doesn’t just store video; it prioritizes it through automated tagging, transcription, summarization and event detection. These capabilities can improve speed and consistency, but they also introduce a leadership risk.

The first AI-surfaced clip can become the “story” even when critical context sits elsewhere in the incident timeline. Police leaders must always require human review for AI-flagged outputs, define thresholds for alerts, and set documented standards for how AI-derived tags, transcripts, and summaries are used in supervision, investigations, and public communication.

Leadership implications of constant visibility

Always-on visibility reshapes leadership responsibilities across policy, supervision, training and communication:

  • Policy must match reality. If rules are unworkable in the field, officers will drift into informal norms, creating compliance gaps that later look like misconduct. [1]
  • Supervision becomes more formalized. Sergeants and lieutenants are no longer managing only what they see and hear; they’re managing what can be replayed.
  • Training must include “video literacy.” Leaders need reviewers, including modern AI-assisted platforms, that understand tactics, human perception and the limits of camera perspective, while still enforcing standards consistently. [1]
  • VMS brings a need for “AI literacy.” Supervisors should understand common failure modes (false positives/false negatives), camera-angle and lighting limitations, metadata errors, and what an AI confidence indicator (if provided) does and does not mean.
  • Leaders must also ensure training clarifies how AI suggestions are documented, when they can be relied upon, and when human judgment must override or expand the review.
  • Communication becomes part of operations. Incident command increasingly includes coordination for lawful, timely disclosure and public explanation.

RTCC and DFR programs expand these implications. Leaders must now ensure “video literacy” extends beyond BWCs to include overhead drone perspective, fixed-camera limitations, LPR confidence/error understanding, and the ways real-time monitoring can shape officer behavior and public perception. [2, 4]

They also have to manage an additional risk: when agencies have real-time capability, the public may assume real-time certainty, when, in reality, monitoring depends on staffing, camera placement, connectivity, and the limits of what video can show.

These pressures ultimately surface in policy — or in the absence of it.

Policy and oversight: Governing the real-time video ecosystem

As RTCC and DFR capabilities mature, “policy” cannot remain limited to BWC activation and retention. Leaders need a visible, enforceable governance structure that answers three questions clearly:

  • What is the mission, and what is out of scope?
  • What rules govern access, monitoring, retention and release?
  • What oversight exists to ensure the program does not drift into “shadow policy”?

AI-assisted VMS should be explicitly governed, not assumed. Policies should define:

  • Authorized use cases (search, redaction, QA audits, real-time alerts).
  • Prohibited uses (e.g., generalized monitoring without predicate, or unreviewed AI conclusions).
  • Human-in-the-loop requirements for any AI-generated tag, transcript, summary, or alert used for discipline, critical incidents, or public release decisions.
  • Audit logging and routine audits of both user access and AI-driven outputs.
  • Retention and chain-of-custody rules for AI-generated metadata.
  • Performance validation and periodic re-validation to detect model drift and unintended bias.

Where agencies publish public impact/use policies, leaders should consider disclosing whether and how AI is used to analyze video, so transparency keeps pace with capability. This matters for leadership because it forces agencies to define purpose, training, access, safeguards, and auditing in a public-facing way. [12,13]

A useful model for how to formalize that governance is New York City’s POST Act framework, which requires the NYPD to publish Impact and Use Policies for surveillance technologies. [12] For example, NYPD’s publicly posted Domain Awareness System (DAS) policy describes DAS as centralizing lawfully obtained data to support tactical and strategic decision-making, with access rights limited based on lawful duty and with safeguards/audit protocols described as risk mitigations. [13]

Chula Vista also offers a “how to do it right” example for DFR specifically through visible transparency practices. The city publicly describes its drone program mission as safe, responsible, and transparent, and frames deployment around priority calls for service. [14] The department has even made public-facing flight activity data available through a flight data portal, reinforcing a core legitimacy principle: if you operate “always-on” tools, you need “always-on” transparency mechanisms to the degree the law and operations allow. [15, 16]

Public inspector general and oversight reporting about POST Act compliance also underscores that transparency systems require ongoing discipline, internal coordination, and leadership attention to prevent governance from lagging behind innovation. [16,17] The point here is not that any one agency is perfect regarding oversight, but that oversight only works when it is real and continuous.

Policy and review expectations are struggling to keep up

A recurring theme across the BWC literature is the “implementation gap”: policies exist, but activation, documentation, tagging, retention and review practices vary by agency, division, unit, shift, supervisor and call type. Research on activation behavior shows that compliance is shaped by situational and individual factors, and that “should have recorded” does not always translate into “did record,” even when policy is detailed. [19,20]

This is not always defiance; often it is discretion shaped by policy ambiguity, rapidly evolving encounters, competing priorities (safety, radio traffic, coordination) and inconsistent supervisory expectations.

RTCCs can narrow or widen this gap depending on governance. When RTCC workflows pull together CCTV, LPR alerts, drone and other feeds, leaders must clarify how those streams are treated relative to BWC evidence: what must be preserved, what must be tagged, what enters the evidence system, what is intelligence-only, and how downstream disclosure and retention rules apply. [2,3] Without that clarity, agencies risk inconsistent handling across divisions, units and shifts, and inconsistent understanding by the public.

DFR programs create a parallel version of the implementation gap: a drone program can be well-designed on paper, but drift operationally unless dispatch criteria, flight rules, video retention practices, and review standards are consistently applied and audited. [4, 11, 14]

Inconsistent activation, review and accountability

IACP model policy guidance on BWCs emphasizes that agencies must be explicit about when recording is required, how to handle interruptions, and the supervisory responsibilities for compliance. [21] But even strong written rules can collapse if:

  • Supervisors don’t apply them consistently
  • Audits are rare or arbitrary
  • Discipline is unpredictable
  • Officers fear that minor mistakes will be treated as integrity failures

Evidence reviews and practitioner guidance emphasize that program outcomes depend heavily on implementation choices, especially training, supervision, and culture. [1, 8, 22]

RTCC integration raises parallel accountability questions:

  • What is the agency’s policy on general monitoring of hot spots, and under what circumstances is it authorized?
  • Who is responsible for direct camera monitoring during a call for service?
  • How is search and query activity logged, how long are LPR hits retained, and how does the agency prevent “shadow policy” where real-time tools are used more broadly than the public has been told? [2, 10, 12, 13]

DFR programs also raise a distinct set of accountability questions:

  • When drone deployment is permitted, who may authorize launches?
  • How is footage retained and for how long?
  • How is drone video handled when it is not directly tied to an active investigation? [4, 11]

The more powerful the capability, the more important it becomes that auditing and documentation are routine, not exceptional.

Coming next: How constant visibility reshapes supervision, coaching and discipline — and why consistent video review practices are essential to officer trust and organizational legitimacy.

References

  1. Police Executive Research Forum. Body-worn cameras a decade later: what we know. Washington, DC: Police Executive Research Forum; 2023.
  2. National Institute of Justice. Real-time crime centers: integrating technology to enhance public safety. April 2025.
  3. Bureau of Justice Assistance. Real time crime center information. Washington, DC: U.S. Department of Justice; February 24, 2021.
  4. Office of Community Oriented Policing Services, CNA. Addressing crime through innovative technology: Chula Vista Police Department’s unmanned aircraft system program. COPS-R1170; 2024.
  5. City of Chula Vista Police Department. UAS (drone) program.
  6. Ariel B, Farrar WA, Sutherland A. The effect of police body-worn cameras on use of force and citizens’ complaints against the police: a randomized controlled trial. J Crim Law Criminol. 2018;108(3).
  7. Yale Institution for Social and Policy Studies. Body-worn cameras: what the evidence tells us. New Haven, CT: Yale ISPS; 2017.
  8. Campbell Collaboration. Impacts of body-worn cameras in policing. Systematic review.
  9. Congressional Research Service. Automated license plate readers: policy and operational considerations. Washington, DC: Congressional Research Service; 2023.
  10. Brayne S, Gilbert DT. Predictive policing and the courts: privacy, data and (dis)efficiency. ScienceDirect. 2025.
  11. Castañares v. Superior Court, 96 Cal App 5th 596 (Cal Ct App 2023).
  12. New York City Police Department. POST Act: impact and use policies.
  13. New York City Police Department. Domain awareness system: NYPD impact and use policy. April 9, 2021.
  14. City of Chula Vista Police Department. Real-time crime center (RTCC) / real-time operations center (RTOC). Open data page.
  15. AirData UAV. Chula Vista Police drone program historical flight data.
  16. New York City Department of Investigation, Office of the Inspector General for the NYPD. An assessment of NYPD’s response to the POST Act. November 2022.
  17. New York City Department of Investigation, Office of the Inspector General for the NYPD. An assessment of NYPD’s compliance with the POST Act. December 2024.
  18. U.S. Department of Justice, Office of Community Oriented Policing Services. Implementing a body-worn camera program: recommendations and lessons learned. Washington, DC: Office of Community Oriented Policing Services; 2014.
  19. Oglesby-Neal S, et al. Body-worn camera activation in police-citizen encounters. Preprint; 2024.
  20. Cho S, et al. Activate compliance: a multilevel study of factors associated with body-worn camera activation. Police Q. 2021.
  21. Major Cities Chiefs Association. Body-worn camera review following critical incidents: policy recommendation. May 2024.
Dr. Joseph Lestrange is the CEO and Founder of VTP Leadership Solutions, a globally oriented consultancy committed to two core missions: helping law enforcement, public safety and national security organizations transform their stated values into consistent, real-world daily practices; and developing leaders at every stage — from emerging supervisors to seasoned executives — through education in value-based and adaptive leadership skills that are essential for navigating the complexities of 21st-century public service.

Previously, Dr. Lestrange served as the Executive Vice President and Chief Strategy and Innovation Officer for METIS Intelligence, North America where he led the development of AI-driven intelligence solutions for law enforcement, public safety, and security agencies. In this role, he also launched METIS Academy to demystify artificial intelligence to decision makers and provide a practical roadmap for responsibly integrating AI into daily operations.

Dr. Lestrange is also a founding Research Fellow at the Future Policing Institute’s Center on Policing and Artificial Intelligence (COP-AI) and serves as a Board Advisor to Crime Stoppers Global Solutions and a member of the Corporation Counsel for the National Police Athletic / Activities League.

Dr. Joseph J. Lestrange served over three decades as a commissioned federal law enforcement officer in multiple international, national, regional, and local leadership roles. In his last year of government service, Dr. Lestrange was appointed as Senior Agency Official to the U.S. Council on Transnational Organized Crime - Strategic Division, created by the President of the United States via Executive Order to develop “whole of government” solutions to complex public safety and national security challenges.

He retired from federal service in June 2022 as the Division Chief of the Public Safety & National Security Division at Homeland Security Investigations (HSI) Headquarters, where he provided executive oversight for strategic planning, budget formulation, stakeholder engagement, and resource development. In this role, he led multiple law enforcement intelligence, interdiction, and investigation units; oversaw agency programs, federal task forces, multi-agency operational centers; and directed case coordination initiatives across the globe.

To prepare future leaders, Dr. Lestrange is also a Course Developer and Adjunct Professor in Criminal Justice Management, Leadership Studies, Organizational Assessment and Design for Tiffin University’s doctoral programs in Criminal Justice, Global Leadership and Change Management; and an Adjunct Professor at Indiana Institute of Technology’s College of Business and Continuing Professional Studies for MBA and undergraduate courses in Strategy, Sustainability, Homeland Security, and Emergency Management. He has also supervised doctoral-level research and PhD dissertations in the areas of Police Recruitment & Retention, Adaptive Leadership, and Leading Multi-generational Workforces.

Passionate about the continued advancement of policing, he is a contributing author to Lexipol: Police 1, authored a blueprint titled “The Way Forward: A Bedrock (25-Point) Plan for Public Safety, Community Investment, and Criminal Justice Reform,” and will soon release a non-fiction book titled “The Next Watch: Four Guiding Leadership Principles for the Future of Policing.”
Mike Ricupero is a nationally recognized authority in real-time crime center operations, law enforcement technology integration and biometric strategy. Michael dedicated over 20 years to the New York City Police Department, where he rose to become the Commanding Officer of the NYPD’s Real-Time Crime Center (RTCC) — the first of its kind in the nation. Under his leadership, the RTCC became a model for investigative support, facial identification, data fusion and emergency response coordination. He played a pivotal role in launching and expanding the NYPD’s Facial Identification Section, helping to establish national standards for ethical and effective biometric use.

Today, Michael shares his expertise with agencies nationwide, helping them build, scale and optimize Real-Time Crime Centers. He serves as a board advisor to the National RTCC Association and is a sought-after speaker at law enforcement and technology conferences.

As Director of Law Enforcement Strategic Engagement at RapidSOS, he leads transformative initiatives that modernize public safety through advanced data platforms, artificial intelligence and situational awareness tools.