Policing has entered the era of the always-on video environment. That reality is bigger than body-worn cameras (BWCs). It includes dash cameras, fixed CCTV and business cameras, doorbells, bystander smartphones, livestreams, jail cameras and, increasingly, agency-released video clips on social media that become part of the public narrative in near real time. The result is that video now exists before, during and after incidents, and that changes how legitimacy is earned, how oversight works and how leaders make decisions under pressure. [1]
The shift isn’t just about managing the deployment, collection and release of camera footage; it’s about managing expectations. Communities increasingly expect encounters to be recorded, reviewed and explained. Officers increasingly expect that their actions will be evaluated frame by frame.
Supervisors increasingly need to balance performance management, accountability, labor protections and officer wellness in a recorded environment. PERF’s “a decade later” assessment captures the central tension: BWCs can improve accountability and evidence quality, but the outcomes depend heavily on policy, training, supervision and organizational culture, not the device itself. [1]
At the same time, leaders are confronting additional challenges from proactive technologies that expand the always-on environment beyond BWCs to Real Time Crime Centers (RTCCs) and Drone as First Responder (DFR) programs.
Real Time Crime Centers (RTCCs): The operational “fusion layer” for 24/7 visibility
RTCCs are often designed as centralized operational hubs that integrate multiple technologies and data sources to deliver time-sensitive situational awareness and investigative support. [2,3] In practice, this commonly includes coordinated monitoring and retrieval of video feeds, integration of license plate reader (LPR/ALPR) data, and the ability to push intelligence quickly back to field units. [2] Over time, RTCCs can shift an agency from “video as evidence” to “video as live operational awareness,” which changes not just what leaders can know, but what others expect leaders to know. [2]
DFR: The live aerial layer that can arrive before officers
Separate from RTCC operations (even when supported by them), DFR programs add a live overhead perspective that can reach the scene first and shape tactical decision-making in real time. In DFR models, drones can stream video to support dispatch and responding officers, creating new benefits and new expectations about how quickly leaders can “know what happened” in the earliest stages of an incident. [4,5] At the same time, this capability can heighten public expectations that footage will be released quickly when an incident occurs.
AI-assisted Video Management Systems (VMS)
Taken together, these capabilities mean leaders are now managing technology, culture and public trust simultaneously. Increasingly, that leadership challenge includes governing how agencies use AI-assisted Video Management Systems (VMS), the platforms that store, organize, search and help triage large volumes of video across BWCs, fixed cameras, RTCC feeds and DFR footage.
As video volume grows, AI-enabled search, automated tagging, transcription, rapid redaction and event “flagging” can improve speed and consistency, but they also shape what supervisors and the public see first, and therefore what they believe happened. As with any new technology, today’s police leaders must treat these AI outputs as decision aids, not decisions, and must define and administer clear guardrails for how AI-generated results are reviewed, validated, documented and used.
Always-on video is an operational reality
Despite the many cameras under the police agency’s control, in many critical events the first widely shared footage is not the agency’s. A clip from a bystander, a doorbell camera, or a livestream on social media can set public expectations long before investigators have collected even some of the evidence, let alone all of it.
The agency’s BWC footage might arrive later, from multiple angles, with different audio quality and with gaps created by activation rules, buffering, distance and physiological effects on perception. That sequencing matters: leaders are not only managing incidents; they are also managing the timeline of visibility.
RTCC capability changes this sequencing further by shaping what agencies can see and use in real time. The National Institute of Justice (NIJ) describes RTCCs as units that integrate multiple data sources and technologies to provide actionable intelligence and situational awareness in real time, which can influence both tactical response and subsequent review. [2] DFR capability changes the sequencing again by adding an aerial viewpoint that can arrive first. Chula Vista’s DFR model, for example, has been publicly described as dispatching drones to high-priority calls for service to provide an aerial view that can guide responding units. [4,5]
With real-time monitoring, agencies may have relevant video before the first unit arrives: an LPR hit tied to a hotlist, a fixed camera view of a scene, or a drone feed that provides an overhead perspective in the first moments of a call. [4] As noted above, that capability raises expectations on two fronts: how quickly leaders can know what happened, and how quickly footage will be released.
The evidentiary upside is real. National research reviews and randomized trials have found that BWCs and other video sources have the potential to reduce complaints in some contexts and improve the fact-finding value of interactions, though results are mixed across jurisdictions and study designs. [6,7] The best-supported conclusion from systematic reviews is not that cameras will solve the problem on their own, but that cameras paired with a strong implementation, training and audit plan can provide additional information and context that inform decision-making. [8]
Multiple sources, multiple stakeholders
In the always-on environment, footage is consumed by:
- Prosecutors and defense counsel (discovery and trial strategy),
- Internal affairs and administrative investigators (policy compliance),
- Supervisors (coaching and performance management),
- Media and community groups (legitimacy and transparency demands),
- Unions and officers (officer safety, fair process and context),
- City leaders and risk managers (liability exposure and reputational risk).
RTCC operations add another layer of stakeholders and interpretive pressure, because real-time monitoring often involves analysts, dispatch coordination, partnering agencies and sometimes shared camera or LPR ecosystems. When RTCC workflows rely on integrated ALPR/LPR networks, for example, leaders must ensure clear governance about what triggers an alert, how long data is retained, who can query it, and how results are documented and audited, because these choices directly shape both effectiveness and perceived legitimacy. [2, 9,10]
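To make the stakes of those choices concrete, the sketch below is a minimal illustration in Python, with hypothetical field and function names and no vendor API implied, of how an agency could require a documented predicate, enforce a retention cutoff and write an audit record for every LPR query. The specific fields matter less than the principle that every query carries a reason and leaves a reviewable trail.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)   # hypothetical retention period set by policy
AUDIT_LOG: list[dict] = []       # stand-in for a tamper-evident audit store

@dataclass
class LprRead:
    plate: str
    captured_at: datetime        # timezone-aware capture time
    camera_id: str

def query_lpr(reads: list[LprRead], plate: str, user: str, predicate: str) -> list[LprRead]:
    """Return matching reads only when a documented reason is supplied,
    drop reads older than the retention period, and log the query."""
    if not predicate.strip():
        raise PermissionError("Policy requires a documented reason (e.g., a case number) for every query.")
    cutoff = datetime.now(timezone.utc) - RETENTION
    hits = [r for r in reads if r.plate == plate and r.captured_at >= cutoff]
    AUDIT_LOG.append({
        "user": user,
        "plate_queried": plate,
        "predicate": predicate,
        "results_returned": len(hits),
        "queried_at": datetime.now(timezone.utc).isoformat(),
    })
    return hits
```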
DFR programs add additional stakeholders and obligations as well: aviation oversight requirements, flight logging, video retention rules and public-records expectations that increasingly attach to drone video. [4,11]
That creates a leadership challenge where the same video can be interpreted differently depending on the viewer’s role, expertise and incentives. Leaders must design systems so that review is consistent, transparent, defensible, and fair, especially when the public expects quick answers despite the reality that investigations take time. [1]
AI-assisted VMS adds another layer because it doesn’t just store video; it prioritizes it through automated tagging, transcription, summarization and event detection. These capabilities can improve speed and consistency, but they also introduce a leadership risk.
The first AI-surfaced clip can become the “story” even when critical context sits elsewhere in the incident timeline. Police leaders must always require human review for AI-flagged outputs, define thresholds for alerts, and set documented standards for how AI-derived tags, transcripts, and summaries are used in supervision, investigations, and public communication.
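As a simple illustration of what “decision aid, not decision” can look like in practice, the hedged sketch below (Python; the names are hypothetical and not tied to any product) keeps the AI-generated flag separate from the documented human review, so nothing derived from the model alone can drive a supervision, investigative or release decision.

```python
from dataclasses import dataclass
from typing import Optional

ALERT_THRESHOLD = 0.80   # hypothetical, policy-set score below which flags are not surfaced as alerts

@dataclass
class AiFlag:
    clip_id: str
    label: str                        # e.g., "possible use of force" as tagged by the VMS model
    confidence: float                 # a model score, not a statement of what happened
    reviewed_by: Optional[str] = None
    reviewer_concurs: Optional[bool] = None
    review_notes: str = ""

def surface_as_alert(flag: AiFlag) -> bool:
    """Only surface flags that meet the policy threshold; lower-scoring flags remain searchable but unalerted."""
    return flag.confidence >= ALERT_THRESHOLD

def record_human_review(flag: AiFlag, reviewer: str, concurs: bool, notes: str) -> AiFlag:
    """Document the human decision; the AI tag is an aid, the reviewer's finding is the record."""
    flag.reviewed_by = reviewer
    flag.reviewer_concurs = concurs
    flag.review_notes = notes
    return flag

def usable_for_decision(flag: AiFlag) -> bool:
    """A flag may inform supervision, investigation or release decisions only after documented human review."""
    return flag.reviewed_by is not None and flag.reviewer_concurs is not None
```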
Leadership implications of constant visibility
Always-on visibility reshapes leadership responsibilities across policy, supervision, training and communication:
- Policy must match reality. If rules are unworkable in the field, officers will drift into informal norms, creating compliance gaps that later look like misconduct. [1]
- Supervision becomes more formalized. Sergeants and lieutenants are no longer managing only what they see and hear; they are managing what can be replayed.
- Training must include “video literacy.” Leaders need reviewers, including modern AI-assisted platforms, that understand tactics, human perception and the limits of camera perspective, while still enforcing standards consistently. [1]
- Training must also build “AI literacy.” Supervisors should understand common failure modes (false positives and false negatives), camera-angle and lighting limitations, metadata errors, and what an AI confidence indicator, if one is provided, does and does not mean (a brief worked example follows this list).
- Leaders must also ensure training clarifies how AI suggestions are documented, when they can be relied upon, and when human judgment must override or expand the review.
- Communication becomes part of operations. Incident command increasingly includes coordination for lawful, timely disclosure and public explanation.
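The brief example below, which uses invented scores and outcomes, illustrates the point about confidence indicators: moving the alert threshold only trades false alerts against missed events; it does not establish what actually happened, which is why documented human review remains the record.

```python
# Hypothetical (model confidence, did a human reviewer ultimately confirm the event?) pairs.
flags = [
    (0.95, True), (0.90, False), (0.85, True), (0.70, True),
    (0.65, False), (0.55, False), (0.40, True), (0.30, False),
]

def alert_outcomes(threshold: float):
    """Count false alerts and missed events at a given alert threshold."""
    false_alerts = sum(1 for score, confirmed in flags if score >= threshold and not confirmed)
    missed_events = sum(1 for score, confirmed in flags if score < threshold and confirmed)
    return false_alerts, missed_events

for t in (0.5, 0.8):
    fa, me = alert_outcomes(t)
    print(f"threshold {t}: {fa} false alerts, {me} missed events")
```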
RTCC and DFR programs expand these implications. Leaders must now ensure “video literacy” extends beyond BWCs to include the overhead drone perspective, fixed-camera limitations, an understanding of LPR confidence and error rates, and the ways real-time monitoring can shape officer behavior and public perception. [2, 4]
They also have to manage an additional risk: when agencies have real-time capability, the public may assume real-time certainty, when, in reality, monitoring depends on staffing, camera placement, connectivity, and the limits of what video can show.
These pressures ultimately surface in policy — or in the absence of it.
Policy and oversight: Governing the real-time video ecosystem
As RTCC and DFR capabilities mature, “policy” cannot remain limited to BWC activation and retention. Leaders need a visible, enforceable governance structure that answers three questions clearly:
- What is the mission, and what is out of scope?
- What rules govern access, monitoring, retention and release?
- What oversight exists to ensure the program does not drift into “shadow policy”?
AI-assisted VMS should be explicitly governed, not assumed. Policies should define the elements below (a simple illustrative sketch follows the list):
- Authorized use cases (search, redaction, QA audits, real-time alerts).
- Prohibited uses (e.g., generalized monitoring without predicate, or unreviewed AI conclusions).
- Human-in-the-loop requirements for any AI-generated tag, transcript, summary, or alert used for discipline, critical incidents, or public release decisions.
- Audit logging and routine audits of both user access and AI-driven outputs.
- Retention and chain-of-custody rules for AI-generated metadata.
- Performance validation and periodic re-validation to detect model drift and unintended bias.
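One way to keep such a policy from drifting into informal practice is to express it as explicit, reviewable configuration that audits can be checked against. The sketch below is illustrative only (Python; the field names are not drawn from any statute or product) and simply shows the kind of artifact a governance policy can produce.

```python
# Illustrative AI-VMS governance expressed as explicit configuration (hypothetical fields).
AI_VMS_GOVERNANCE = {
    "authorized_uses": ["evidence_search", "redaction_assist", "qa_audit", "real_time_alert"],
    "prohibited_uses": ["generalized_monitoring_without_predicate", "unreviewed_ai_conclusions"],
    "human_review_required_for": ["discipline", "critical_incident", "public_release"],
    "audit": {"log_user_access": True, "log_ai_outputs": True, "review_interval_days": 90},
    "ai_metadata_retention_days": 365,
    "model_validation": {"initial_required": True, "revalidation_interval_days": 180},
}

def use_is_authorized(use_case: str) -> bool:
    """Check a proposed use against the published policy before work begins."""
    return (use_case in AI_VMS_GOVERNANCE["authorized_uses"]
            and use_case not in AI_VMS_GOVERNANCE["prohibited_uses"])
```

Routine audits can then compare actual system activity, user access logs and AI outputs against this published configuration, which is what keeps the written policy and field practice from drifting apart.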
Where agencies publish public impact/use policies, leaders should consider disclosing whether and how AI is used to analyze video, so transparency keeps pace with capability. This matters for leadership because it forces agencies to define purpose, training, access, safeguards, and auditing in a public-facing way. [12,13]
A useful model for how to formalize that governance is New York City’s POST Act framework, which requires the NYPD to publish Impact and Use Policies for surveillance technologies. [12] For example, NYPD’s publicly posted Domain Awareness System (DAS) policy describes DAS as centralizing lawfully obtained data to support tactical and strategic decision-making, with access rights limited based on lawful duty and with safeguards/audit protocols described as risk mitigations. [13]
Chula Vista also offers a “how to do it right” example for DFR specifically through visible transparency practices. The city publicly describes its drone program mission as safe, responsible, and transparent, and frames deployment around priority calls for service. [14] The department has even made public-facing flight activity data available through a flight data portal, reinforcing a core legitimacy principle: if you operate “always-on” tools, you need “always-on” transparency mechanisms to the degree the law and operations allow. [15, 16]
Public inspector general and oversight reporting about POST Act compliance also underscores that transparency systems require ongoing discipline, internal coordination, and leadership attention to prevent governance from lagging behind innovation. [17,18] The point here is not that any one agency is perfect regarding oversight, but that oversight only works when it is real and continuous.
Policy and review expectations are struggling to keep up
A recurring theme across the BWC literature is the “implementation gap”: policies exist, but activation, documentation, tagging, retention, and review practices vary by agency, division, unit, shift, supervisor, and call type. Research on activation behavior shows that compliance is shaped by situational and individual factors, and that “should have recorded” does not always translate into “did record,” even when policy is detailed. [19,20]
This is not always defiance; often it is discretion shaped by policy ambiguity, rapidly evolving encounters, competing priorities (safety, radio traffic, coordination), and inconsistent supervisory expectations.
RTCCs can narrow or widen this gap depending on governance. When RTCC workflows pull together CCTV, LPR alerts, drone feeds and other streams, leaders must clarify how those streams are treated relative to BWC evidence: what must be preserved, what must be tagged, what enters the evidence system, what is intelligence-only, and how downstream disclosure and retention rules apply. [2,3] Without that clarity, agencies risk inconsistent handling across divisions, units and shifts, and inconsistent understanding by the public.
DFR programs create a parallel version of the implementation gap: a drone program can be well-designed on paper, but drift operationally unless dispatch criteria, flight rules, video retention practices, and review standards are consistently applied and audited. [4, 11, 14]
Inconsistent activation, review and accountability
IACP model policy guidance on BWCs emphasizes that agencies must be explicit about when recording is required, how to handle interruptions, and the supervisory responsibilities for compliance. [21] But even strong written rules can collapse if:
- Supervisors don’t apply them consistently,
- Audits are rare or arbitrary,
- Discipline is unpredictable,
- Officers fear that minor mistakes will be treated as integrity failures.
Evidence reviews and practitioner guidance emphasize that program outcomes depend heavily on implementation choices, especially training, supervision, and culture. [1, 8, 22]
RTCC integration raises parallel accountability questions:
- What is the agency’s policy on general monitoring of hot spots, and under what circumstances is it authorized?
- Who is responsible for direct camera monitoring during a call for service?
- How is search and query activity logged, how long are LPR hits retained, and how does the agency prevent “shadow policy,” where real-time tools are used more broadly than the public has been told? [2, 10, 12, 13]
DFR programs also raise a distinct set of accountability questions:
- When is drone deployment permitted, and who may authorize launches?
- How is footage retained and for how long?
- How is drone video handled when it is not directly tied to an active investigation? [4, 11]
The more powerful the capability, the more important it becomes that auditing and documentation are routine, not exceptional.
Coming next: How constant visibility reshapes supervision, coaching and discipline — and why consistent video review practices are essential to officer trust and organizational legitimacy.
References
- Police Executive Research Forum. Body-worn cameras a decade later: what we know. Washington, DC: Police Executive Research Forum; 2023.
- National Institute of Justice. Real-time crime centers: integrating technology to enhance public safety. April 2025.
- Bureau of Justice Assistance. Real time crime center information. Washington, DC: U.S. Department of Justice; February 24, 2021.
- Office of Community Oriented Policing Services, CNA. Addressing crime through innovative technology: Chula Vista Police Department’s unmanned aircraft system program. COPS-R1170; 2024.
- City of Chula Vista Police Department. UAS (drone) program.
- Ariel B, Farrar WA, Sutherland A. The effect of police body-worn cameras on use of force and citizens’ complaints against the police: a randomized controlled trial. J Crim Law Criminol. 2018;108(3).
- Yale Institution for Social and Policy Studies. Body-worn cameras: what the evidence tells us. New Haven, CT: Yale ISPS; 2017.
- Campbell Collaboration. Impacts of body-worn cameras in policing. Systematic review.
- Congressional Research Service. Automated license plate readers: policy and operational considerations. Washington, DC: Congressional Research Service; 2023.
- Brayne S, Gilbert DT. Predictive policing and the courts: privacy, data and (dis)efficiency. ScienceDirect. 2025.
- Castañares v. Superior Court, 96 Cal App 5th 596 (Cal Ct App 2023).
- New York City Police Department. POST Act: impact and use policies.
- New York City Police Department. Domain awareness system: NYPD impact and use policy. April 9, 2021.
- City of Chula Vista Police Department. Real-time crime center (RTCC) / real-time operations center (RTOC). Open data page.
- AirData UAV. Chula Vista Police drone program historical flight data.
- New York City Department of Investigation, Office of the Inspector General for the NYPD. An assessment of NYPD’s response to the POST Act. November 2022.
- New York City Department of Investigation, Office of the Inspector General for the NYPD. An assessment of NYPD’s compliance with the POST Act. December 2024.
- U.S. Department of Justice, Office of Community Oriented Policing Services. Implementing a body-worn camera program: recommendations and lessons learned. Washington, DC: Office of Community Oriented Policing Services; 2014.
- Oglesby-Neal S, et al. Body-worn camera activation in police-citizen encounters. Preprint; 2024.
- Cho S, et al. Activate compliance: a multilevel study of factors associated with body-worn camera activation. Police Q. 2021.
- Major Cities Chiefs Association. Body-worn camera review following critical incidents: policy recommendation. May 2024.