By John Guilfoil, MA, APIO
On July 2, 2025, the Westbrook (Maine) Police Department acknowledged it had mistakenly posted an altered, AI-generated photo to Facebook following a drug seizure. An officer had used ChatGPT to add a department badge to the image, unaware the tool would modify other visual elements. The resulting distortions — including garbled text and missing objects — sparked public scrutiny. Initially denying AI involvement, the department later reversed course and issued a corrected post with the original photo, along with a commitment to halt AI use in its social media content.
The incident is a cautionary tale for public safety agencies adopting generative AI without proper oversight. As AI tools become more accessible, departments must move beyond informal use and establish ethical guardrails. The article below outlines why that’s essential — and how to do it — before a minor communication error becomes a major breach of public trust.
| RELATED: Maine PD apologizes after posting AI-edited drug bust photos on social media
The stakes of AI missteps in public safety communication
From spreadsheets to search engines, technology has long promised to make government and policing more efficient. Now, generative artificial intelligence (AI) is reshaping how we think about communication and customer service — and some are even discussing its use in emergency response. But as public agencies rush to adopt this latest technology, the stakes have never been higher.
Global investment in AI is expected to top $1 trillion by 2030. That scale makes it impossible to ignore, but for government, public safety and education leaders, this transformation brings not only possibilities but also profound risks.
This is why police departments, which have finally begun to fully incorporate media relations and social media policies into their SOPs, need to create governing documents for the use of artificial intelligence software, establishing a human firewall to oversee and limit its use, before it’s too late. Accreditation standards, for example, should include AI policy requirements.
At JGPR, a public relations and public information consultancy that serves more than 500 police departments in 17 states, our approach to AI has been cautious, limited and transparent. AI is never used as a client-facing or constituent-facing work product. However, the software can be used to analyze datasets, draw graphs, generate ideas for infographics, proofread content against Associated Press style and generate story ideas from source materials. In other words, we want AI talking to your PIO — but we always want your PIO talking to your residents, constituents and the news media directly.
| RELATED: Can AI fix 911’s biggest problems — or make them worse?
A new kind of risk
Public trust is the foundation of all police, fire and government communication. That trust erodes when people feel like they’re being spoken to by a machine, not a person. The frustration is obvious when a customer is stuck with a chatbot instead of an airline representative or big-box store associate. In government, that same disconnect can undermine confidence in public institutions, especially when constituents are seeking answers or services that directly impact their safety, well-being or quality of life.
There’s enormous potential for AI to improve workflows in government, but the moment a taxpayer believes they’re being misled or kept in the dark about who — or what — is communicating with them, trust is lost. And if this is ever done in a sneaky way — a way that doesn’t clearly disclose the use of AI — it can cause lasting reputational harm.
When crafting an AI ethics policy, keep a few clear ethical lines in the sand in mind, including these:
- AI must never be the sole author of constituent-facing communication
- AI must never be used to communicate with the public without human oversight
- Constituents must always be notified whenever AI is being used
Drafts, not decisions
In creating a sample “Statement of Principles” for public agencies, JGPR does not call for a ban on generative AI. In fact, the agency encourages the use of large language models (LLMs) like ChatGPT, Copilot and Gemini to generate “first drafts” of content, provided that a human being reviews and edits the output.
We should draw a hard line, however, between support tools and decision-makers. Artificial intelligence cannot and must not replace human-to-human communication or judgment in police, fire, EMS or government.
With that approach in mind, consider dividing your AI usage policies into “Must,” “May” and “Must Not/Must Never” categories, and then fill in the blanks. Here is a proposed set of AI usage criteria for police departments, offered freely for agency use or as a starting point to revise, expand or trim to fit your needs.
We must:
- Utilize the software with transparency. Our department must declare to our constituents when AI is being used, through a disclaimer statement, label or watermark
- Verify all claims, statistics and facts offered by AI. A police department always verifies information, and AI-generated content is no exception
- Be aware that the software is developed by private entities, including large for-profit corporations and foreign organizations. Utilizing one of these products comes with the understanding that it is programmed and trained by a third party that may not have the best interests of our constituents in mind. Data and responses given by the software will be shaped by the programming and training provided by its creator, and any data entered into the tool may be retained and used by the LLM and its provider. Do not enter confidential information into an AI
- Understand that reinforcement learning from human feedback (in effect, training the software through corrections) is important and useful for getting more effective results from these tools, but no amount of training or feedback will make an AI/LLM product provide perfect outputs every time
- Understand that no software product is “ready for live” right out of the box. We must commit to learning how to use the software, training it in the methods and priorities of our agency rather than just those of a third-party corporation, before deploying it in a live environment
- Consistently train ourselves and our staff on this ever-evolving technology. One-time training is not sufficient
- Assign responsibility to a human for all errors made by LLM software. The human user is the designated responsible party for all errors, corrections or omissions, and human officers and department civilian employees are held accountable for the work they produce using AI
We may:
- Utilize the software to create first-draft content. A first draft that is then edited, fact-checked and copyedited by a human employee makes LLM-generated content more suitable for release to constituents
- Instruct the software to analyze datasets. Survey results, financial reports and ballot reports create large datasets. The software may be used to analyze that data for the government user and to create data visualizations. We assume responsibility for the analysis, and it is incumbent upon us to verify and spot-check all claims made
We must never:
- Utilize the software to deliver communications directly to constituents. We must never post AI/LLM content directly onto websites, social media, newsletters or other platforms without human review, editing, fact-checking and approval. Human employees of the government agency must always be the last eyes on content before it is published
- Utilize the software in critical, real-time public safety interactions. AI/LLM systems must never substitute for trained professionals in situations where urgency, empathy and accountability are paramount. AI may assist behind the scenes — for example, analyzing data or offering internal prompts — but it should never serve as the face or voice of urgent public communication
- Utilize the software to communicate with unknowing constituents. The software may be offered as a fast “constituent services option” in lieu of waiting for a human response, but constituents must always know whether they are communicating with software or with a real government employee
- Utilize the software as a replacement for human writing in police, fire and EMS incident reports. Our courts of law are run by human beings at all levels. Reports and items that may end up as narrative evidence in court should be written by human beings only. In cases where AI/LLM software generates reports automatically for things like surveillance or body-worn cameras, a human being must review and sign their name to — and assume responsibility for — the content
- Violate the intellectual property of another party. Government communicators often create content that automatically enters the public domain. We have an obligation to safeguard the works of others, to respect their intellectual property rights and to avoid publishing LLM-generated materials that inadvertently violate copyright or trademark. Government violations of copyright and trademark can have cascading effects on the original rights holder, causing great damage. Take great care to check LLM-provided materials for copyright or intellectual property violations before publishing
A basic framework and policy is not an attempt to hold back the tide. Rather, it is a way to gently wade into a completely new world of technology-driven communications. There are already AI-powered tools being marketed to streamline constituent interactions, from answering routine questions to helping manage service requests. But AI is not a plug-and-play solution — and it may never deliver the accuracy, transparency and public trust required without ongoing human oversight and clearly defined ethical guardrails.
It is important to remember that, in policing, we are not making products or selling goods. We are in the business of public safety and security, and that is not something we can safely “beta test” in the real world.
About the author
John Guilfoil is a public information officer and founder of JGPR, a Boston-based communications agency that specializes in providing public relations, media relations, website design and crisis support to police and fire departments and municipal agencies. He teaches public relations and journalism at Northeastern University and Lasell University, and he is the author of “Public Relations: A Professional Approach.”
| WATCH: Generative AI in law enforcement: Questions police chiefs need to answer