Editor’s note: This article is part of Police1’s Emergency Communications Week, which looks at how dispatch is changing — from smarter tools and automated routine tasks to new approaches that reduce unnecessary 911 demand. Thanks to our Emergency Communications Week sponsor, Autura.
It is a little after 2:00 in the morning.
Someone is sitting in bed, phone in hand, typing into a chat window: “I do not want to be here tomorrow.”
The chatbot responds with kind words, a breathing exercise, maybe a list of crisis resources. From a product point of view, it did its job. From a public safety point of view, a different question starts to creep in: If the system can see this, and it recognizes the risk, why can it not do more than just talk back?
I have spent my career on both sides of that question. I have taken those calls as a dispatcher, and I have helped agencies roll out new technology in the PSAP. What follows is not a technical spec and not a sci-fi pitch. It is a reality check on where we are today and a look at what might actually work tomorrow.
Where we really are today
Right now, general chatbots like ChatGPT cannot:
- Call 911
- Text 911
- Silently notify your local PSAP based on what you type
There is no secret back channel from your chat to the dispatcher’s console. The only thing connecting the two is still you. If you are in danger, you are the one who has to dial or text.
On the vendor side, you are starting to see better crisis playbooks. Models are being trained to recognize self-harm language, respond with more consistent de-escalation, and offer one-click connections to hotlines and other human help. That is progress, and it matters.
It is still very different from a bot reaching out to 911 on your behalf.
On the PSAP side, AI is creeping in through a different door. It is being used to:
- Handle non-emergency calls
- Help train new call takers
- Summarize caller statements into CAD-friendly language
Those are practical use cases that give telecommunicators some breathing room. None of them involves a chatbot making outbound 911 calls based on a private conversation.
| RELATED: How AI is helping a Nevada police department reduce non-emergency dispatch calls
Why “just let the AI call 911” is not simple
From the outside, it sounds easy. If the bot thinks you are in trouble, have it place a VoIP call and read a short summary to the call taker.
Inside a PSAP, you know immediately what is wrong with that picture.
- Location: An emergency call is only as good as the location attached to it. A chat service often has no reliable, dispatchable address. IP lookups, VPNs, hotel Wi-Fi, airports, borrowed devices. None of that gives the kind of confidence you need when you are sending officers or medics somewhere.
- Routing: In the United States, that call has to land in the correct PSAP, not just any center in the state. Routing a voice call from a global chat service into the right little 911 center in the right county is not trivial.
- Callback and identity: A dispatcher needs a way to call back, confirm details and handle hang-ups. What does that look like when the “caller” is an account on a chat platform with a throwaway email and no phone number tied to it?
- False positives and swatting: We already have enough issues with people weaponizing 911. Now imagine an algorithm that occasionally misreads dark humor or song lyrics as a suicide note and autodials a police response to your house.
People ask, “Why can’t ChatGPT just call 911 if it thinks you are suicidal?” The honest answer is that you cannot bolt that on as an afterthought. You are talking about a mix of regulation, liability and real-world consequences that go way beyond prompt engineering.
That said, there are some plausible paths that do not require magic and that keep humans in charge.
| RELATED: Chatbots: The next technology revolution in 911 dispatch centers
Plausible solutions on the horizon
If we ever want chat-based systems to plug into emergency services in a meaningful way, I think the future looks less like “AI secretly calling the cops” and more like a set of very controlled, very explicit options.
Here are four that feel real enough to plan for:
1. Opt-in safety profiles that travel with the user
Think of a “safety profile” that you create once and allow trusted apps to use.
- You decide to enroll.
- You provide a verified phone number, home address and emergency contacts.
- You allow location sharing when a crisis flag is raised.
When you chat with an AI — and it recognizes language that fits a high-risk pattern — it could say:
“I am worried about your safety. With your permission, I can connect you to a counselor or help you contact emergency services using the information in your safety profile.”
If you say yes, the system is not guessing. It already has a way to:
- Attach a real callback number
- Attach a dispatchable address or live location
- Pass along a short summary for the human on the other end
This is not science fiction. Pieces of this exist today in medical apps, smart devices and safety products. The missing piece is a standard way for public safety and large platforms to agree on what a “safety profile” looks like and how it is allowed to be used.
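To make that concrete, here is a minimal sketch of what a standardized safety profile might look like as a data structure. Every field name below is hypothetical; this is one possible shape, not an existing specification.

```typescript
// Hypothetical opt-in safety profile. Field names are illustrative only;
// no public-safety standard for this structure exists today.
interface SafetyProfile {
  userId: string;                      // platform account, verified at enrollment
  verifiedPhone: string;               // E.164 callback number, e.g. "+17025550123"
  dispatchableAddress: {
    street: string;
    unit?: string;
    city: string;
    state: string;
    postalCode: string;
  };
  emergencyContacts: {
    name: string;
    phone: string;
    relationship: string;
  }[];
  shareLiveLocationOnCrisis: boolean;  // consent the user granted and can revoke
  consentTimestamp: string;            // ISO 8601 time the user opted in
}
```

The point of a shared shape like this is that a crisis line or PSAP receiving it would know exactly which fields were verified at enrollment and which still need to be confirmed on the call.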
2. AI in front, human clinician in the middle, PSAP in the back
A more realistic near-term path does not put chat systems in direct contact with 911 at all. Instead, it plugs them into crisis professionals.
Picture this flow:
- The chat picks up clear self-harm or imminent danger language.
- It offers to connect the user to a live crisis counselor by phone or text.
- The counselor gets a short, machine-generated summary of the conversation so far and any safety profile data the user agreed to share.
- The counselor then decides whether to de-escalate through counseling, loop in a mobile crisis team, or call 911 in that jurisdiction.
In that model, AI does what it does best: spotting patterns, summarizing history and making the human faster. The human does what humans do best: making judgment calls and weighing the cost of involving law enforcement.
For the PSAP, nothing changes. A human calls in — with better context — and the dispatcher can ask their normal questions.
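As a rough sketch of that division of labor, the handoff logic might look something like this. The function names, types and the risk threshold are all invented for illustration; the load-bearing detail is that the model never dials anything.

```typescript
// Hypothetical escalation flow: the AI screens and summarizes,
// while a human crisis counselor makes every consequential decision.
// All names and the 0.7 threshold are invented for illustration.
type CounselorDecision = "deescalate" | "mobile_crisis_team" | "call_911";

interface CounselorSession {
  decide(): Promise<CounselorDecision>;
  callLocalPsap(): Promise<void>; // a normal voice call, placed by a human
}

declare function summarizeConversation(conversationId: string): Promise<string>;
declare function connectLiveCounselor(summary: string): Promise<CounselorSession>;

async function handleCrisisFlag(conversationId: string, riskScore: number) {
  // Below threshold: stay in-chat with de-escalation and hotline resources.
  if (riskScore < 0.7) return;

  // The AI's whole job: compress the context and hand it to a human fast.
  const summary = await summarizeConversation(conversationId);
  const counselor = await connectLiveCounselor(summary);

  // Judgment stays with the counselor, including whether police belong in this at all.
  if ((await counselor.decide()) === "call_911") {
    await counselor.callLocalPsap();
  }
}
```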
| RELATED: AI in law enforcement: Why police leaders need to understand and accept the challenges
3. A true NG911 “data lane” for third-party alerts
As more regions move to NG911, the network gains the ability to receive not only voice, but rich data. Most of the focus so far has been on things like:
- Pictures and video from callers
- Telematics from vehicles
- Data from alarm companies and other trusted partners
At some point, there will be pressure to add “digital wellness alerts” into that mix. When that happens, the public safety community should insist on a standard instead of a free-for-all.
Something like:
- A defined message type for “third-party AI risk alert”
- Mandatory fields for source, timestamp, location, confidence scores, and contact information
- Clear rules for which centers are willing to accept this traffic and at what priority
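As one illustration of what that standard could define, the alert payload might look like the sketch below. The field names are hypothetical, not drawn from any existing NENA specification.

```typescript
// Hypothetical "third-party AI risk alert" for an NG911 data lane.
// Field names are illustrative; no such standard exists yet.
interface ThirdPartyRiskAlert {
  alertType: "ai_risk_alert";     // the defined message type
  source: string;                 // originating platform, e.g. "chat-vendor.example"
  timestamp: string;              // ISO 8601 time the risk was flagged
  confidence: number;             // model risk score, 0.0 to 1.0
  location?: {
    latitude: number;
    longitude: number;
    uncertaintyMeters: number;    // how much trust to put in the fix
  };
  contact: {
    callbackPhone?: string;       // from an opt-in safety profile, if one exists
    platformHandle: string;       // fallback identity on the chat service
  };
  summary: string;                // short machine-generated context for the PSAP
}
```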
On the receiving end, PSAPs would not treat these as calls. They would arrive as data events inside CAD or a companion dashboard. Supervisors could decide whether, when and how to convert a digital alert into a traditional call for service.
If you want AI involved, this is probably the safest lane. It keeps the decision-making fully inside public safety, while allowing outside systems to raise a flag.
4. Platform-level “panic buttons” inside the chat experience
The simplest solution might also be the most effective.
Imagine if any time a conversation gets into crisis territory, the chat product prominently offers:
- A one-tap button to dial 911 or your country’s equivalent
- A one-tap button to dial or text a crisis line
- A one-tap button to share your screen with a trusted contact you have pre-chosen
Behind the scenes, the chat could help you prepare. It could:
- Prompt you to confirm your address
- Walk you through what to say when the dispatcher answers
- Stay on the screen while you talk, offering you small reminders like “Answer the next question with yes or no”
This is less sexy than “AI calls 911 for you,” but it is much easier to implement and avoids almost all of the routing and liability questions. It does not require the chat company to become a 911 provider. It simply becomes a better on-ramp to the emergency system we already have.
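As a sketch of that on-ramp, assuming standard tel: and sms: links plus an invented in-app share action, the options could be as simple as this:

```typescript
// Hypothetical crisis-intercept options a chat product could surface.
// The user performs every tap; the platform never dials on its own.
// The share-action deep link is invented for illustration.
interface CrisisOption {
  label: string;
  action: string; // deep link the tap triggers
}

function crisisOptions(localEmergencyNumber: string): CrisisOption[] {
  return [
    { label: "Call emergency services", action: `tel:${localEmergencyNumber}` },
    { label: "Call or text the crisis line", action: "sms:988" }, // U.S. 988 Lifeline
    { label: "Share this chat with a trusted contact", action: "app://share-with-contact" },
  ];
}
```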
| RELATED: Smart dispatching: How artificial intelligence is reshaping emergency response
What public safety leaders should be asking right now
Even if none of these options are live yet in your region, this is the moment for PSAP leaders and chiefs to start shaping the conversation.
A few questions I would want answered before signing onto any “AI can reach 911” integration:

- How does the system produce a verified, dispatchable location, and what happens when it cannot?
- How does an alert route to the correct PSAP instead of whatever center is easiest to reach?
- What is the callback path when the “caller” is an anonymous account with no phone number?
- What is the false-positive rate, and what stops the system from becoming a new swatting vector?
- Who carries the liability when an automated alert sends a response to the wrong door?

If vendors do not have thoughtful answers to those questions, they are not ready to insert themselves into your call queue.
My take, having sat in the chair
When I read about AI and crisis response, I always end up thinking about a very specific type of call.
You get the person who has finally worked up the courage to ask for help. They are scared, they are half checked out and they are not even sure what they want to happen. You can hear the breathing, the silences, the background noise. Fifteen seconds into that call, your instincts and your training kick in — and you start steering.
No large language model can replace that.
What AI can do is help more people make it to that moment intact. It can talk to them at 2:00 a.m. when their therapist is asleep. It can nudge them to call, teach them what to expect, and send them into your queue with better information and less panic.
I do not want a chatbot quietly dialing 911 behind someone’s back. I do want the tools we are building to make it easier for the right calls to reach the right humans at the right time.
That is the gap worth closing. Not “AI as dispatcher,” but AI as the bridge between a private crisis on a screen and the person wearing the headset who can actually do something about it.
NEXT: As law enforcement integrates AI, there are both benefits and risks. While AI can improve efficiency, it can also lead to privacy breaches, biased decisions and mishandling of sensitive data without proper safeguards. This video presents five questions police chiefs must address to ensure responsible AI use in law enforcement.