
Ethical considerations for using ChatGPT in law enforcement

Make sure you use AI tools ethically and within the law

Author’s note: This article was written by giving prompts to ChatGPT to get real AI responses to some difficult questions.

Artificial intelligence tools like ChatGPT and Google Bard could revolutionize how police operate and conduct investigations. According to recent computer science research, the computing power used to train state-of-the-art AI systems doubles roughly every six months. That far outpaces Moore’s Law, which holds that the number of transistors in an integrated circuit doubles roughly every two years.

That means the technology we thought was decades away is now at our doorstep. As police agencies, we must proactively ask how we can use AI ethically and within the boundaries of current state laws instead of waiting until it is too late.

In November 2022, OpenAI released ChatGPT, a powerful autoregressive language model chatbot. In February 2023, Google released Bard, its own interactive chatbot. These tools can be used to generate responses to some of the most challenging questions in policing, such as how to increase officer morale and build community trust. But the ease of use, the speed of the responses and a general lack of oversight can create problems for law enforcement.

Before you jump into using AI chatbots for police purposes, pause and consider the following:

1. AI chatbots find information through context, not just words

AI chatbots like ChatGPT quickly pull together information from open sources. The system also learns from previous questions and answers, which allows it to respond faster and more accurately the next time a similar question is asked. However, AI chatbots don’t just gather data; they analyze it contextually. That means the user needs to give the question context to get the best possible answer.
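
If your agency experiments with these tools programmatically, the difference context makes is easy to demonstrate. The sketch below is a minimal example, assuming the openai Python package and an API key; the model name and prompts are illustrative and not drawn from this article. It sends the same basic question once with no framing and once with department-specific context:

# A minimal sketch, assuming the current openai Python SDK and an
# OPENAI_API_KEY environment variable; model and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

# The same underlying question, asked without and then with context.
bare = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "How can we improve morale?"}],
)

framed = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are advising a mid-sized municipal police department."},
        {"role": "user",
         "content": "How can we improve patrol officer morale after a year of "
                    "high turnover and mandatory overtime?"},
    ],
)

print(bare.choices[0].message.content)
print(framed.choices[0].message.content)

In practice, the second prompt tends to draw a far more specific and useful answer, which is exactly the point about context.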

Most AI chatbots are designed to generate human-like responses to text prompts, with the goal of advancing natural language processing and machine learning. An unintended consequence is that users can now hold deeper, more realistic conversations with a computer instead of a person.

2. AI chatbots can be manipulated

This is where most of the ethical issues in policing arise. One of the most important things to remember is that chatbots can be manipulated: a user can change the context behind a question to steer the system toward the information they’re looking for. It’s the same idea as rephrasing a question to your kids until you get the answer you want, or as an officer intentionally asking leading questions to elicit a confession instead of trying to find the truth.

Publicly available chatbots are supposed to have protections in place to prevent the release of private information. I asked ChatGPT to provide the personal information of one of my friends, and it replied: “I’m sorry, but as an AI language model, I don’t have access to personal information of individuals unless it has been made publicly available through reliable sources. Therefore, I cannot answer your question.”

However, trained prompt engineers can gradually coach a chatbot into providing the information they seek by carefully choosing their words and phrases. Remember, AI chatbots respond to context, not just words, so a user can manipulate that context to coax the system toward information it is supposed to withhold. Tricking AI into surfacing private information is highly unethical and, in most cases, a violation of Fourth Amendment rights.

3. AI chatbots may be biased

Chatbots may be biased based on their design, development and training. AI learns from the data the trainers provide. If the data is biased, the chatbot will also be biased. Trainers of ChatGPT understand the potential for biased responses, which is why there is a clickable pop-up warning that says, “While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content.”

However, all humans have implicit biases. Therefore, the user must understand their biases before using AI for any police decision-making. ChatGPT also recommends that “police agencies take steps to ensure that the data is representative and not biased against any particular group.”

Police can largely avoid bias-based claims by focusing on criminal behavior, not on demographics such as race, religion or sex. Agencies can further mitigate claims by tracking AI conversations and periodically auditing them for biased language.
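
What such an audit might look like in practice is simple to sketch. The example below is hypothetical; the log layout, file name and flagged-term list are assumptions for illustration, not agency standards. It scans stored chatbot conversations and surfaces any that mention demographic language for human review:

# A minimal sketch of auditing logged AI conversations for biased language.
# The CSV layout, file name and flagged terms are hypothetical examples.
import csv

FLAGGED_TERMS = ["race", "religion", "ethnicity", "national origin", "gender"]

def audit_conversations(log_path):
    """Return logged rows whose prompt or response mentions a flagged term."""
    flagged = []
    with open(log_path, newline="", encoding="utf-8") as f:
        # Expected columns: timestamp, user, prompt, response
        for row in csv.DictReader(f):
            text = (row["prompt"] + " " + row["response"]).lower()
            if any(term in text for term in FLAGGED_TERMS):
                flagged.append(row)
    return flagged

for row in audit_conversations("ai_conversation_log.csv"):
    print(row["timestamp"], row["user"], "-- flag for supervisor review")

Keyword matching alone cannot judge whether a flagged exchange was actually biased, so a check like this only works when it is paired with periodic human review.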

4. The public demands AI chatbot transparency

One of the biggest concerns about the police application of AI is transparency. The public demands transparency in nearly everything police do, including when AI tools are used in investigations. Currently, no laws mandate the disclosure of AI use to the public, so each agency will have to determine the best approach for its community.

Note: While no laws regulate the use of AI chatbots in police work, some cities have passed laws and ordinances that regulate or ban specific AI tools like facial recognition. Always follow your city’s laws and regulations.

5. AI chatbots are not secure

Publicly available AI chatbots are currently not secure for sensitive information. ChatGPT and Google Bard have security and encryption built into their systems, but employees of the companies that run them can see and review what users do on the platform. In addition, user conversations are stored and reviewed to help train the AI. As an ethical user of the technology, be cognizant of the information you provide these chatbots, and avoid giving the system personally identifying information such as dates of birth, Social Security numbers or account numbers.
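
One practical safeguard is to strip obvious identifiers before a prompt ever leaves the agency. The sketch below is illustrative only; the regular expressions are simplistic and would miss many real-world formats. It shows the basic idea of redacting Social Security numbers, dates of birth and account numbers from text before it is sent to a chatbot:

# A minimal sketch of redacting obvious identifiers before text is sent to a
# third-party chatbot. The patterns are illustrative, not exhaustive.
import re

PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "ACCOUNT": re.compile(r"\b\d{10,16}\b"),
}

def redact(text):
    """Replace anything matching a known pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub("[REDACTED " + label + "]", text)
    return text

prompt = "Summarize: suspect DOB 04/12/1987, SSN 123-45-6789, account 4532015112830366."
print(redact(prompt))
# Summarize: suspect DOB [REDACTED DOB], SSN [REDACTED SSN], account [REDACTED ACCOUNT].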

Conclusion

I am a big supporter of the ethical use of AI in police applications. However, we must take the time to have those hard discussions on what ethical AI use in policing looks like. In closing, I asked ChatGPT if it is a good idea to use AI technology in policing and if AI is good or bad. Its reply:

“AI is not inherently good or evil. AI is a tool that can be used for both positive and negative purposes, depending on how it is designed and deployed. Just like any other tool, the ethical implication of AI use depends on how humans use it. It is important to develop and implement ethical guidelines and regulations to ensure that AI is used for the betterment of humanity and not to cause harm. As AI technology advances, it’s crucial to have ongoing discussions and debates about the ethical implications and potential consequences of its use.”

Let’s use AI ethically, morally and within the scope of the law to better humanity and improve policing.

Joshua Lee is an active-duty police sergeant for a municipal police department in Arizona. Before being promoted, Joshua served five years as a patrol officer and six years as a detective with the Organized Crime Section investigating civil asset forfeiture, white-collar financial crime, and cryptocurrency crimes.

Joshua is a money laundering investigations expert witness and consultant for banks, financial institutions, and accountants. He is also an artificial intelligence for government applications advisor and researcher.

Joshua holds a BA in Justice Studies, an MA in Legal Studies, and an MA in Professional Writing. He has earned some of law enforcement’s top certifications, including the ACFE’s Certified Fraud Examiner (CFE), ACAMS’ Certified Anti-Money Laundering Specialist (CAMS) and the IAFC’s Certified Cyber Crimes Investigator (CCCI).

Joshua is an adjunct professor at a large national university and a smaller regional college, teaching law, criminal justice, government, technology, writing and English courses.