AI-Enhanced Decision-Making

With red and blue lights flashing, sirens blaring, and tires screeching, a patrol officer responds to a report of a shirtless man covered in blood, brandishing a knife on a public beach, and screaming that he wants to kill people. As the rookie officer drives, she asks her partner about the suspect’s history on calls with the police and about a strategy to de-escalate. Her partner, a Police Artificial Intelligence Assistant (P.A.I.T.), scans department reports and open-source web information to analyze the variables associated with successful outcomes, not only in prior encounters with this individual but also in similar incidents. P.A.I.T. provides valuable insight, suggests where to park for the best approach based on what the drone is relaying, and summarizes the newest legislation on use of force involving edged weapons. P.A.I.T. also gives the officer several real-time pointers on successful de-escalation drawn from veteran officers. Is this straight out of a sci-fi movie? Not really. The path to officers communicating with digital partners to seek information and guidance is on the cusp of reality.

There are tremendous opportunities to incorporate artificial intelligence (AI) assistants to augment human decision-making and judgment and to develop calculated, evidence-based strategies that increase police effectiveness. This article provides insight into recent disruptive breakthroughs in AI technology, a glimpse into future applications by police agencies, and considerations for police executives preparing for this technological revolution.

Help Wanted: No Human Experience Required

AI has emerged as a transformative force across industries, including policing. Technological breakthroughs such as ChatGPT have demonstrated the value AI can bring to the workforce and the role it can play in augmenting human judgment and decision-making as a virtual assistant in the not-too-distant future. AI assistants will enhance officer and public safety and will also be a factor in resolving one of the most critical problems menacing police agencies: staffing shortages.

Catching Up To the Now

The staffing crisis police agencies face is a prevalent, persistent challenge.1 The declining interest in becoming a police officer is occurring at a time when public discontent has led to new legislation and calls to change policing in significant ways. How can agencies operate with fewer officers while making those officers “smarter” in real time? AI assistants can revolutionize policing by augmenting human judgment and decision-making, repurposing officer time for efficiency, capitalizing on institutional memory, and ensuring police agencies remain relevant in an era of rapid technological advancement. These technological changes, however, will present challenges and hurdles before AI enters the patrol car in place of a human partner. One of the biggest hurdles is how quickly it will be accepted by those in charge.

There are two categories of police executives: those who are aware ChatGPT is being used by some employees and those who aren’t. ChatGPT launched on November 30, 2022. Simply described, it is a chatbot that can carry on a conversation, generate original works like papers and poems, and answer questions in a text-to-text format. Upon asking ChatGPT what its name stands for, the chatbot responds, in part, “Chat Generative Pre-trained Transformer.”2 Furthermore, it can generate business ideas, translate and summarize text, and admit mistakes—an experience akin to having a staff of human assistants. In late 2023, the program progressed from keyboard-only interaction to voice capabilities.3 Users can now speak to the chatbot and receive spoken responses, a shift analogous to moving from typed emails to an interactive, spoken conversation.

There is no substitute for using the program to truly appreciate its current capabilities; only then can one begin to imagine what the future holds. It is not hard to see that ChatGPT has provided a foundation for creating an AI assistant that can match, or even exceed, a human as it analyzes data, forms responses, and provides insights almost instantly. This type of AI augmentation is already being used in the medical profession, which is on the cutting edge of integrating AI assistants that use natural language processing and machine learning to transcribe notes automatically during a patient’s interaction with their doctor.4

The rapid rate of change in AI, though, has leaders scrambling to keep up with the implications. In response to AI advancements, U.S. President Joseph Biden issued an Executive Order on October 30, 2023, to “ensure that America leads the way in seizing the promise and managing the risks of Artificial Intelligence (AI).”5 The White House described the order as the most sweeping action ever taken to protect people in the United States from the potential risks of AI. President Biden stated, “We’re going to see more technological change in the next 10, maybe the next five years, than we’ve seen in the last 50 years. The most consequential technology of our time, artificial intelligence, is accelerating that change.”6 The Executive Order, with its protective guardrails, signals that despite the exciting possibilities of AI, there are legitimate dangers and concerns.

Dangers of Implementing AI in Policing

TV shows from the 1960s like The Jetsons and Star Trek were filled with optimism and enthusiasm for future technologies. Movies such as The Matrix and Blade Runner, however, reflect more modern pessimism about the ways computers may seek to control us. There are significant areas of concern about the increasing reliance on AI and how to ensure the pace of change is commensurate with community expectations. This sentiment was captured well by data science professor Paul Pavlou, who wrote in 2018,

For the time being, appropriate IoT designs [AI and machine-learning] should maintain a reasonable level of human control and oversight and give mankind a chance to get acquainted with delegating control to machines.7

As AI becomes part of policing, it will understandably be questioned by the public, both on ethical grounds and over privacy concerns. The future capabilities of an AI assistant with access to criminal databases and facial recognition could result in the technology effectively generating its own reasonable suspicion, prompting a Terry stop conducted by human officers at the AI’s behest. These concerns might be allayed with added transparency about the ways the police are using AI, even as its use creates the ability to track ever more data and guide policing in new ways. A significant unresolved issue is how AI interprets the data it examines, especially whether it acts on embedded bias.

AB 331, a bill recently introduced in the California legislature, would require employers to disclose their use of automated decision-making tools such as AI, explain their purpose, and describe how they are being used. While bias in humans is widely recognized, the concept of bias in technology, specifically AI assistants, is not. In a 2022 interview, Google CEO Sundar Pichai stressed that biases have to be removed from AI training data and that developers of AI technology should be representative of a diverse society.8 In that interview, he stated,

We have to make sure it doesn’t have disparate or harmful effects on any particular group based on race, gender, caste and so on. So I think it’s really important as we are developing technology, particularly AI to involve outside groups, and researchers and to have the right regulatory frameworks to make sure we are developing this responsibly.9

Pichai’s emphasis on avoiding harmful effects on any particular group could not be more relevant to policing than it is today.

Harnessing the Unlimited Potential

Despite concerns surrounding the unknown dangers of AI, one of the most promising developments would be incorporating AI assistants to access and correlate information rapidly for officers on the street. Call history at a location, department policy, state law, case law, and the experiences of past officers are all examples. This capability puts a wealth of data at officers’ fingertips, enabling them to make more informed and timely decisions. With AI, officers will also spend less time on routine tasks like report writing, allowing them to focus on higher-value activities and increase operational efficiency. In this sense, AI is an answer to staffing challenges that does not require hiring additional people.

The AI assistant’s role can range from providing information to augmenting a decision—up to making a decision when an emergency demands it. How that balance is struck will have major implications for how policing is conducted. What has traditionally been a profession rooted in human decision-making, using instincts, experience, emotion, and risk analysis, may transition to one that uses technology—algorithms, statistics, databases, and predictive formulas—to detect crime and arrest perpetrators. This transition would effectively remove the aspect of the profession that is at once most revered and celebrated and most vilified and criticized: a human’s decision.

Get Started Today: Create a Solid Strategy to Implement AI in Your Agency

AI is here now and growing. The decision to incorporate it into police operations is therefore a matter of when, not if. There are key criteria to consider when developing a plan to implement AI in any department.

One of the most efficient ways to do this is to assemble a core group of agency leaders, AI experts, legal representatives, and community members who can all provide input. These stakeholders form a futures technology working group and should use their different experiences and perspectives to anticipate challenges and opportunities. Their discussions can help shape standard operating procedures and policy decisions and identify the best balance between AI and human augmentation for the unique communities served.

An example of successful advanced strategic collaboration of this type comes from Mountain View, California. In a 2023 interview with the author, Mountain View Police Department Captain Saul Jaeger noted that, when autonomous vehicle (AV) operations on city streets were in the early stages, the department held ongoing meetings with Google representatives. These meetings allowed the parties to share anticipated challenges from diverse professional perspectives and to mitigate safety issues for the public. They also aligned a mutual interest in public safety, including establishing points of contact for emergency situations should they arise. The police also raised concerns regarding potential crime scenarios involving AVs and the ways they could be mitigated.10 When policing professionals share these unique perspectives, they can help developers avoid blind spots in well-intentioned technologies. This collaborative, multidisciplinary team provides a blueprint for incorporating technologies such as ChatGPT into policing and serves as a starting point for looking toward future technological leaps.

Public engagement and transparency are also crucial. Advisory boards or civilian oversight panels can facilitate early buy-in and transparency, helping the police stay informed about legislation that impacts AI and seek public input proactively. Preparing for technology failures is essential, as overreliance on AI can leave officers ill-prepared when technology is unavailable; data security and backup plans are key components of this preparation. The use of AI in decision-making may also become a focal point in legal proceedings, necessitating close collaboration with legal stakeholders to shape policies and procedures.

Conclusion

As Alan Kay of Xerox said, “The best way to predict the future is to invent it.”11 The profession of policing will certainly adjust and find tremendous value in an AI assistant that can learn, teach, access information, and help guide an officer toward making the most informed decisions possible. Officers will be more efficient with AI, just as doctors enhance their effectiveness through its use. As veterans retire and the police workforce shrinks, the institutional memory of a police organization that would otherwise be eroded by resignations and retirements can be preserved and passed on. This institutional memory often contains lessons learned through success and failure; it is a form of wisdom that improves the legitimacy of the profession. In 2021, Nobel Prize winner Daniel Kahneman, the world’s leading expert in human judgment who spent a lifetime documenting its shortcomings, said, “Clearly AI is going to win. How people are going to adjust is a fascinating problem.”12 Whether or not policing will be among the winners is yet to be determined. 🛡

Notes:

1Police Executive Research Forum (PERF), The Workforce Crisis, and What Police Agencies Are Doing About It (Washington, DC: PERF, 2019), 7.

2OpenAI, ChatGPT.

3“OpenAI’s ChatGPT Can Now Have Voice Conversation With Users,” Business Insider, November 2023.

4E. Topol, “Doctors, Get Ready for Your AI Assistants,” Wired, February 2, 2023.

5The White House, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” October 30, 2023.

6Deepa Shivaram, “AI Oversight: Biden Signs Executive Order On Artificial Intelligence,” Morning Edition, NPR, October 30, 2023.

7Paul A. Pavlou, “Internet of Things: Will Humans Be Replaced or Augmented?” NIM Marketing Intelligence Review 10, no. 2 (2018): 42–47.

8Shruti Dhapola, “‘Think of AI as an Assistant…Will Impact All Fields’: Alphabet’s Sundar Pichai,” The Indian Express, December 21, 2022.

9Dhapola, “‘Think of AI as an Assistant…Will Impact All Fields’: Alphabet’s Sundar Pichai.”

10Saul Jaeger (captain, Mountain View Police Department, California), interview with author, November 15, 2023.

11Chunka Mui, “7 Steps for Inventing the Future,” Forbes, April 4, 2017.

12Daniel Kahneman, “Clearly AI Is Going to Win. How People Are Going to Adjust Is a Fascinating Problem,” interview by Tim Adams, The Guardian, May 16, 2021.


Please cite as

Kenneth Kushner, “AI-Enhanced Decision-Making,” Police Chief Online, April 17, 2024.