Policing in an AI-Driven World

When OpenAI released ChatGPT to the wider public in November 2022, the world was in shock. While the company’s latest large language model (LLM) was the result of years of research and significant resources invested in its development, the release of ChatGPT landed in society like a grenade. As it exploded with the fastest-growing user base in history, the widespread availability of a cutting-edge AI model made many people realize just how capable AI had become.

Enabled by soaring amounts of data and increases in computing power, as well as scientific progress in the field itself, the past year marked an unprecedented breakthrough in the sophistication and proliferation of AI models. The resulting rapid growth of the AI industry, coupled with the integration of more and more AI services into the private and public sectors, has fueled discussions about how AI can and should be used. Regulatory initiatives, such as the EU’s AI Act, are symptomatic of this emerging tension.1

Europol, the European law enforcement agency, has found itself at the center of this shift. Considering the drastic increases in the volume and velocity of data in criminal investigations, AI has become indispensable in supporting the work of investigators in the fight against serious organized crime and terrorism. At the same time, Europol voices the views of the European law enforcement agencies it serves, which need to demonstrate, more than ever, that technological innovations such as AI are used responsibly.

This seismic AI shift has given rise to a fundamental conflict: How can the police strike the right balance between making Europe safer with the help of AI and ensuring the responsible use of the capabilities offered by emerging technologies?

AI at Europol

Europol uses AI in high-profile investigations to extract and classify information from an increasingly large number of data sources. Analysts, supported by the data science team, use a set of AI models to classify images by automatically assigning tags to millions of pictures or to extract named entities from text, such as the names of people, locations, phone numbers, and bank accounts. Other AI models allow analysts to search for images of cocaine bricks with a specific logo or to detect useful information in pictures, like the number on the door of a shipping container or the name and date of birth on a picture of a badge.
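For illustration only, a minimal sketch of what such entity extraction can look like in code is shown below, using the open-source spaCy library for named entities and a simple pattern for bank account numbers. The model name, the example message, and the IBAN pattern are assumptions for the sketch, not Europol’s actual tooling.

```python
import re

import spacy

# Load a pretrained English pipeline with a statistical NER component.
# (Requires: python -m spacy download en_core_web_sm)
nlp = spacy.load("en_core_web_sm")

# A simple pattern for IBAN-style account numbers; a real system would use
# validated, jurisdiction-specific patterns.
IBAN_PATTERN = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")

def extract_entities(text: str) -> dict:
    """Extract people, locations, and bank-account-like strings from text."""
    doc = nlp(text)
    return {
        "persons": [ent.text for ent in doc.ents if ent.label_ == "PERSON"],
        "locations": [ent.text for ent in doc.ents if ent.label_ in ("GPE", "LOC")],
        "accounts": IBAN_PATTERN.findall(text),
    }

message = "Transfer from J. Smith in Rotterdam to NL91ABNA0417164300 confirmed."
print(extract_entities(message))
```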

Once this is done, the analysts can validate the AI-generated information and start looking for leads, such as pictures of cocaine and related text messages with container numbers and locations in port cities that are known for cocaine trafficking. Analysts can then narrow down their search, cross-check the information with other Europol databases, and begin to build a chart connecting different suspects and their activities.

These are decisions that require specific expert knowledge and, for this reason, will always be made by human analysts. Classifying millions of pictures and extracting container numbers, however, is tedious and time-consuming work that AI can take over, freeing up scarce human resources for tasks where they add more value. The key, then, is not only to have AI-ready data but also to facilitate their analysis by investigators with the help of AI models.

Innovation for Cutting-Edge Policing

While AI-driven applications already play a crucial role in Europol’s work, the technology is developing rapidly, as the recent breakthroughs in generative AI have forcefully demonstrated. To leverage the full potential of AI as it continues to advance, the policing profession needs to remain at the forefront of research and innovation.

Creating strong and effective networks is a core foundational principle in this regard. Through active collaboration with its Member States, Europol, under the umbrella of the European Clearing Board for Innovation, helps the European policing community pool resources and build on one another’s specialties. Together, experts work on the development of cutting-edge AI tools in areas such as computer vision, geolocation, and natural language processing. The results of these efforts are made available to all EU Member States via Europol’s Tool Repository, a collaborative online platform offering more than 30 investigative tools free of charge.

The widespread emergence of sophisticated LLMs offers a fitting illustration of the unique opportunities innovative practices can enable in the fast-paced field of AI. When several open-source LLMs were released for experimentation, Europol’s Innovation Lab reacted quickly, working with several European police agencies to launch a technical exploration of how this technology could be used for policing purposes. The Innovation Lab built and evaluated a Retrieval-Augmented Generation (RAG) prototype. RAG combines an LLM (like ChatGPT from OpenAI or Llama from Meta) with embedding models to take the most relevant extracts from a set of documents and build an answer that responds to a prompt. In essence, the user can ask questions, and the LLM acts as a co-pilot that finds the best answer in the provided documents.
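As a rough illustration of the retrieval step just described, the sketch below embeds a handful of documents, finds those most similar to a question, and assembles the grounded prompt an LLM would receive. The embedding model name and the example documents are assumptions, and the generation call itself is deliberately left out, since the prototype’s actual model and pipeline are not public.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Embedding model that maps documents and questions into the same vector
# space (the model choice here is illustrative).
embedder = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Container MSCU1234567 arrived in Antwerp on 12 March.",
    "Seized phone contains messages about a shipment of bricks.",
    "Invoice issued to a shell company registered in Rotterdam.",
]
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k document extracts most similar to the question."""
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q  # cosine similarity (vectors are normalized)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(question: str) -> str:
    """Assemble the grounded prompt that would be sent to the LLM."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using only the extracts below and cite them.\n"
        f"Extracts:\n{context}\n\nQuestion: {question}"
    )

# The generation step (e.g., a locally hosted Llama model) is out of scope;
# the printed prompt is what the LLM would receive.
print(build_prompt("Which port did the container arrive in?"))
```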

The “RAGathon” was organized into two streams. The first stream sought to improve the prototype’s performance and add features such as a better user interface, a hallucination checker, and improved case management. The second stream was the evaluation stream: Europol assessed the quality of the prototype using real, nonsensitive internal Europol documents; no personal data were used for the exercise. The goal of this stream was to test the limits of the prototype and assess its potential use for Europol’s operational and general support functions. The evaluators asked a question (by entering a “prompt”), and the answer appeared on the screen, along with the references that led to it. The experts then scored each answer and the identified references in a dedicated part of the interface. Every prompt and answer, along with the scores, was recorded and later analyzed.
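The record-keeping described for the evaluation stream could look something like the minimal sketch below, which logs each scored prompt/answer pair to a JSON Lines file for later analysis. The field names and scoring scale are assumptions, not the RAGathon’s actual tooling.

```python
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class EvaluationRecord:
    """One scored prompt/answer pair from the evaluation stream."""
    prompt: str
    answer: str
    references: list[str]   # document extracts shown with the answer
    answer_score: int       # expert rating, e.g., 1 (poor) to 5 (good)
    reference_score: int    # how relevant the cited extracts were
    timestamp: str = ""

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

def log_record(record: EvaluationRecord, path: str = "ragathon_eval.jsonl"):
    """Append the record to a JSON Lines file for later analysis."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_record(EvaluationRecord(
    prompt="Which port did the container arrive in?",
    answer="The container arrived in Antwerp.",
    references=["Container MSCU1234567 arrived in Antwerp on 12 March."],
    answer_score=5,
    reference_score=5,
))
```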

Use cases for LLM applications purpose-built to support police work will range from strategic and criminal analysis to operational support. For instance, with the help of co-pilot LLMs, like those examined in this experiment, it will be possible to analyze the structure and activities of an organized criminal group by processing a large number of documents to identify specific entities, connections, patterns, and trends. Another example is coupling speech-to-text tools with LLMs to transcribe and then analyze large numbers of video files. By using few-shot prompting to teach the LLM how to detect harmful or violent content, investigators can have the LLM classify the audio content of the video files and then prioritize those that need immediate processing by an analyst.
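Few-shot prompting here simply means placing a handful of labeled examples in the prompt so the model picks up the task in context. The sketch below builds such a prompt for triaging transcripts; the label set, the examples, and the triage step are illustrative assumptions, and the actual LLM call is left out.

```python
# Labeled examples that "teach" the model the classification task in context.
FEW_SHOT_EXAMPLES = [
    ("Let's meet for lunch tomorrow at noon.", "harmless"),
    ("Bring the package and make sure nobody follows you.", "suspicious"),
    ("I will hurt him if he talks to the police again.", "violent"),
]

def build_classification_prompt(transcript: str) -> str:
    """Build a few-shot prompt asking an LLM to label one transcript."""
    lines = ["Classify each transcript as harmless, suspicious, or violent."]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Transcript: {text}\nLabel: {label}")
    lines.append(f"Transcript: {transcript}\nLabel:")
    return "\n\n".join(lines)

# A real pipeline would first run a speech-to-text tool on each video's
# audio track, send this prompt to a locally hosted LLM, parse the label,
# and queue "suspicious"/"violent" items for review by an analyst.
print(build_classification_prompt("Drop it at the usual spot tonight."))
```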

Accountable Use of AI

As AI applications are continuously being integrated into modern policing, concerns regarding their ethical, legal, and societal implications have been raised.

Responding to these concerns, the EU Commission proposed the Artificial Intelligence Act (AIA) to balance AI innovation by the police with the protection of fundamental rights and societal values.2 The AIA emphasizes accountability and transparency as essential pillars in democratic societies. These principles ensure that power structures serve community interests and institutions operate ethically to maintain public trust.

The AP4AI (Accountability Principles for Artificial Intelligence) project, a joint endeavor of Europol and the Centre of Excellence in Terrorism, Resilience, Intelligence and Organized Crime Research (CENTRIC) and members of the EU Innovation Hub for Internal Security (CEPOL, Eurojust, EUAA, and EU FRA), stands as a testament to this effort.3 With its results now delivered, the project provides a comprehensive, empirically grounded framework geared toward ensuring AI accountability in these crucial sectors.

Beyond principles, AP4AI provides an innovative toolkit, the Compliance Checker for AI (CC4AI), tailored to assist EU internal security practitioners in meeting the requirements of the AI Act. This step-by-step guidance is designed to help users implementing AI for policing purposes evaluate whether existing or future applications, whether self-developed or procured, meet the criteria set by the new regulatory framework. Currently undergoing operational evaluation, the tool engages a diverse cohort, including police, research agencies, and EU-funded projects; this evaluation focuses on its suitability, scalability, and adaptability within the EU context.

Explainability and the Research and Innovation Sandbox

A crucial consideration in the accountable and effective use of AI models is the need to understand their limitations and uncertainty and to be able to explain their outputs. Turning AI models into explainable AI (XAI) lays the foundations for more transparent, robust, and trustworthy human-centric AI, which, in turn, leads to increased fairness and accountability.

Reaching these goals is paramount for Europol and the European policing community as a whole because, in the coming years, Europol will be using more and more AI-powered tools in criminal investigations and must use explainable tools so as not to undermine the criminal justice system. Explainability is also a legal requirement for Europol, as stated in the recently amended Europol Regulation and in the future AIA.

Today, XAI is an ongoing area of research, not a product that can be bought. The Europol Innovation Lab works on several initiatives and projects to promote XAI in the policing community.
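To make the idea concrete, one widely used model-agnostic XAI technique is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops, revealing which inputs the prediction actually relies on. The sketch below demonstrates it on synthetic data with scikit-learn; it is an illustrative example of the research area, not a tool used by the Innovation Lab.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data: 3 features, but only the first two drive the label.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy; large
# drops mark the inputs the model genuinely depends on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["feature_0", "feature_1", "feature_2"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```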

An increasingly relevant operational challenge in this domain is the processing of large and complex data sets with the help of AI models. The amended Europol Regulation of 2022 mandated Europol to conduct research and innovation projects, including the development, training, testing, and validation of algorithms, for the development of specific tools for use by police authorities.4

A first use case involves the training of an AI model on child sexual abuse material (CSAM). The fight against child sexual abuse and exploitation is among the most serious crime areas in Europol’s mandate, as the volume of CSAM distributed on online platforms continues to grow rapidly, with devastating consequences for its victims. The manual analysis and detection of CSAM by experts is not only a time-consuming but also an emotionally challenging task. An AI model capable of classifying relevant material in an automated and accurate manner can help make child abuse investigations more efficient and lead to the safeguarding of more children worldwide. Europol’s Research and Innovation Sandbox allows for the fine-tuning of this model on real operational data and is expected to bring significant improvements over what is currently available.
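In general terms, such fine-tuning follows a standard transfer-learning pattern: start from a model pretrained on generic images and retrain only its final layer for the new decision. The sketch below shows that generic pattern with torchvision on a hypothetical “relevant / not relevant” image folder; it is not Europol’s actual model, data, or Sandbox pipeline.

```python
import torch
from torch import nn
from torchvision import datasets, models

# Start from a model pretrained on generic images and adapt only its
# final layer to a binary "relevant / not relevant" decision.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
for param in model.parameters():
    param.requires_grad = False  # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)  # new trainable head

# Assumes an ImageFolder layout: data/train/relevant, data/train/other.
dataset = datasets.ImageFolder("data/train", transform=weights.transforms())
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass; real training runs many epochs
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```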

In this context, the upcoming Sandbox will be the flagship service offered by the Innovation Lab to enable police officers to embrace innovation and make the most of key emerging technologies. The possibility to train AI models on operational data, in a compliant framework, promises to be a game changer for European police organizations, particularly at a time when the volumes of investigative data continue to grow at a rapid pace.

The Research and Innovation Sandbox will not only facilitate the development of new AI models that address concrete operational use cases but also help detect biases and apply explainability methods. This capability will consequently improve the accuracy of AI models used by European police agencies and directly contribute to more accountability and transparency.

Criminal Misuse of AI

Progress in the area of AI undoubtedly offers significant benefits to the policing community. At the same time, criminals have proven to be early adopters of new technologies, and the increasing sophistication and availability of cutting-edge AI models have led to a growing issue of criminal abuse. The availability of high-end LLMs, image generators, and deepfake models means that even technologically unsavvy threat actors can now facilitate a vast range of crimes with the help of AI. This is a significant challenge for the police going forward.

The Observatory function of Europol’s Innovation Lab monitors and assesses in detail the impact that key emerging technologies have on policing, including their potential for criminal abuse. A major focus has been placed on the need to better understand the impact LLMs are going to have on the work of the police and what criminal activities these tools might enable. To test the potential of LLMs for criminal abuse, the Observatory gathered key operational stakeholders from across the crime areas in Europol’s mandate, ranging from cybercrime and financial crime to drugs and counterterrorism, to probe the system and identify potential vulnerabilities and abuse scenarios.

In the resulting assessment of the impact of these AI models on policing, the Innovation Lab concluded that publicly available tools can facilitate or even enhance the vast majority of criminal and terrorist acts, ranging from cybercrime and fraud to grooming, online anonymization, and the production of illicit substances. These findings are captured in a dedicated report that aims to raise awareness of the issues identified as part of this work and to make concrete recommendations to mitigate potential threats.5

Understanding and, subsequently, fighting the criminal abuse of AI, however, must remain a continuous effort to keep pace with rapid developments in the area. New trends, such as the availability of dark LLMs (customized chatbots tailored to facilitate criminal abuse), moved from a hypothetical future prediction in Europol’s assessment to an operational reality in a matter of months. At the same time, the threat goes beyond the abuse of LLMs: image generators have been found to facilitate the production of CSAM and propaganda, while deepfakes are being integrated into more and more threat actors’ modi operandi.

What will happen as this trend continues and more threat actors gain access to even better models?

As the technology fueling these models continues to improve at breakneck pace, the dangers stemming from their abuse are set to increase as well. Going forward, it will become increasingly critical for the police to closely monitor these developments and to proactively engage with key stakeholders to prevent the abuse of these technologies. AI can bring tremendous benefits to society, but it also has the potential to give rise to unprecedented threats. It is the role of the police to lead the charge to ensure these dangers do not tip the scales.

Conclusion and Outlook

Since the beginning of the global AI hype in November 2022, the field has continued to progress at an extremely fast pace. Competing market forces push technological development in a race for AI supremacy, resulting in new models with ever-increasing capabilities being released more and more frequently. At the same time, calls for regulation and the responsible use of technology are becoming louder and reflect the major societal shift driven by AI.

If the policing community is to stay relevant in an AI-driven world, it is crucial to grapple with these emerging issues proactively. That means embracing the benefits AI can offer to make policing more efficient and effective. At the same time, adopting innovative technologies must go hand in hand with addressing questions of how the police can demonstrate a responsible use of AI that is based upon key accountability principles aimed at safeguarding fundamental rights.

As the European law enforcement agency, Europol embraces being at the forefront of many of these challenges. This includes helping Member States make the most of technological progress by working closely together to develop concrete solutions that address real operational needs. But it also means closely monitoring potential threats, as well as seriously considering concerns regarding how the police use AI. If policing authorities are to stay relevant, they must understand that accountability is not a box to tick but a key success factor that can help European police agencies become better at fulfilling their mission of making Europe safer.

Notes:

1. European Parliament, “EU AI Act: First Regulation on Artificial Intelligence,” updated December 19, 2023.

2. European Parliament, “EU AI Act.”

3. Centre of Excellence in Terrorism, Resilience, Intelligence and Organised Crime Research (CENTRIC), “About.”

4. Europol, “Europol’s Amended Regulation Enters into Force,” news release, June 28, 2022.

5. Europol, ChatGPT: The Impact of Large Language Models on Law Enforcement, Tech Watch Flash Report from the Europol Innovation Lab (Luxembourg: Publications Office of the European Union, 2023).


Please cite as

Europol, “Policing in an AI-Driven World,” Police Chief Online, April 24, 2024.