There are two related, yet distinct, terrorism trends growing in parallel: the misuse of artificial intelligence (AI) and the spread of 3D printed weapons. First, the ubiquity of generative AI has led to its adoption by malign actors. In 2023, the Islamic State (ISIS), known for its adaptability and use of technology, published a guide for its members “on how to securely use generative AI.”1 There is growing concern about lone actors using AI chatbots to obtain instructions for building a bomb or planning an attack, and terrorist groups have already adopted generative AI to create fake news anchors and rapidly disseminate propaganda.2 The second emerging trend is the development and use of 3D printed firearms, also referred to as “ghost guns.” These weapons are unregistered, untraceable, and can be sold and assembled without oversight. Their appeal to terrorists and other bad actors includes the ability to evade existing federal restrictions and the availability of designs ranging from pistols to AR-15s.3 In 2025, for example, an Oklahoma man was charged with attempting to provide material support to al Qaeda by trying to send 3D printed guns to the organization. Court documents reveal that the perpetrator attempted to ship more than 100 3D printed parts for machine guns, handguns, and drones to be used in terrorist attacks.4
Both trends represent common, low-cost options that require minimal expertise and resources for nefarious actors to adopt. Although distinct in nature, the opportunity for terrorists to leverage the novel capabilities of emerging technologies calls for new prevention tactics. One such tactic is to “meet them where they are” by leveraging technology to respond to emerging problems quickly and effectively.5 Consistent with this, the 2025 U.S. AI Action Plan, Winning the Race, calls for accelerated adoption of AI in government and defense. The recommended policy action calls for identifying the knowledge, skills, and abilities that enable the workforce to effectively employ AI-enabled capabilities.6 More directly, prioritizing, collecting, and distributing intelligence on terrorist use of AI can greatly improve security outcomes. As such, several ways for police agencies to integrate AI into their current workflows, along with several critical organizational changes, are outlined below.
Trend One: Artificial Intelligence
What Is AI?
AI broadly refers to a technology that can infer patterns from external data sources, make predictions, and draw conclusions, often defined in reference to human intelligence.
Why This Matters Now
Terrorist actors are using off-the-shelf AI tools to produce and distribute propaganda faster, in more languages, and with higher operational value. Their goals are not new, but the speed, cost, and reach are.7 To date, ISIS has primarily used AI as a force multiplier in propaganda and recruitment: generating fake personas, multilingual content targeted toward the West, and novel media to radicalize supporters after attacks. The emerging risks include the expanded use of AI-aided drones, autonomous target recognition, and lower skill thresholds for terrorist operations. Although there is no evidence yet of ISIS using AI-guided drones, the technology is available. Given the group’s past use of drones, AI algorithms could enable autonomous navigation, target recognition, and real-time decision-making.
A notable example of this threat is ISIS’s AI-generated news anchors, which present content in English and other languages to recruit followers in the West. One clip, for example, shows a digitally fabricated presenter calmly praising the concert hall bombing in Moscow, Russia, which claimed 145 lives.8 Moreover, AI chatbots can operate continuously on multiple platforms, analyze behavioral triggers, and adapt responses based on an individual’s ideological inclinations or vulnerabilities. These models can be trained to analyze how users interact with content, allowing extremists to target potential recruits more precisely. Insights derived from social media behavior can then be used to create customized radicalization methods.
Operational Impacts for Policing
- Shorter warning and detection time. Claims and counter-narratives can appear within hours of an incident, often in local languages and on closed platforms.
- Higher volume and complexity. Extremist messages can be communicated in more languages and formats (e.g., text, video, voice clones), while containing a higher volume of falsified information.
Trend Two: Additive Manufacturing
What Is Additive Manufacturing?
Additive manufacturing refers to the process of building products via 3D printing, most commonly using computer-aided design (CAD) software. Although developing CAD files requires technical expertise, far less expertise is needed to print from them. This kind of manufacturing allows terrorists to bypass traditional supply chains, concealing weapon development and evading traditional registration.
Why This Matters Now
With respect to terrorism specifically, the most imminent threat posed by 3D printed firearms, or ghost guns, is the supply of these weapons to extremist and terrorist groups. As described previously, in September 2025, a man in Tulsa, Oklahoma, was charged with attempting to distribute 3D printed firearms, drones, and switches.9 Those making or purchasing 3D printed firearms typically do so to circumvent gun laws that would prohibit ownership due to age, criminal history, or mental illness, or to take advantage of the weapons’ anonymity and low cost. In a notable example, an 18-year-old, in a span of only six days, purchased an online gun-building template from an organization that did not require age verification or a background check, produced a gun, and died by suicide.10
Operational Impacts for Law Enforcement
- Standard tracking procedures fail. The 3D printed firearms do not follow traditional supply chains and tracking procedures. As a result, criminal records that would prohibit the purchase of a firearm no longer act as a barrier to sourcing a weapon.
- Evidence is digital, rather than physical. Similarly, the marketplace for sourcing, payment, delivery, and linkages between CAD and printing are digital transactions that result in physical harm.
Functional Job Analysis of Police and Law Enforcement Intelligence
Scope and Method
A functional job analysis drew on Department of Labor O*NET profiles for three occupations: Police and Sheriff’s Patrol Officers, Detectives and Criminal Investigators, and Police Identification and Records Officers. Job tasks, knowledge, skills, and abilities were mapped to identify how agencies can use AI to strengthen against emerging terrorist threats. Six core functions of law enforcement were identified: (1) patrol and incident response, (2) investigation and evidence handling, (3) case documentation and records management, (4) court and legal proceedings, (5) collaboration and external coordination, and (6) technology and specialized tool use.
How Emerging Threats Change Work
Terrorist use of AI and consumer-grade 3D guns compresses the time between intention and implementation of nefarious ideas. The speed of generation increases the volume of open-source material to review, shortens warning timeframes, and raises the odds that the police may encounter synthetic media or firearms. Investigators must also account for the digital nature of the evidence, as posts can be deleted or altered.
Moreover, additive manufacturing shifts key evidence from solely the physical world to digital. Instead of tracking physical evidence, serial numbers, or paperwork, tracking 3D guns involves investigating CAD files and online transactions. As a result, more cases are dependent on digital footprints, such as files, payment flows, shipping, and forum activities. These dynamics expand the skillset required for police investigators that is outside traditional workflows. For example, first responders may triage suspected deepfakes or digital content.
Using AI to Counter These Changes
The increased reliance on emerging technologies by terrorist organizations amplifies the need for police agencies to use those technologies to a similar extent. AI tools can be integrated as an “augmented layer” across everyday job tasks. These tools can serve as analytical partners by filtering information, recognizing patterns, and automating evidence collection. In patrol and response scenarios, AI systems can summarize open-source data, translate languages, and flag suspected synthetic media prior to deploying resources. During investigations, automated AI tools can preserve and verify evidence, connect online propaganda to real-world incidents, and map supply chains for unregistered ghost guns. In all, the use of an AI system can help narrow and streamline data collection while preserving data integrity.
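To make the “augmented layer” idea concrete, the short Python sketch below shows how a simple triage step might score open-source posts against analyst-maintained indicator terms and queue the highest-scoring items for human review. The indicator terms, weights, and scoring rule are purely hypothetical illustrations, not a deployed or recommended system; a real tool would use far more sophisticated models and would still route every flag to a human analyst.

```python
# Minimal sketch of an AI-assisted triage layer for open-source posts.
# All names, terms, and weights are hypothetical illustrations; flagged
# items are queued for human review, never acted on automatically.
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    text: str
    score: float = 0.0
    reasons: list = field(default_factory=list)

# Hypothetical indicator terms an agency's analysts might maintain.
INDICATORS = {"cad file": 2.0, "auto sear": 3.0, "untraceable": 1.5}

def triage(posts):
    """Score posts against indicator terms and return those needing
    human review, highest score first."""
    flagged = []
    for p in posts:
        lowered = p.text.lower()
        for term, weight in INDICATORS.items():
            if term in lowered:
                p.score += weight
                p.reasons.append(term)
        if p.score > 0:
            flagged.append(p)
    return sorted(flagged, key=lambda p: p.score, reverse=True)

posts = [
    Post("a1", "Selling an untraceable frame, CAD file included"),
    Post("a2", "Weather is nice today"),
]
for p in triage(posts):
    print(p.post_id, p.score, p.reasons)
```

The key design point is that the output is a prioritized review queue, not an automated decision: the score only orders analyst attention, consistent with keeping humans in the loop.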
In documentation, AI can generate detailed reports outlining the data sources and models used, validating each step to build transparency. For cross-agency collaboration, AI-driven information sharing can help track emerging threats and alert investigators to similar cases. Such a system can proactively recommend related cases or relevant expertise across units, breaking down information silos. AI can also route information to the appropriate partners or agencies and highlight what is actionable for them, helping ensure that data and intelligence reach the relevant stakeholders.
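As one hedged illustration of how such a similar-case recommendation step could work, the Python sketch below ranks archived case summaries by bag-of-words cosine similarity to a new case. The case identifiers, summaries, and similarity method are assumptions made for illustration only; an operational system would rely on richer case data, vetted models, and governed data sharing.

```python
# Sketch of a similar-case recommender for cross-agency information
# sharing. Summaries and the bag-of-words cosine method are illustrative
# assumptions, not a production matching system.
import math
from collections import Counter

def vectorize(text):
    """Turn a case summary into a simple word-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def similar_cases(new_case, archive, top_n=3):
    """Rank archived case summaries by similarity to a new case so
    investigators can spot related work in other jurisdictions."""
    v = vectorize(new_case)
    ranked = sorted(archive.items(),
                    key=lambda kv: cosine(v, vectorize(kv[1])),
                    reverse=True)
    return [case_id for case_id, _ in ranked[:top_n]]

archive = {
    "case-101": "3d printed pistol frame recovered at traffic stop",
    "case-102": "fraudulent check scheme at local bank",
    "case-103": "online sale of 3d printed auto sear components",
}
print(similar_cases("suspect shared 3d printed frame files online",
                    archive, top_n=2))
```

A recommendation like this would surface candidate matches for an investigator to evaluate; it would not assert a connection between cases on its own.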
When integrating AI into police workflows, maintaining human judgment for decision-making is key. AI might propose customized action plans but could overlook factors such as legal or ethical considerations. This means that the AI output could help in the application of evidence or findings, but it should be monitored and reviewed. As the threat landscape evolves, the police must follow suit. With the integration of AI, the envisioned future is one where AI tools are treated as a trusted teammate. As such, this also presents opportunities to enhance the work at the individual, team, and organizational levels.
Recommendations for AI Integration
Drawing on the six core police functions identified earlier that may benefit from AI, in tandem with the emerging threats outlined, the authors have developed organization-based recommendations to align core job tasks with the integration of AI for threat prevention. When integrated carefully, AI can become a force multiplier by accelerating analysis, improving evidence handling, and allowing human experts to focus on context, judgment, and protection.
1. Organizational Culture and Climate
Fostering trust and safety will mitigate the risk of employees viewing AI as a threat to their roles and resisting adoption. Instead, employees should view AI as an opportunity and treat AI outputs as input from a valued team member. Building this trust requires creating a climate of transparency, user involvement, and reassurance that humans will remain part of decision-making processes. To ensure this is done properly, police agencies must first establish ethical guidelines and guardrails for AI and require explanations of AI use whenever possible. Organizations can create guidelines and standard operating procedures that govern the entire AI life cycle, including development and testing, deployment, and ongoing monitoring of systems.
2. Selection and Training
The current job descriptions for police roles do not encompass the expertise needed to complement AI use and interpret its results. Because AI can contribute to deeper analysis, emphasizing data literacy as a requirement can help employees understand where the data came from, what processing should be applied to the data, and how the AI tool generated its outputs. In addition, creating roles for prompt engineers or AI liaisons, or simply training current personnel in those skills, could aid in properly crafting queries. After redefining job roles and requirements, specific training efforts are needed to support the existing workforce. Training should focus on digital fluency, as well as skills in prompt engineering, interpretation, and other ways to enhance work involving AI tools. Creating dedicated training programs that cover the technical and cognitive aspects of AI usability will demystify the tools and help policing professionals understand how to properly integrate them into their workflows.
3. Managing Organizational Change
None of these recommendations can be successfully implemented without police agencies allocating appropriate resources for data engineers and AI model management. A strong organizational change plan should emphasize that AI is needed to improve threat detection and prevention and that AI tools should be considered co-creators. Planned organizational change must also be congruent and consistent across different functions so that new behaviors and skills can be acquired. Planning can also reduce ambiguity around the integration of AI, thus motivating and directing employees through the change. The three activities involved in planning are (1) communication, (2) mobilization, and (3) evaluation.
Concluding Comments
The use of AI and 3D printed firearms lowers the cost and skill required for terrorists and other malign actors, while increasing the speed and volume of threats that investigators must manage. Agencies that aim to implement AI in their existing workflows to detect, prevent, and protect against these emerging trends must consider holistic organizational changes for seamless integration.
Notes:
1Fabrizio Minniti, “Automated Recruitment: Artificial Intelligence, ISKP, and Extremist Radicalisation,” GNET Insights, April 11, 2025.
2Tom O’Connor, “Generating Jihad: How ISIS Could Use AI to Plan Its Next Attack,” Newsweek, updated September 22, 2025.
3Homeland Security Assessment of Terrorists’ Use of Ghost Guns Act, H.R. Rep. 116-88, 116th Cong., 2019.
4U.S. Department of Justice (DOJ), “Man Arrested and Charged with Attempting to Provide Al-Qaida with Weapons,” press release, September 24, 2025.
5Alexis L. d’Amato and Samuel T. Hunter, “Creativity Training Needs Assessment for Homeland Security Enterprise: A Case for Creative Thinking,” Journal of Policing, Intelligence and Counter Terrorism 19, no. 1 (2024): 61–82.
6Executive Office of the President of the United States, Winning the Race: America’s AI Action Plan (The White House, 2025).
7Alexis L. d’Amato and Samuel T. Hunter, “Malevolent Innovation in the Workplace,” Organizational Psychology Review 15, no. 4 (2025): 420–452.
8Namo Abdulla, “UN Officials Warn ISIS Using AI to Rebuild,” Rudaw, August 21, 2025.
9DOJ, “Man Arrested and Charged with Attempting to Provide Al-Qaida with Weapons.”
10Everytown Law, “Family Sues Online Firearm Dealer Husky Armory for Selling Ghost-Gun Kit to Underage Teenager, Resulting in His Death by Handgun Suicide Days Later,” press release, July 28, 2025.
Please cite as
Alexis L. d’Amato et al., “Confronting Terrorist Innovation: Organizational Psychology Approaches to Strengthening Law Enforcement,” Police Chief Online, March 04, 2026.


