
Home Affairs Committee Inquiry | Combatting New Forms of Extremism | OxDEL Written Evidence

The Oxford Disinformation & Extremism Lab's written evidence for the UK House of Commons Home Affairs Select Committee's inquiry on Combatting New Forms of Extremism has now been published on the parliamentary website. Our submission centres on the emerging threats from AI-enabled terrorism and extremism, from organised groups including ISKP in Central Asia to lone-wolf actors in Europe and North America. In later sections, we discuss the emerging opportunities for policymakers and industry to deploy AI at scale for interventions.


As threats from AI-enabled terrorism and extremism continue to expand and change, we are keen to hear from other individuals and organisations working on these challenges, in particular those examining positive interventions that leverage LLMs to counter threats from malicious actors.


Broderick McDonald at the UK Parliament's Inquiry on Combatting Extremism for the Oxford Disinformation and Extremism Lab (OxDEL)




Home Affairs Committee Inquiry

Combatting New Forms of Extremism

 

 

Terrorist and Extremist Exploitation of AI Systems: Emerging Threats and Technical Solutions

Broderick McDonald, Kye Allen

Oxford Disinformation & Extremism Lab (OxDEL)

 

This submission focuses on the emerging and accelerating misuse of artificial intelligence (AI) systems by terrorist and extremist actors. In the first section of this submission, existing misuse cases by terrorist and extremist actors are documented through incident reports (we are unable to submit image, video, and audio examples here but have included them in prior briefings for national security agencies). As AI tools become more powerful, accessible, and agentic, with the integration of long-term memory, autonomous tasking, and multi-modal outputs, the risks that these systems will be exploited by malicious actors are compounding. Terrorist and extremist actors have already exploited these tools across their operations. In particular, the advent of memory-augmented models (MAMs) and increasingly capable narrow AI tools has enabled new forms of radicalisation, attack planning, and propaganda (TVEC) generation. These developments represent a structural evolution in extremist threat patterns both in the UK and more broadly, requiring urgent attention from policymakers and national security practitioners.

 

This submission aims to outline the most urgent emerging threats in this space and propose grounded, actionable interventions suited to the current stage of the technological and threat landscape. The emerging threats discussed focus on capabilities that have been exploited by terrorist and extremist actors across the ideological landscape (ranging from far-right to Salafi-Jihadist to Hindutva). Terrorist and extremist actors have long adopted emerging dual-use technologies, including cassette tapes, mobile phones, early web forums, social media platforms, drones, cryptocurrencies, and 3D printing. While this misuse begins with TVE actors adopting these tools in experimental and superficial ways, it quickly deepens to provide these actors with significant uplift and capabilities. It is likely that terrorist and extremist adoption of AI tools will follow the trajectory of these previous dual-use technologies, particularly as AI systems see wide adoption by enterprise and the wider public.

 

 

 

Emerging Threats

 

1. Multi-Modal Extremist Content (TVEC) & Disinformation

 

AI systems are enabling the rapid production of multi-modal terrorist and violent extremist content (TVEC). AI-generated music and music videos have been used to glorify attacks and ideologies. GenAI image generators have similarly been used to obscure TVEC images so that existing hash-sharing databases and content moderation tools struggle to detect the content circulating on platforms. Real-time translations of audio and video messages from designated terrorist groups have been used to reach new multi-lingual audiences across the world, providing strategic depth and transnational connections to these groups. Similarly, extremists have used LLMs to write manifestos before attacks and to significantly improve the coherence and accessibility of documents which, in previous generations of extremists, were often dense, sloppy, and incoherent. Beyond this, extremists are experimenting with LLMs to produce code for extremist video games, as well as mods for existing commercial video games such as Roblox and Gorebox. These use-cases have helped terrorist and extremist actors increase reach, emotional salience, and virality while undermining platform moderation and detection.


This content is not only an online challenge; it has resulted in real-world harms and political violence. Deepfake video and audio of public figures are being used to provoke unrest and spread disinformation. For instance, in November 2023 far-right extremists circulated deepfake audio of London Mayor Sadiq Khan to spark public outrage, contributing to real-world violence and riots that resulted in 120 arrests and two police officers being hospitalised.

 

2. Attack Planning

 

Over the past year, terrorist and extremist misuse of AI has expanded from primarily generating illegal content glorifying and inspiring attacks (TVEC) to enabling direct real-world violence. Two attacks in the first half of 2025, in Las Vegas (January 2025) and Pirkkala, Finland (May 2025), demonstrated the use of LLMs to support attack planning. In both cases, perpetrators used chatbots over extended periods to source explosives, plan tactics, identify anatomical vulnerabilities, calculate blast radii, and structure manifestos. These cases illustrate how LLMs are lowering technical barriers to violence and serving as tactical accelerators.

 

3. Ideological Encouragement and Reinforcement

 

Emerging evidence shows that attackers are using AI not only for simple information retrieval but also as a para-social confidant and counsellor. The affective bond formed between users and chatbots, especially over long periods of interaction, can provide emotional reinforcement, ideological confirmation, and encouragement to act. This is especially dangerous in the context of memory-augmented models that recall past conversations and adjust responses over time. The so-called ‘sycophancy bias’ in current systems further compounds the risk, as models mirror and validate user inputs, even when those inputs involve harmful or extremist ideologies.

 

4. Radicalisation & Recruitment

 

Extremist groups are moving beyond individual experimentation to structured adoption. Groups such as ISKP and AQIS have begun integrating AI into recruitment chatbots, training materials, and propaganda toolchains. On far-right platforms such as Gab.AI, users are deploying chatbots designed to emulate ideological figures and using them to radicalise or reinforce shared narratives. AI is now a tool for ideological onboarding, psychological grooming, and technical training.

 

5. Risks from Memory-Augmented Models (MAMs)

 

Memory-augmented models (MAMs) represent an important step in improving the performance of AI systems for enterprise and consumers, but they carry significant risks when exploited by terrorist and extremist actors. By enabling persistent memory and user profiling across sessions, these systems risk:

Supporting long-term ideological grooming

Facilitating stepwise attack planning across sessions

Increasing exposure to jailbreaks and prompt injection

These systems support more personalised and sustained extremist pathways that are difficult to detect or disrupt.
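
To make this detection gap concrete, the following minimal Python sketch is purely illustrative, using hypothetical risk scores and thresholds rather than any real moderation pipeline. It shows how requests that each look tolerable within a single session can evade per-session checks, while profile-level aggregation across sessions (the same level at which persistent memory operates) would still surface the pattern.

```python
# Illustrative sketch only: hypothetical per-message risk scores and thresholds,
# not a real moderation system. It shows why evaluating each session in isolation
# can miss stepwise planning that persistent memory enables across sessions.
from dataclasses import dataclass

@dataclass
class Message:
    session_id: str
    text: str
    risk_score: float  # hypothetical classifier output in [0, 1]

PER_SESSION_THRESHOLD = 0.8   # assumed single-session escalation threshold
CUMULATIVE_THRESHOLD = 2.0    # assumed cross-session (profile-level) threshold

def session_flags(messages: list[Message]) -> set[str]:
    """Flag sessions whose peak risk exceeds the per-session threshold."""
    return {m.session_id for m in messages if m.risk_score >= PER_SESSION_THRESHOLD}

def profile_flag(messages: list[Message]) -> bool:
    """Flag the user profile when accumulated risk across all sessions is high,
    even though no single session crossed the per-session threshold."""
    return sum(m.risk_score for m in messages) >= CUMULATIVE_THRESHOLD

history = [
    Message("s1", "asks about chemistry basics", 0.3),
    Message("s2", "asks about storage of reactive materials", 0.5),
    Message("s3", "asks about crowd density at a venue", 0.6),
    Message("s4", "asks about timing and escape routes", 0.7),
]

print(session_flags(history))  # set()  -> each session looks tolerable in isolation
print(profile_flag(history))   # True   -> the cross-session pattern is concerning
```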

 

6. Chemical, Biological, Radiological, and Nuclear (CBRN) Threats

 

If deployed without robust safeguards, AI systems can significantly reduce the technical barriers to Chemical, Biological, Radiological, and Nuclear (CBRN) attack development. Historically, only highly resourced actors posed meaningful CBRN threats, such as Aum Shinrikyo (sarin gas, botulinum toxin), ISIL (mustard and chlorine gas), or Al-Qaeda with its nuclear and radiological dispersal device (RDD) ambitions. These efforts were often constrained by technical gaps, such as instability in toxin cultivation, dispersal mechanics, or sourcing challenges. Red-teaming with frontier LLMs has now shown that AI can directly overcome these obstacles. In multiple trials, models successfully advised on stable neurotoxin cultivation, identified aerosolisation parameters for dispersal, and planned detailed RDD attacks on urban targets, including sourcing radiological isotopes (e.g., Caesium-137 from hospitals and food processing labs) and proposing 3D-printed equipment for production and delivery.


In one red team case from January 2025, a model guided the design of an improvised nuclear fusor—demonstrating how AI lowers not just informational barriers but also sequencing and material challenges. While these tools are unlikely to develop novel compounds, they compress the knowledge burden, reduce the need for human collaboration, and accelerate lone-actor experimentation. What was once the exclusive domain of state actors and structured terror networks is now partially accessible to ideologically motivated individuals operating in isolation. Extremist manuals are beginning to reference AI-assisted methodologies for CBRN execution. Without intervention, this uplift function risks enabling high-consequence attacks by low-resource actors.


Systemic Gaps & Technical Solutions


Several structural gaps currently limit the effectiveness of AI misuse mitigation across the extremist threat landscape. First, red-teaming and evaluation practices often remain shallow or disconnected from operational realities. Pre-deployment red-teaming should be strengthened with domain expertise, multi-turn interactions, and representative threat actor profiles, and 'responsive safeguards' should be developed across the ecosystem to prevent multi-modal outputs glorifying recent attacks. Coordinated knowledge-sharing mechanisms, such as those pioneered by GIFCT and the Frontier Model Forum (FMF), could support these efforts and reduce logistical and cost challenges, particularly for SMEs and start-ups that lack in-house intelligence teams.

Second, content evasion and obfuscation continue to undermine detection efforts. Investments in perceptual hashing tools, illustrated in the sketch below, could reduce evasion of TVEC detection and help counteract scale-based attacks on moderation systems for illegal terrorist and violent extremist content. Relatedly, the absence of shared taxonomies and threat classification frameworks across labs, platforms, and governments hinders collective action; initiatives like the OxDEL Multi-Axis Taxonomy can help align prioritisation and terminology.

Third, the rapid proliferation of narrow AI start-ups and agentic toolkits presents a governance challenge. Many of these actors lack embedded safety resources or structured oversight. Mentorship and resource-sharing models, akin to TAT and GIFCT's partnerships, could help raise baseline safety standards in the developer community.
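
To illustrate the perceptual hashing point above, the sketch below uses a simple difference hash (dHash). This is a minimal, generic example rather than the scheme used by any particular hash-sharing database: the key property is that visually similar images yield hashes within a small Hamming distance, so lightly cropped, filtered, or otherwise obfuscated reposts of known TVEC imagery can still be matched against a reference set, unlike exact cryptographic hashes, which any pixel-level edit defeats.

```python
# Minimal difference-hash (dHash) sketch. Illustrative only: real hash-sharing
# databases use their own perceptual hashing schemes; this just shows why small
# visual edits do not fully change a perceptual hash.
from PIL import Image

def dhash(image_path: str, hash_size: int = 8) -> int:
    """Compute a simple difference hash: greyscale, downscale, compare neighbours."""
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming_distance(h1: int, h2: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

# Hypothetical file names, for illustration only.
known = dhash("known_tvec_image.png")      # image already in a reference set
candidate = dhash("reposted_variant.png")  # lightly cropped or filtered repost

# A small Hamming distance (the threshold is a tuning choice) suggests the
# candidate is a variant of known content despite the obfuscation.
if hamming_distance(known, candidate) <= 10:
    print("Likely variant of known content; route for human review.")
```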


Finally, tracking adversarial innovation requires better data infrastructure for monitoring uplift across models and use-cases. The TVE AI Misuse Database, now cataloguing over 1,200 incidents, provides a foundation for understanding uplift trends and red-teaming blind spots. Meanwhile, concerns around sycophancy, ideological reinforcement, and long-term interaction risks highlight the importance of enabling controlled access to AI usage data for qualified law enforcement and researchers, under strict privacy protections. These solutions must be implemented with humility and care, but inaction risks leaving critical exposure points unaddressed.
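
For illustration, an incident-tracking resource of this kind might capture fields along the following lines. This is a hypothetical schema sketched for explanatory purposes only; it is not the actual structure of the TVE AI Misuse Database or the OxDEL Multi-Axis Taxonomy.

```python
# Hypothetical incident record for tracking AI misuse by TVE actors.
# Field names and categories are illustrative assumptions, not the actual
# schema of the TVE AI Misuse Database.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class MisuseIncident:
    incident_id: str
    observed_on: date
    ideology: str            # e.g., "Salafi-Jihadist", "far-right", "Hindutva"
    actor_type: str          # e.g., "organised group", "lone actor"
    model_or_tool: str       # system exploited (LLM, image generator, MAM, etc.)
    modality: str            # "text", "image", "audio", "video", "code"
    misuse_category: str     # e.g., "TVEC generation", "attack planning", "CBRN uplift"
    uplift_assessment: str   # qualitative judgement of capability uplift provided
    sources: list[str] = field(default_factory=list)  # incident reports / briefings

# Entirely fictitious example entry:
example = MisuseIncident(
    incident_id="2025-0001",
    observed_on=date(2025, 1, 15),
    ideology="far-right",
    actor_type="lone actor",
    model_or_tool="consumer chatbot",
    modality="text",
    misuse_category="attack planning",
    uplift_assessment="moderate: tactical sequencing assistance",
)
```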

 

We are unable to cover all issues here due to space constraints but would be pleased to provide our presentation and briefings either virtually or in person.

 

Broderick McDonald, Oxford Disinformation & Extremism Lab

Kye Allen, Oxford Disinformation & Extremism Lab







UK House of Commons

Combatting New Forms of Extremism Inquiry



This inquiry will examine the drivers of extremism in the UK, with a focus on emerging trends of young people being drawn into extremism, violence and crime through online radicalisation. It will assess whether the Government’s approach is keeping pace with the evolving threat and evaluate the effectiveness of measures such as Prevent in combatting new forms of extremism.

 

The Government’s definition of extremism, updated in March 2024, describes it as “the promotion or advancement of an ideology based on violence, hatred or intolerance that aims to (1) negate or destroy the fundamental rights and freedoms of others; (2) undermine, overturn or replace the UK’s system of liberal parliamentary democracy and democratic rights; or intentionally create a permissive environment for others to achieve the results in (1) or (2).”

 

Extremism poses a significant threat to community safety and national security. While not all those who hold extremist beliefs commit violence, such beliefs can result in radicalisation, denial of rights and opportunities, suppression of freedom of expression, incitement of hatred, erosion of democratic institutions, and acts of terrorism. The inquiry will examine how different parts of government and different policies are addressing these complex and inter-related dangers.

 

Safeguarding and support

 

We understand that the issues raised in this inquiry may be sensitive or upsetting. In addition to contacting your GP, the following organisations may be able to offer support or further information:

 

Mind – Mental health charity providing a wide range of support, advice and information

 

Childline – Offers a free, confidential service for children and young people under 19 for help "with any issue they're going through".

 

NSPCC – The NSPCC helpline is staffed by trained professionals who can provide advice and support if you have concerns about a child.



