
The Use of AI in Disinformation & Extremism: Separating Fact from Fiction

Towards A New Taxonomy for the Use of Artificial Intelligence in Disinformation and Extremism



AI, Disinformation, Extremism

Broderick McDonald

Oxford Disinformation & Extremism Lab

 

Artificial intelligence (AI) has the potential to reshape disinformation and extremism in unpredictable and significant ways. However, there are also risks of overestimating the impact of these new technologies before they have fully developed. As with most technological leaps, we do not yet know fully how AI tools will be used or how they will impact society, elections, and public safety. Despite this, there is a steady drumbeat of journalists, researchers, and analysts predicting AI will destroy all that we trust. On the other side of the spectrum, the technology's apologists claim that any difficulties presented by AI should be brushed away as merely minor inconveniences on the inevitable path to technological progress. Both of these camps are dangerous. While one underestimates the threat and buries its head in the sand, the other sees the technology behind everything that can go wrong. Rather than siding with either camp, this article takes a more cautious approach, grounded in the use cases observed today and where they may be heading.


Keeping this in mind, there are clear cases where Generative AI has already been used to produce, augment, and share online harms. These examples are important for understanding both the present landscape and where it is moving. However, to understand systematically how AI is being used in disinformation and extremism, a basic taxonomy is needed. To keep this taxonomy as broad as possible, we can conceive of the use of AI in disinformation and extremism as falling into four modalities: a) automated production, b) automated distribution, c) improved engagement, and d) reduced identification. Each of these modalities is discussed below with examples to illustrate how artificial intelligence and machine learning tools are changing the landscape of disinformation and extremism, but we should keep in mind that these developments remain at the experimentation phase. Extremists and rogue state actors are rarely on the cutting edge of technological adoption or innovation. Their use of new tools is opportunistic, exploiting the window between a technology's release and the regulation or technical mitigations that erode its initially large benefits. In this sense, the development of GenAI and Web 3.0 parallels the rapid proliferation of social media over the past two decades.
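To make this taxonomy concrete for researchers and trust and safety teams, the four modalities can be expressed as a simple labelling scheme. The Python sketch below is purely illustrative: the category names follow the taxonomy above, while the report structure and the example content are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum


class AbuseModality(Enum):
    """The four modalities in the proposed taxonomy."""
    AUTOMATED_PRODUCTION = "automated production"
    AUTOMATED_DISTRIBUTION = "automated distribution"
    IMPROVED_ENGAGEMENT = "improved engagement"
    REDUCED_IDENTIFICATION = "reduced identification"


@dataclass
class ContentReport:
    """A single piece of flagged content with its taxonomy labels."""
    content_id: str
    description: str
    modalities: set[AbuseModality] = field(default_factory=set)


# Hypothetical example: a batch of near-duplicate AI-generated images spans
# both mass production and evasion of hash-based detection.
report = ContentReport(
    content_id="example-001",
    description="Batch of near-duplicate propaganda images",
    modalities={
        AbuseModality.AUTOMATED_PRODUCTION,
        AbuseModality.REDUCED_IDENTIFICATION,
    },
)
print([m.value for m in report.modalities])
```

Encoding the taxonomy this way also makes it easy to record that a single campaign often spans several modalities at once.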


Indeed, artificial intelligence tools have the potential to exceed the impact that social media had on public life over a decade ago. As AI tools are rolled out to consumers, it is important that policymakers and industry do not repeat the mistakes made during the introduction of social media. A lack of technical safeguards, underinvestment in trust and safety teams, and slow regulation enabled the widespread exploitation of those platforms by malicious actors. These challenges were seen most acutely in the early adoption of social media by extremist groups such as the Islamic State of Iraq and al-Sham (ISIS), which used these tools to recruit, spread propaganda, and coordinate attacks before industry and policymakers cracked down on their use. As with this previous wave of technological change, the rollout of AI tools will be marked by new threats, challenges, and regulatory lag. While in the long term many of these threats are likely to be addressed, the short to medium term will be fraught with significant risk. Given the potential for GenAI to produce more potent and dangerous threats than social media did, it is imperative that policymakers and industry do not repeat past mistakes.


While the use of Generative Artificial Intelligence by extremists and rogue states is largely experimental for now, adoption of these tools is growing at the same time as their capabilities are becoming more sophisticated. Examples of these threats are discussed below to illustrate their dangers, but the focus is on developing proactive safeguards that avoid the mistakes of the past and keep these tools from being exploited by malicious actors.

 


 

Automated Production: Disinformation and Extremism at an Industrial Scale

 

While AI has been part of our technological repertoire for nearly a decade, the accessibility of GenAI tools has ushered in a new era of production capabilities. Individuals and small groups now possess the means to produce propaganda on an industrial scale, fundamentally altering the dynamics of extremism. This shift is observable across the political spectrum, with GenAI tools being utilized by a diverse array of extremists, from Far-Right groups to Salafi-Jihadist extremists.


Traditionally, the creation and dissemination of extremist ideologies relied on slow, labour-intensive effort undertaken by small teams or media offices, limiting their scale and reach. However, the integration of GenAI tools into the production of extremist propaganda has significantly altered this landscape. Using existing consumer products, one individual, or a 'lone wolf' actor, can produce hundreds of pieces of content in the time it would previously have taken to produce a single item. Moreover, little technical training or resources are necessary, with freemium models, free trials, and open-source programs becoming the norm. As a result, GenAI has upended the traditional paradigm by automating the production process and exponentially expanding the scale and reach of disinformation and extremist content. LLMs and image diffusion models can now generate text, images, and even videos at a scale and quality that can exceed human-produced content. More content than ever can be produced at a lower cost than ever before. This presents unique challenges for policymakers and industry. The use of AI not only increases the volume of content produced; it can also enhance its quality, as discussed below.

 


The Gamification of Extremism


Discussions of online harms rarely pay adequate attention to video games and the central role they can play in spreading extremism, disinformation, and hate speech. Extremists have seized upon the immersive and interactive nature of video games as a potent platform for disseminating their ideologies, and the use of AI to create custom games quickly will expand this trend. Beyond merely altering existing games, extremists are now creating their own, injecting propaganda and extremist narratives directly into the virtual landscape. This shift allows for a more dynamic and engaging form of recruitment, particularly among younger audiences. On gaming-adjacent platforms like Discord and Twitch, LLMs can be used to spread extremism and disinformation at scale, as well as to share training on how to use these tools. Indeed, Discord is already the de facto home of Midjourney, where users of all backgrounds post their creations, share feedback, and offer advice on prompt engineering. Much of the Midjourney community on Discord has nothing to do with either extremism or disinformation, but the rapid adoption of platforms like this demonstrates how they can be used to train and disseminate knowledge of how to use these tools.

 

AI-Produced Manifestos


The manifestos of extremists and mass shooters have historically been characterized by poor organization, sloppy writing, and copy-pasting. However, with the advent of large language models and generative artificial intelligence, it is possible for these manifestos to become significantly more polished, accessible, and persuasive. Extremists and mass shooters can now use consumer GenAI to produce manifestos that are not only coherent but also strategically organized, conveying their ideologies more persuasively than in the past. This evolution poses a significant challenge for authorities and platforms tasked with limiting the dissemination and influence of such manifestos. At this stage, AI tools already have powerful applications for producing high-quality writing at scale. To demonstrate this, much of this article was drafted with the assistance of an LLM. While AI-produced language still requires human editing, changes, and additions, it can be a powerful tool for catalysing writing; applied to online harms, it raises the risk that high-quality extremist texts and disinformation become more widespread.

 

Image Manipulation at Scale


Extremists have begun exploiting GenAI's ability to produce endless images with minor changes in order to evade content moderation tools that rely on hash-sharing. These small alterations create variations that slip through detection, highlighting the need for innovative countermeasures. The challenge is exacerbated by the adaptability of AI-driven adversaries, who quickly evolve to circumvent existing safeguards.
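To illustrate why hash-based matching struggles with near-duplicates, the toy sketch below (Python with Pillow) computes a naive 8x8 average hash and shows that a single-pixel edit already changes the hash. Production systems rely on far more robust perceptual hashes and classifier ensembles, and this example is purely illustrative of the underlying brittleness rather than a depiction of any platform's pipeline.

```python
from PIL import Image


def average_hash(img: Image.Image, size: int = 8) -> int:
    """Naive perceptual hash: greyscale, downscale, threshold at the mean."""
    pixels = list(img.convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


# Toy image: an 8x8 greyscale checkerboard of light (200) and dark (100) pixels.
original = Image.new("L", (8, 8))
original.putdata([200 if (x + y) % 2 == 0 else 100
                  for y in range(8) for x in range(8)])

# A single-pixel edit, imperceptible at real resolutions, flips one hash bit.
variant = original.copy()
variant.putpixel((0, 0), 100)

print(hamming_distance(average_hash(original), average_hash(variant)))  # prints 1
```

The same arms-race dynamic applies to more sophisticated matching: each improvement in detection invites a corresponding adjustment in the perturbations used to evade it.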

 

The Dividends of Deception: Erosion of Public Trust

 

The Liar's Dividend: Exploiting the Prevalence of Deepfakes

The prevalence of deepfakes introduces the "liar's dividend," a phenomenon where fringe politicians or ideologues may exploit the technology. When confronted with shocking statements attributed to them, individuals may falsely claim that these statements are deepfakes, adding an additional layer of complexity to the verification of audiovisual content. Political ideologues or politicians abusing the liar's dividend will simply claim that an embarrassing or inaccurate quote or clip of them was AI-generated nonsense. While there will certainly be cases where AI is used to frame public officials, put words in their mouths, or create convincing deepfakes of them doing embarrassing things, we should not ignore the risk that some will use these examples as a scapegoat. We should presume innocence until proven guilty in all cases, but the advancement and widespread adoption of provenance standards such as C2PA can help us track the origins of a photo or video from the point of creation.
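To see why provenance helps, consider a stripped-down sketch of the underlying idea: a signature generated at the point of creation lets any later edit be detected. This is not the C2PA specification itself, which defines richer manifests, certificate chains, and edit histories; the manifest fields and keys below are hypothetical, and the example uses the Python cryptography library simply to show the mechanism.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At creation time, the capture device (or generator) signs the asset together
# with a small provenance manifest. The manifest fields here are hypothetical.
device_key = Ed25519PrivateKey.generate()
asset_bytes = b"...image bytes..."
manifest = json.dumps({"creator": "camera-1234", "created": "2023-11-01"}).encode()
signature = device_key.sign(asset_bytes + manifest)

# Later, anyone holding the device's public key can check whether the asset
# and its claimed provenance are still exactly what was signed.
public_key = device_key.public_key()


def verify(asset: bytes, manifest_bytes: bytes, sig: bytes) -> bool:
    """Return True only if the asset and manifest match the original signature."""
    try:
        public_key.verify(sig, asset + manifest_bytes)
        return True
    except InvalidSignature:
        return False


print(verify(asset_bytes, manifest, signature))            # True: untouched
print(verify(b"...edited bytes...", manifest, signature))  # False: tampered
```

A claim that a genuine clip is "AI-generated nonsense" becomes much harder to sustain when the clip carries a verifiable record of where and when it was created.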

 

The Spoiler's Dividend: Eroding Trust in Objective Truth


One of the major threats of mass AI disinformation is the spoiler's dividend, wherein the public's trust in objective truth is eroded. As AI-generated content blurs the lines between real and fake, individuals may become increasingly skeptical of information, fostering an environment of uncertainty and distrust. The most significant risk associated with the spoiler's dividend is that public trust in institutions, knowledge, and experts declines even further. In an era where it is already difficult to separate fact from fiction, further erosion of public trust could destabilize our elections, health care systems, and democratic institutions.


AI-Enhanced Production and Distribution of Disinformation

Plenty of examples demonstrate how bots and algorithms can manipulate social media platforms, creating fake accounts, amplifying divisive content, and fostering echo chambers that reinforce extremist ideologies. These AI-empowered campaigns are not only difficult to detect but also adaptive, evolving in response to countermeasures. This adaptability prolongs their impact and effectiveness, making it an ongoing challenge for platforms and authorities to stay ahead of the evolving tactics employed by purveyors of disinformation. The use of AI in disinformation campaigns also extends to the creation of realistic chatbots and virtual personas. These entities can engage with users on various platforms, disseminating propaganda and reinforcing extremist views. The ability of AI to simulate human interaction makes it challenging for users to distinguish between genuine communication and AI-generated content, further blurring the lines between reality and manipulation.
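Detection of such campaigns typically combines many behavioural and content signals. As a purely illustrative example of one such signal, the sketch below flags accounts whose posting rhythm is suspiciously regular; the threshold and timestamps are hypothetical, and adaptive adversaries can of course randomize their timing, which is exactly why no single heuristic suffices.

```python
from statistics import mean, stdev


def interval_regularity(post_times: list[float]) -> float:
    """Coefficient of variation of the gaps between posts (lower = more regular)."""
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    return stdev(gaps) / mean(gaps)


def looks_automated(post_times: list[float], threshold: float = 0.1) -> bool:
    """Flag accounts with enough posts and a suspiciously machine-like rhythm."""
    return len(post_times) >= 10 and interval_regularity(post_times) < threshold


# Hypothetical timestamps (seconds): one account posts every ~300s on the dot,
# the other posts at irregular, human-looking intervals.
scripted = [i * 300.0 for i in range(20)]
organic = [0, 410, 560, 1900, 2300, 2450, 5100, 5200, 7800, 8100, 9000, 12000]
print(looks_automated(scripted), looks_automated(organic))  # True False
```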

Micro-Targeted Recruitment and Radicalization

Beyond content creation and dissemination, AI plays a crucial role in identifying and targeting potential recruits or sympathizers. Machine learning algorithms can analyze vast amounts of data to profile individuals susceptible to extremist ideologies. This targeted approach tailors the content to resonate with specific demographics, increasing the likelihood of recruitment and radicalization.

The personalized nature of AI-driven recruitment strategies makes them highly effective. Extremist groups can exploit individuals' vulnerabilities, feeding them tailored content that aligns with their pre-existing beliefs or grievances. This individualized approach not only accelerates the radicalization process but also makes it challenging for traditional counterterrorism efforts to identify and intervene in a timely manner.

 


Safeguards and Their Limitations


Safeguards are important, but they do not offer foolproof protection against the evolving tactics of extremists and other malicious actors. Users of LLMs such as ChatGPT or Claude can quickly adapt by altering prompts or changing details, rendering many protective measures obsolete. For instance, a Salafi-Jihadist extremist seeking to produce imagery or video clips depicting a battle can easily change their prompts to request historical photos, conveying the same meaning while evading blocks in most LLMs and image diffusion models by framing the content as historical depiction. As with the previous wave of extremism, these tactics will constantly evolve, and when platforms close one loophole, extremists and malicious actors will find an alternative route. This cat-and-mouse game between defenders and malicious actors will be difficult to stop fully, but engaging human reviewers and academic experts in the content moderation workstream is important to limit its effects. This means greater investment in trust and safety teams, human moderators for controversial cases, and red-teaming during the design stage to ensure more adaptive, human-centric approaches to content moderation and online safety.
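One concrete way to keep humans in the loop is to automate only the clear-cut ends of a classifier's score distribution and route everything ambiguous to trained reviewers. The sketch below is a minimal, hypothetical triage function; the thresholds and scores are placeholders rather than values used by any particular platform or classifier.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    content_id: str
    action: str   # "remove", "allow", or "human_review"
    score: float  # classifier's estimated probability of violating policy


def triage(content_id: str, score: float,
           remove_above: float = 0.95, allow_below: float = 0.10) -> Decision:
    """Automate only the clear-cut cases; everything in between goes to people."""
    if score >= remove_above:
        return Decision(content_id, "remove", score)
    if score <= allow_below:
        return Decision(content_id, "allow", score)
    return Decision(content_id, "human_review", score)


# Hypothetical classifier scores for three items.
for cid, score in [("a", 0.99), ("b", 0.02), ("c", 0.55)]:
    print(triage(cid, score))
```

The design choice embodied here is that borderline, adversarial, or novel content is exactly where automated systems are weakest and where trained moderators and subject-matter experts add the most value.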


Ethical Considerations and Responsible AI Development

As AI continues to evolve, ethical considerations in its development and deployment become increasingly critical. Developers and researchers must prioritize the responsible and ethical use of AI technology. This includes considering the potential societal impacts of their creations and actively working to mitigate any negative consequences.

Encouraging interdisciplinary collaboration between technologists, ethicists, policymakers, and sociologists is essential to address the complex challenges posed by AI-driven extremism. Ethical guidelines should be integrated into the development process, ensuring that AI technologies are aligned with human rights principles and do not contribute to the proliferation of harmful ideologies.


 


As the use of GenAI in extremism and disinformation evolves, it is important to have an adaptive framework, or taxonomy, of these harms which allows researchers and trust and safety officials to bring order to the different ways in which these abuses manifest. Crucially, this taxonomy needs to be adaptive and continuously updated by a team of human experts, content moderators, and civil society advocates to keep it human-centric and to avoid mis-categorization and the human rights abuses that can come with such an approach. As the intersection of AI, extremism, and disinformation reshapes civil society, it is crucial to engage civil society, practitioners, and academia in this process. As we stand at the crossroads of technological innovation and its responsible use, we must strive to strike a balance that preserves the foundations of a healthy and informed society. In an era where individuals and small groups wield the power of industrial-scale propaganda, resilience against the tide of GenAI-driven extremism and disinformation demands serious engagement from policymakers, academia, industry, and civil society.

 




_____________________

 

 

Article based on talks given by Broderick McDonald at the Commonwealth Parliamentary Association's AI, Disinformation, and Parliament Conference and the OxGenAI Summit in 2023.




Broderick McDonald, Oxford and King's College London


 

 


 

 

 
