Artificial intelligence and holocaust denial or revisionism

Read over the weekend an excellent new UNESCO report, AI and the Holocaust: rewriting history? The impact of artificial intelligence on understanding the Holocaust.

Just three days ago, I flagged journalist David Gilbert’s article in Wired magazine, looking at neo-Nazi adoption and adaptation of (generative) AI, based on research by The Middle East Media Research Institute (MEMRI). I ended by noting “Everything Gilbert noted in the Wired magazine article based on MEMRI’s research, that in turn maps on to research by ADL, and ISD is present in Aotearoa New Zealand. The realisation of this, however, shows significant variance even within government, even as adversarial innovation threatening the country’s democracy grows at pace.”

The UNESCO report, which is well worth reading in full, notes,

The threats associated with AI in safeguarding the record of the Holocaust are manifold, including the potential for manipulation by malicious actors, the introduction of falsehoods or dissemination of biased information, and the gradual erosion of public trust in authentic records. This paper provides a warning of what is at stake for the preservation of historical truth in a digital era increasingly mediated by AI.

This report highlights five major concerns:

  1. AI Automated Content May Invent Facts About the Holocaust: AI models have produced misleading or false narratives about the Holocaust. Data voids and biases have led to “hallucinations” in generative AI systems, producing incorrect or invented content that never occurred. Without AI literacy and research skills, users may not know how to verify AI-produced texts or recognise the unreliability of the data.
  2. Falsifying Historical Evidence: Deepfake Technology: Deepfake technology has the potential to manipulate audio and video to fabricate Holocaust-related content. There is a need for mechanisms to prevent the misuse of AI in purposefully creating fake “evidence” that undermines the veracity of the established historical record of the Holocaust and spreads hate speech. Deepfakes of celebrities have been used to spread Nazi ideology or to simulate conversations with Nazi leaders, including Adolf Hitler.
  3. AI Models Can Be Manipulated to Spread Hate Speech: Targeted campaigns by violent extremist online groups can exploit AI flaws to promote hate speech and antisemitic content about the Holocaust. Chatbots and search engines have been hacked or manipulated by bad actors to spread Nazi ideology.
  4. Algorithmic Bias Can Spread Holocaust Denial: Biased data sets have led some search engines and AI chatbots to downplay Holocaust facts or promote far-right content, including Holocaust denial.
  5. Oversimplifying History: AI’s tendency to focus on the most well-known aspects of the Holocaust oversimplifies its complexity. The omission of lesser-known episodes and events in the history of the Holocaust reinforces stereotypical representations of the Holocaust and limits our understanding of a complex past which affected people in every country in Europe and in North Africa, and whose legacy continues to be felt worldwide.

Based just on 2024’s study of, and related analysis around, generative AI’s cancerous signatures in Aotearoa New Zealand’s disinformation ecologies, the five major concerns raised by UNESCO regarding AI’s threats to safeguarding the record of the Holocaust are reflected in several ways. These harms are promoted, and engaged with, by domestic actors belonging to neo-Nazi, far-right, anti-government, anti-establishment networks now inextricably entwined with what were, in 2021, far more clearly defined and separate anti-vaxx networks. On Telegram, there’s a free flow of foreign content that inspires domestic production, with domestic discourse now also informing transnational neo-Nazi networks.

  • AI inventing facts about the Holocaust: The Disinformation Project’s (TDP’s) research highlights how Gab’s “Based AI” and other chatbots can generate false or misleading narratives about historical events, including the Holocaust. For example, the Hitler chatbot “Uncle A” produces antisemitic content and alternative historical narratives that could be mistaken for facts by users lacking proper context or critical thinking skills.
  • Falsifying historical evidence with deepfake technology: Our research mentions a tweet thread discussing AI-generated renderings of Hitler’s speeches into English using his voice. This demonstrates the potential for deepfake technology to create false audio evidence related to Holocaust history.
  • AI models manipulated to spread hate speech: The Gab AI chatbots, particularly those representing extremist figures or ideologies, show how AI can be exploited to generate and spread hate speech. TDP’s research flags several examples of chatbots promoting far-right, antisemitic, and white supremacist views.
  • Algorithmic bias spreading Holocaust denial: While not explicitly studied in relation to Holocaust revisionism or denial, TDP’s research shows how Gab’s AI models are built with inherent biases that promote far-right ideologies. This suggests a high likelihood of these models downplaying or denying Holocaust facts if queried on the topic.
  • Oversimplifying history: Our research doesn’t directly address this concern, but the nature of the conversations studied using far-right/neo-Nazi chatbots (e.g., simplified representations of historical figures like Hitler) suggests a tendency within Aotearoa New Zealand’s far-right, and neo-Nazi networks to use generative AI technologies to reduce complex historical events and figures to oversimplified caricatures. Fidelity to history, and established, shared facts, are not concerns for far-right users or their preferred platform(s), which in turn gives rise to technology-enabled, and AI-amplified, harms around the Holocaust that play into, and are part of, entrenched antisemitic discourse in the disinformation landscapes studied.

TDP research/analysis over just 2024 also highlights additional concerns not explicitly mentioned in the UNESCO report, including, but not limited to:

  • The normalisation of extremist ideologies through casual interactions with AI chatbots: The TDP research highlights how Gab’s chatbots, with names like “Far-Right John,” “Adolf Hitler,” and “America First Zoomer,” present extremist ideologies in a casual, conversational format. This normalisation is particularly concerning as it makes engaging with extremist ideas seem commonplace, acceptable, and matter-of-fact. For instance, we conducted a chat with “The Great Replacement Views” chatbot, which easily, and casually discussed deeply racist, white supremacist, and xenophobic ideas. The chatbot’s response to questions about “The Great Replacement” and preventing immigration uses language that frames these extremist views as logical, urgent, and necessary, potentially normalising the same ideology that radicalised mass shooters including the terrorist behind the Christchurch mosque massacre for users who interact with it regularly.
  • The potential for AI-generated content to create pathways for radicalisation: The TDP research illustrates how Gab AI’s platform could serve as a radicalisation pathway. It notes that the existence of chatbots representing a wide range of political extremes, without a clear educational or critical framework, may inadvertently provide pathways for radicalisation. Situation reports describe how a link to a Gab AI conversation was gleefully shared, and widely engaged with on a domestic far-right network’s Telegram channel. This sharing ensures that Gab AI becomes known and potentially more widely used within far-right, WSVE (White Supremacist Violent Extremism) ecologies on Telegram and beyond. We explicitly state that “Gab AI models/chatbots are foundational harms, the output from which can be used to promote, perfect, and produce WSVE, antisemitic, and other VE harms, frames, narratives, and discourse.”
  • The challenge of regulating or monitoring AI systems developed outside mainstream ethical frameworks: The TDP research emphasises that Gab AI operates entirely outside the purview of domestic and international regulatory bodies, AI ethics frameworks, and initiatives like the Christchurch Call. This lack of oversight is explicitly noted: “Worth noting that Gab AI lies completely outside the remit of any domestic, and international entity/agency/TVEC guardrails/AI ethics framework.” Our research contrasts this with the approach of mainstream AI companies like OpenAI and Anthropic, which are engaging with initiatives like the Christchurch Call to address concerns about terrorist and violent extremist content (TVEC). The stark difference in approach highlights the significant challenge posed by AI systems developed without regard for established ethical guidelines or regulatory oversight.

The research I lead at The Disinformation Project shows that generative AI’s adversarial, and malign, adaptation is clearly evident in the study of New Zealand’s disinformation landscapes. UNESCO’s report ends with very clear guidelines for policymakers, educators, and AI platforms that can inform the work on artificial intelligence led by (the wonderful) Dame Juliet Gerrard, New Zealand’s Chief Science Advisor.

I must stress though that nothing I’ve seen or studied from any source suggests the country’s grasp of what’s going on, and wrong in the world of (generative) AI, truth decay, and the growth of online harms is keeping pace with how quickly AI is evolving, and how it is adapted by neo-Nazis, and the far-right to produce, perpetuate, and promote harms. Complicating further the country’s flailing, and failing attempts to address online harms, and militant accelerationism’s daily imprint in New Zealand’s disinformation landscapes, UNESCO’s report flags a specific set of challenges related to AI that are present in the country as well.

Policymakers, academics, civil society, educators, entities like the Auckland Museum, and other institutions must address the threats, and risks these challenges pose to New Zealand by considering, inter alia, what UNESCO’s report recommends as measures to use AI to enhance understanding about the Holocaust.

Cover image courtesy Wired magazine, Gab’s Racist AI Chatbots Have Been Instructed to Deny the Holocaust.