The Christchurch massacre as ‘false flag’: New Zealand’s flailing, and failing, attempts to address online harms

In New Zealand, the Christchurch mosque massacre perpetrator’s livestream and his screed are classified as objectionable. This means it is illegal for anyone in the country to access, engage with, host, or promote this material. Additional material directly related to the livestream or inspired by the Christchurch killer’s screed – including another terrorist’s screed and a computer game – has also been classified as objectionable.

The information environment in 2019 was a world apart from contemporary domestic and global digital theatres of terror. For starters, Twitter at the time was defined by a prosocial, global solidarity with the victims of the massacre and the people of New Zealand. After Musk’s acquisition, the owner posts, and the platform hosts, the same white supremacist violent extremism that fuelled the Christchurch killer.

In what’s a growing problem, this now includes the production and discoverability of online content framing the aftermath of the March 2019 massacre that isn’t classified as objectionable.

One example is a video, shot from inside the Linwood mosque soon after the massacre, showing the carnage inside. This mobile phone footage was clearly filmed by one of the first civilian responders on the scene, and some segments aired on British mainstream media in 2019. Much of the video isn’t fit for broadcast given what’s captured and depicted – which, out of respect for the kith and kin of victims, I won’t describe in detail. It’s not hard to imagine.

Context is key. British mainstream media aired this footage at the time to show the carnage left behind by the killer, as recorded by an eye-witness. It wasn’t intended to glorify the act or amplify the killer’s violent extremism and white supremacy. However, the full video is now presented on Twitter with a completely different intent and motivation.

The screenshot above is from a tweet published in late April by one of New Zealand’s leading far-right, white supremacist platforms, which this year was tellingly invited by the Kremlin to join a gathering of the world’s leading Putin apologists held in Moscow. This video is now used to promote the conspiracy that the 2019 Christchurch mosque massacre was a ‘false flag’ – the same trope Alex Jones mainstreamed in the US to suggest school shootings (including Sandy Hook) and other high-casualty terrorist attacks were staged. Variants of the video now feature a voice-over and soundtrack by an unidentified male. The narration suggests that each victim is a ‘crisis actor’, that the blood is fake, and that everything inside the mosque is essentially a film set.

It is hard to communicate the violence of this, including the appropriation of an eye-witness account to further a conspiracy that seeks to deny the incident ever happened – which is to say, to claim that those killed are somehow still alive.

A reply to this tweet, amongst 26 posted to date, noted: “There are so many of these videos floating around out there, that more than prove that the whole thing was nothing but very b-grade theatre.” The video embedded in the tweet has been viewed over 9,000 times. The tweet was retweeted 47 times, liked 44 times, and quote tweeted 7 times. Accounts that engaged with the original include some of the worst card-carrying neo-Nazis, white supremacists, and members of the far-right in New Zealand, all of whom now maintain very active public Twitter accounts.

Earlier this week, a version of the same video – without the narration – was posted to another Twitter account. This account had also replied to the tweet posted in April, noting the video was a “false flag rehearsal practice”. In the tweet posted earlier this week, the account again called the massacre a “B grade production” – presented as the reason the NZ Classification Office banned the original livestream – and claimed the video in the tweet proved it was a “dress rehearsal complete with fake blood”, with “crisis actors on their phone”, intended to defraud “the NZ public”.

A study of the tweets published on this account clearly links it to white supremacism and violent extremism – with links in the bio to accounts on Gab featuring far more violative content and commentary. A large subset of the tweets from this account I studied revealed a pervasive, and persuasive, presentation of scepticism, anger, and derision directed towards mainstream institutions, perceived elites, and various marginalised groups including the GLBTIQA+ community and immigrants. A distrust of official narratives results in the presentation of conspiratorial thinking, with allegations of staged events, cover-ups, and (Jewish) cabals secretly controlling world affairs. This cynicism is compounded by performative, strident outrage over highly emotive issues like child abuse, government overreach, and a perceived erosion of freedoms, often expressed through insults, mockery, and sarcasm aimed at liberal views, political correctness, transgender identities, and, tellingly, Muslims. In fact, prejudice against Muslims was a defining discursive signature, portraying them as a threat, inherently criminal, and dangerous to Western civilisation. White Christian nationalism was both explicit and implicit in the tweets, promoting a belief that ‘traditional values’ (in the West) are under attack through immigration, which is associated with liberal degeneracy.

The eclectic mix of far-right ideas, mainstreaming of violent extremist narratives, white supremacy, institutional distrust, and transnational references in this account’s tweets exemplifies the fluid, hybridised nature of the mixed, unclear, and unstable violent extremism (MUUVE) threat described in the Institute for Strategic Dialogue (ISD) Policy Paper, Mainstreamed Extremism and the Future of Prevention.

MUUVE’s signature in New Zealand’s anti-vaxx, anti-government, anti-establishment information ecologies – through community and network bridging dynamics I’ve studied, which have worsened at pace since 2023 – is profoundly worrying, and under-appreciated. It is connected to, but distinct from, the simultaneous and significant increase in accelerationist and Terrorgram material in the country’s disinformation media ecologies.

Under New Zealand’s current government, there’s been a worrying jettisoning of critical architectures, safety nets, and official capabilities to address online harms – in addition to more distressing measures to cut and curtail March 2019 victim support. What these cuts are contributing to is now apparent. A severely hamstrung Digital Safety Team at New Zealand’s Department of Internal Affairs (DIA) shows an inability or unwillingness to meaningfully investigate, and appreciate, the context framing this content’s presentation and its ‘false flag’ farming.

In late April, and again earlier this week – as a duty of care to New Zealand’s general public, given what I study – I reported this violative material to DIA. The response was telling. While underscoring that the manipulated video was extremely offensive, DIA said that offensiveness alone isn’t grounds for determining the material to be objectionable. It also noted the original eye-witness video has been classified R18 “because children and teens are likely to find it disturbing and upsetting to such an extent that it is likely to have a negative impact on their mental health and wellbeing”.

In part, I agree with DIA. The bar for the classification of material as objectionable has been, and must remain, very high – as it always has been in New Zealand. Where I disagree is with DIA’s telling silence, in their response to me, around what can and should be done to address the seeding and spread of the manipulated versions of the video on Twitter (which is part of the Christchurch Call Foundation) and other platforms like Telegram (where I’ve also studied its viral spread).

Circulating this type of disturbing imagery can be extremely traumatic and upsetting, especially for the victims, their families, and the wider community impacted by this horrific attack. It both adds to, and forces them to relive, trauma. The ‘false flag’ and ‘crisis actor’ tropes – exactly like those Alex Jones peddled in the US – are demonstrably untrue. Spreading and weaponising these lies is a violent affront to victims, and downplays the reality of the racist, Islamophobic motives behind the mosque massacre, espoused by the killer to date. It’s an attempt by white supremacists to rewrite history.

The unchecked production and proliferation of this content enables the spread of radicalising far-right propaganda. This is happening in the broader, offline context of a Coronial inquest into the Christchurch attack. Since the inquest began, there’s been an uptick in online content and commentary (re)presenting the massacre as a ‘false flag’. I would argue that the spread of this content (including through algorithmic amplification and recommendation) causes real harm – it re-traumatises victims, provides a platform for extremist networks, opens potential pathways to radicalisation, may enable recruitment, undermines social cohesion, and shows profound disrespect for those murdered.

Moderating this material is not about censorship, but about applying what on Twitter are established platform policies to “remove or reduce the visibility of Violent Content to ensure the safety of our users and prevent the normalisation or glorification of violent actions”. More fundamentally, it is about basic human decency, and a principled refusal to let terrorists and their apologists exploit murder and violence to advance a hateful agenda – one which undermines the freedom of all citizens in New Zealand, including Muslims, to live in safety and dignity.

DIA’s response is not guided by any of this. The very legalistic, rigid focus on classification fails to engage substantively with the broader issues raised by evidence-based research – in New Zealand, and in many other countries – on how offline harms are informed by the online spread of extremist content. This rigid fidelity to classification, and the study of just a single tweet, also completely ignores the violent extremist ideology defining the account(s) which promoted the video, as noted above.

To wit, DIA appears to be unaware of, or uninterested in, the broader ecosystem of disinformation and extremist narratives within New Zealand, in which these videos are being shared and weaponised. While the specific video concerned may not meet the legal threshold for restriction as objectionable content, this emphasis sidesteps what my research clearly brings out: the video’s instrumentalisation in spreading pernicious conspiracy theories, and in re-traumatising victims at the same time as the Coronial inquest, renders it deeply problematic. The desensitising effects of the casual, mocking discourse surrounding this disturbing content seemingly escape DIA’s gaze. Moreover, foregrounding concerns over free expression without substantively engaging with the need to balance this against the harms of allowing content produced by violent extremists to proliferate unchecked – normalising hate, revising history, denying factual accounts, eroding social cohesion, fostering radicalisation, and undermining democracy – is another problematic stance.

By treating the video in isolation, DIA’s response neglects what I would stress as a necessary emphasis on the fluid, transnational nature of the online extremist ecosystem, in which harmful narratives propagate across borders and platforms in ways ill-suited to siloed, passive monitoring of individual pieces of content, or to their study in a piecemeal manner.

In sum, DIA’s response – though I was grateful to receive it – falls short of the range of proactive measures, including the intermediary interventions it has the mandate to demand, necessary to comprehensively constrain and counter the spread and impact of content informed by violent extremism, even where classification as objectionable isn’t warranted.

From MUUVE’s imbrication to white supremacy’s invocation and accelerationism’s imprint, New Zealand’s truth decay and erosion of information integrity show absolutely no signs of abating. The increasing rate of production, enhanced reach, and sustained engagement with material like the videos featured above is deeply problematic, and must be studied in the context of, inter alia, New Zealand’s social cohesion in tatters, vaulting racism and homophobia, and unprecedented violence targeting MPs.

However, what seems blindingly obvious seemingly escapes New Zealand’s lead agency dealing with online harms. Perhaps it is a sign of the times, and DIA is doing only what it can – given significant staff cuts, and what’s likely a resulting institutional paralysis around more meaningful responses to online harms. I can appreciate this to an extent, but fear citizens – especially those who are already marginalised and vulnerable – are increasingly exposed to a troubling, transnational tapestry of toxicity officials can’t, and don’t want to, recognise.

March 2019 remains a grim reminder of where all this leads.