Social media weaponisation in elections: The UK, Sri Lanka, and New Zealand

In August 2020, I wrote in to a technical consultation in the United Kingdom looking at digital imprints in the context of election campaigns. As Chloe Smith MP, the then Minister of State for the Constitution and Devolution, noted in the Foreword to the consultation’s brief,

…there is growing concern about the transparency of the sources of political campaigning online, which is starting to have a negative impact on trust and confidence in our elections and democracy. The Government committed in its last manifesto to protect the integrity of our democracy. That is why this Government will refresh our election laws so that citizens are empowered to make informed decisions in relation to election material online. An important stride towards delivering on this pledge is introducing a regime for ‘digital imprints’ – the requirement for digital election material to explicitly show who is promoting it and on whose behalf. Imprints are already required for printed election material, so extending it to digital election material is a natural next step. It will strengthen public trust and ensure voters are informed about who is behind a campaign. This consultation outlines the rules for how digital imprints would operate. The proposals would involve the UK introducing some of the most comprehensive digital imprinting rules in the world.

I was a PhD candidate at the time, based in New Zealand. I didn’t believe the consultation could or would accommodate my input, since it was limited to “voters, social media and technology companies, political parties, prospective or elected representatives and civil society organisations throughout the United Kingdom”. I wrote in nevertheless to register what, in the Global South and Sri Lanka, was already evident: the weaponisation of social media to undermine electoral integrity, in ways far beyond campaigns and propaganda as defined or conceived of in any traditional sense.

2024 is a consequential election year for Sri Lanka, and all the points in my submission remain vectors exploitable by domestic and foreign actors interested in undermining our democracy and electoral integrity.

Two months after I penned that letter, New Zealand went to the polls in October 2020. I studied the social media campaigns on Facebook, Twitter, and YouTube at the time, and found no cause for concern. In October 2023, I studied New Zealand’s general election in far more detail as part of my ongoing research into influence operations, truth decay, and disinformation in the country. Everything in the letter to the UK consultation was realised in New Zealand’s last general election, including the instrumentalisation of Facebook ad tech for the micro-targeting of voters, dark campaigns, dark money, astroturfing, sock-puppets, synthetic media use (i.e., content based on generative AI), and major issues with campaign finance, regulatory, and legal frameworks that were completely outdated.

In short, I found that New Zealand is no better than Sri Lanka when it comes to institutional safeguards and official guardrails against contemporary, evolving threats to electoral integrity and democracy. This would be the case for many other countries too.

My August 2020 submission to the UK consultation can be distilled to ten points. I should stress that it was limited to digital imprints, and not the broader set of issues around social media’s deleterious impact on democracy, especially in the hands of malevolent actors.

  1. Difficulty in distinguishing paid and unpaid digital campaign material: The complex interplay between various actors (political entities, civil society, influencers, and media) makes it challenging to categorise content as paid or unpaid.
  2. Constant campaign on social media: Partisan content and propaganda are not limited to official campaign periods but are a daily occurrence on social media platforms. I wrote about constant campaigns more recently too, using agenda-setting theory to help communicate how powerful, partisan networks shape electoral outcomes.
  3. Accessibility vs. quality of content: While digital campaigning makes politics more accessible, the content can be problematic for democratic outcomes and debates, often drowning out critical issues and marginalising smaller parties or candidates.
  4. Third-party involvement: Campaign material produced or promoted by third parties (influencers, pseudonymous entities, marketing companies, etc.) dominates the landscape and should be included in regulatory oversight.
  5. Timely action by social media companies: Takedowns of problematic content often occur long after the election, highlighting the need for a time horizon for social media companies to act on reported content.
  6. Emerging challenges: The regulation should address new issues such as apps created by political parties or politicians and the shift of campaign material to encrypted instant messaging platforms.
  7. Imprints on instant messaging: Political campaign material designed for instant messaging should carry imprint information accessible within the apps themselves.
  8. Territoriality: Social media companies should catalogue material aimed at domestic constituencies that may be sponsored by foreign entities.
  9. Interplay between different media: Regulation of digital campaigns must consider the complex interplay between airwaves, print, social media, and private exchanges over instant messaging.
  10. Weaponisation of social media polls: The proposed regulations should address the weaponisation of social media polls, which can undermine the integrity of the electoral process.

These are even more pressing concerns in 2024. Neither Sri Lanka nor New Zealand – two countries and contexts I now have subject-domain expertise in – has even a scaffolding of safeguards to address what are risks and threats today, let alone what generative AI alone is contributing to, and complicating, at pace.

As far back as 2020, in what I studied for my doctoral thesis, I noted how the information environment during that year’s general election in Sri Lanka was characterised by a wide array of problematic content, including propaganda disguised as voter education, promotional material for candidates and parties in the form of photo and video banners, proxy pages, and campaigns on gossip and meme pages. I also flagged,

  1. Mock polls pitting candidates against each other, overt or thinly veiled propaganda on terrestrial TV, and ongoing campaign spending on proxy, gossip, and meme pages further contributed to the spread of misleading information.
  2. The use of re-featured and organic posts not captured in Facebook’s Ad Library allowed for the promotion of partisan content outside of the platform’s oversight mechanisms, while the engineering of virality and greater reach over proxy and gossip pages aimed to mislead voters.
  3. The presence of borderline, partisan, or propagandistic content on gossip and meme pages raised questions about the need for remedial measures.
  4. Even when Facebook rejected and took down campaigns, the content had already been seen by many users before its removal.

In 2023, I studied how what had been engineered in Sri Lanka and, more broadly, the Global South was perfected for, and realised in, New Zealand’s general election campaigns.

My submission to the UK’s technical consultation can be read as a PDF.

The submission references recommendations penned for, and sent to, Sri Lanka’s Elections Commission after the Presidential Election, as far back as 2019. While Meta has since addressed some of what’s mentioned in that letter, the larger, structural problems prevail, and the instrumentalisation of its products and platforms in ways that undermine electoral integrity has gotten far worse. This applies to both Sri Lanka and New Zealand.

In the UK, the 2020 consultation has evolved into statutory guidance on digital imprints, issued by the Electoral Commission. In the context of generative AI’s impact on elections, and, for example, generative AI voice-based campaigns now banned in the United States, it’s still behind the curve. However, it’s more comprehensive in its definitions and grasp of contemporary risks and threats to electoral integrity than any regulatory or legislative framework I’m aware of in Sri Lanka or New Zealand.

And that’s concerning.

###

First published on LinkedIn.