Generative AI, and Sri Lanka’s 2024 presidential election: Insights from India’s general election

One question, in relation to Sri Lanka’s upcoming presidential election, followed by a general election, has been put to me more than any other since January this year: what role will generative AI play?

It is a question I first tackled in a lecture delivered in Sinhala as far back as July 2023, which dealt with how synthetic media would play out in Sri Lanka’s enduring democratic deficit, and post-aragalaya socio-political dynamics. The passage of the Online Safety Act (OSA) in early 2024 significantly complicated matters.

In March, I shared deepfake audio samples with some leading figures from Sri Lanka’s business, media, and legal community. It went as expected.

One rated the believability of a sample I shared as 8/10, and was horrified, because the content appeared to come from a person they had known intimately for decades. In other instances, content which appeared to be from leading political, business, and media figures helped communicate how generative content in Sinhala, Tamil, and in forms ranging from text to audio could easily shape, and shift the perceptions, and reactions of specific audiences, communities, and markets.

Based on research I continue to do in New Zealand on generative AI’s impact on social cohesion, and democracy, I ran through some sector, industry, institution, and election specific scenarios showing how both shallowfake, and deepfake content in audio form alone, with WhatsApp as a primary vector for distribution, could be used for voter enragement, distraction, suppression, and targeted harassment campaigns. Even electoral oversight bodies in mature democracies like Australia have admitted they cannot deal with the resulting risk, and threat surfaces.

In May, I looked in some detail at the distribution of a synthetic video framing the National People’s Power or Jathika Jana Balawegaya party. I noted that “The production, and uncritical promotion of shallowfakes in the lead up to Sri Lanka’s consequential Presidential Election later this year will very likely worsen, at pace.” In a section dealing with the further weaponisation of WhatsApp in Sri Lanka, I went on to say that “…in a high media consumption, low media literacy, highly divided society such as Sri Lanka, shallowfakes would constitute a greater threat to democracy, and electoral integrity, than what continues to be a higher bar around the creation of deepfakes.”

I combined this with a theory called the ‘devil shift’, where actors perceive their opponents as more powerful, and malevolent than they truly are. In a lecture in Dhaka to students of Pathshala South Asian Media Academy, I said this was especially a risk in hyper-partisan political campaigns featuring synthetic media in deeply divided societies, and contexts marked by a significant democratic deficit.

Disinformation foundations remain very strong in a country defined by a truth decay so acute, it is no longer possible to easily or quickly distinguish between satire, political speech, and journalism. This aids the further development of disinformation campaigns that have already resulted in the UN having to issue an unprecedented clarification around a fake report, attributed to UNHCR, on the upcoming presidential election.

This grounded, context, and country specific appreciation of how generative AI may come to be (ab)used in Sri Lanka’s presidential election campaign is borne out by what happened – or more accurately, didn’t happen – in India’s recently concluded general election. Nilesh Christopher’s excellent article in The Atlantic (sadly pay-walled) observes that while deepfakes were a significant concern in the lead up to the election, they were not as prevalent or destructive as initially feared. Instead, AI was primarily used by politicians and campaigns to disseminate their messages, create emotional appeals, and persuade voters with hyper-personalised content.

For those without a subscription, Christopher makes five key points around the use of gen-AI, and synthetic media in the Indian election campaign.

  • Voice cloning: Arvind Kejriwal, a top candidate, had his voice cloned to deliver a message to his supporters while he was in jail. The AI-generated voice was convincing, although it was not Kejriwal’s actual voice.
  • Satirical content: Modi’s supporters mocked Kejriwal by sharing an AI-generated montage of images showing him singing a melancholic Hindi song while strumming a guitar in a prison cell.
  • Resurrecting deceased politicians: The team of a Congress Party candidate used AI to resurrect his deceased father, a former member of Parliament, in a campaign video where he endorsed his son as his “rightful heir.”
  • Memes and trolling: Official social media accounts of political parties shared numerous AI-augmented posts for jest, trolling, or satire. For example, Modi retweeted an obviously AI-created video of himself dancing to a Bollywood tune.
  • Personalised AI robocalls: AI-generated calls enabled politicians like Modi to endorse candidates in languages they don’t speak. Local leaders also used AI to deliver personalised campaign calls in regional dialects, addressing voters by name. Over 50 million AI-generated calls were estimated to have been made in the months leading up to the election.

Of these, it is likely that cloned voices, sent over WhatsApp, will emerge as a phenomenon the closer we get to election day in Sri Lanka (which at the time of writing, hasn’t yet been set). Memes, and trolling already defined the 2019 presidential election, and 2020’s general election – which I studied for my PhD. Gen-AI will make the production of these even more prolific, and for some, profitable. Robocalling has never been a defining feature of Sri Lankan elections – we use our smartphones to engage with political content on social media, rather than to listen to politicians call us. In a country so deeply divided, the risk of robocalls at scale is also that you lose votes, by angering constituencies who don’t want to hear from politicians any more than they already do.

It’s possible that, along the lines of the NPP video studied earlier this year, satirical content featuring shallowfakes generated through smartphone based apps (like the Viggle videos in India), the free tiers of generative AI web platforms, or trial packages (which allow all features to be used for free for a limited time) will be much more persistent, and prevalent during the campaign.

Given what we saw during the last presidential campaign around augmented reality apps, and candidate specific apps, it is highly likely that authorised AI-generated content from political parties will feature satirical, entertaining, emotionally engaging content to make the candidates more relatable, and also act as reputational laundromats. These may or may not be watermarked.

On 26 November 2019, I wrote to Sri Lanka’s Elections Commission with a list of recommendations based on a deep study of disinformation during that year’s consequential presidential election, which was the most technologically sophisticated ever. Some of the recommendations are even more valid in today’s context, and especially under the Personal Data Protection Act (PDPA). 2024’s campaign will invariably eclipse 2019’s campaign technology through generative AI use, and synthetic media. Given that the Elections Commission has made no investments I can see to tackle what were major issues in 2019, it’s likely they will be completely unable to respond meaningfully to what the coming months bring.

I’ve noted that “On WhatsApp, it’s near impossible to run content through Google Reverse Image search – which is much easier on desktop or laptop. This means that even those who may be a tad sceptical, still can’t easily check for authenticity. Shallowfakes can be ‘satisficing’, as evidenced by the two tweets above, and responses to them I’ve not shared – but clearly suggest this video was taken at face value.” This makes it very likely that synthetic media that’s not very sophisticated – like the video featuring a shallowfake of Trump endorsing the NPP – will be made specifically for, and very widely circulated via WhatsApp. The sheer volume of content repeating partisan messages, like what was done in India with the BJP’s 5 million accounts alone, may lead people to believe and trust whatever is presented, independent of veracity, and verifiability.

Constituent engagement strategies that enhance reflexive sharing – which in a campaign can mean the viral distribution of, and engagement with content far more than the critical engagement with it – will play a key role in the strategic adoption, and adaptation of generative AI.

Like the fake UNHCR report, deception – as a cornerstone of this strategy – will be vastly enhanced by generative AI’s ability to create synthetic media at an industrial scale, largely for free or at very little cost. Instrumentalising Sri Lanka’s very poor media literacy, coupled with the high consumption of political content especially in visual forms (from posters on walls, to late night talk shows on TV, and memes on Facebook), generative AI will be used to distract, decry, deny, and deceive through sophisticated influence campaigns that are active all the time, every day.

In what I’ve often called a ‘constant campaign’, waiting for, and only studying, election campaigns to capture the pulse of the electorate completely erases the impact sustained contact campaigns have on Sri Lanka’s voting public – who are being clandestinely, and constantly nudged to be moved by certain topics, issues, and things, and to consider others less important, or erased entirely. Some readers will find resonance in Chomsky’s theory of media, and manufactured consent, which in 2018 he revised to note how social media was a double-edged sword: “Sometimes, they are used for constructive purposes. But they have also become major forces for undermining democracy.”

If, as in India’s general election, unemployment, existential concerns, and youth-specific issues matter in the selection of Sri Lanka’s next President – in what’s also going to be an electorate with many first time voters who experienced 2022’s aragalaya – generative AI’s potential for synthetic media created to directly address specific demographics, as well as the production of content that follows Steve Bannon’s infamous advice to ‘flood the zone with shit’, will both feature in the campaign.

One seeks to persuade, reinforcing ideologies, attitudes, perceptions, and beliefs. The other seeks to disorient, and amplify apathy. Both will be simultaneously present on social media surfaces.

The template for all this isn’t new in Sri Lanka. It was present in, and architected by the SLPP, and Rajapaksas in the lead up to the 2019 election, and in ways that were unprecedented. The foundations will be built on with synthetic media, and research which clearly suggests that generative AI augments persuasion through pervasive content production.

It’s going to be a very interesting, and challenging time for those who are keen to strengthen Sri Lanka’s electoral integrity in the weeks, and months ahead.

Photograph in header courtesy New Scientist magazine.