From Generative AI to General Elections:

The Risks and Realities of GAI

by Francesca Elli

Generative AI, general election, generative AI, general elections. This drumbeat encapsulates the prevailing narrative that the latest technology, generative artificial intelligence (GAI), poses new and undeniable risks to electoral integrity. Indeed, the European Union (EU) is deeply concerned about GAI’s potential to disrupt elections by spreading disinformation, manipulating voter opinions, and undermining the democratic process through sophisticated, automated deepfake and misinformation campaigns. Yet, while GAI does pose risks, they are neither new nor undeniable. The recent Guidelines for providers of Very Large Online Platforms and Very Large Online Search Engines on the mitigation of systemic risks for electoral processes pursuant to the Digital Services Act (DSA) reinforce the oversimplified narrative that online platforms and GAI will negatively affect democratic elections by creating new risks.

This blog post will argue that, while it is necessary to protect democracies and upcoming elections from disinformation, the DSA’s Guidelines overfocus on GAI as a vector of electoral disinformation, depicting it as a new, game-changing disinformation machine and overestimating its effects. GAI will make the production of disinformation easier, but this may be less impactful than anticipated. To make this case, the Guidelines will first be presented and analyzed to understand the risks disinformation poses to electoral integrity. Secondly, this blog will discuss how 1) there is no evidential link between the use of GAI and changes in electoral outcomes, 2) an increase in the production and dissemination of GAI content does not lead to an increase in the consumption of disinformation, and 3) GAI’s distinctive ability to create personalized and tailored content is less persuasive when applied to individuals’ political beliefs. Thirdly, alternative approaches that the Commission should consider to mitigate the spread of disinformation will be presented.

The DSA is one of the EU’s latest regulations aimed at creating a safer digital space, mitigating risks and ensuring the protection of users’ fundamental rights. The DSA primarily concerns online intermediaries and platforms and applies specific rules to very large online platforms and search engines (VLOPs and VLOSEs) [1]. VLOPs and VLOSEs must carry out risk assessments and implement reasonable, proportionate and effective measures to address and mitigate the systemic risks they identify. These risks include, but are not limited to, risks to civic discourse, electoral processes, and public security.

For the EU, harms to electoral integrity such as disinformation constitute not only systemic risks, but high-priority ones too. In 2024, around 50 elections will take place worldwide, including those of the European Parliament, with a fourth of the global population going to the polls. Two billion voters mean billions of political advertisements and pieces of content online, making this an undeniably risky year for our information ecosystem.

The Guidelines

To support VLOPs’ and VLOSEs’ electoral systemic risk assessments, and to ensure that they comply with their obligation to mitigate specific risks linked to electoral processes, the European Commission published Guidelines in March 2024 addressing the pervasive impact of disinformation on democratic processes worldwide. Disinformation poses risks to political elections as it has the potential to polarize public opinion, promote hate speech and, ultimately, undermine democracies and reduce trust in democratic processes. Two thirds of EU citizens report coming across fake news at least once a week. The fear is that disinformation will sway voting outcomes and spread, threatening individuals’ ability to determine the facts that shape their views and distorting public opinion. This becomes problematic if we think of democracy as a system of “government by public opinion.” Mitigating disinformation also follows recital 84 of the DSA, according to which VLOPs/VLOSEs “should also focus on the information, which is not illegal, but contributes to the systemic risks identified in this Regulation,” including “misleading or deceptive content, including disinformation.”

The Guidelines state that “(a) wide range of phenomena involving online platforms and search engines give rise to a heightened risk to election integrity. These include, but are not limited to the proliferation of illegal hate speech online, threats linked to foreign information manipulation and interference as well as the wider phenomenon of disinformation, the spread of (violent) extremist content and such with the intent to radicalize people, as well as the spread of content generated through new technologies such as generative Artificial Intelligence.” To address these risks, the Guidelines propose several measures, such as reinforcing internal processes to ensure that mitigation strategies are appropriately designed for local, context-specific risks. The Guidelines also include measures to facilitate access to official information about the electoral process, such as details on voting procedures and locations, and to provide users with more contextual information about the content and accounts they engage with. Such information should be provided by VLOPs and VLOSEs by means of information panels, banners, pop-ups, search interventions, links to the websites of electoral authorities, specific election information tabs or a dedicated part of the platform. Furthermore, the Guidelines recommend clearly labeling political advertising, demonetizing disinformation content, and providing relevant third parties with free data access for research on electoral processes and public scrutiny, in line with Article 40 of the DSA [2].
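To make the “search intervention” idea concrete, below is a minimal sketch of how an election information panel could be triggered. It is purely illustrative and not drawn from the Guidelines themselves: the keyword list, the matching logic and the authority URL are hypothetical placeholders.

```python
# Illustrative sketch of a search intervention: when a query looks
# election-related, attach a panel pointing users to official
# electoral-authority information. Keywords and URLs are hypothetical.

ELECTION_KEYWORDS = {"election", "vote", "voting", "ballot", "polling", "referendum"}

# Hypothetical mapping from region to its electoral authority's website.
ELECTORAL_AUTHORITY_URLS = {
    "EU": "https://elections.europa.eu",
}

def election_info_banner(query: str, region: str = "EU") -> dict | None:
    """Return an information-panel payload if the query looks election-related."""
    tokens = set(query.lower().split())
    if tokens & ELECTION_KEYWORDS:
        return {
            "type": "information_panel",
            "message": "Find official information on voting procedures and locations.",
            "link": ELECTORAL_AUTHORITY_URLS.get(region),
        }
    return None  # no intervention for unrelated queries

if __name__ == "__main__":
    print(election_info_banner("where is my polling station"))
```

A real system would of course need multilingual keyword coverage and per-member-state authority links; the sketch only shows the shape of the intervention.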

The DSA’s Guidelines also focus on a specific technology, GAI, with 4 of their 26 pages dedicated solely to mitigation measures linked to generative AI. They state that “generative AI can be abused to mislead voters or to manipulate electoral processes by creating and disseminating inauthentic, biased, misleading synthetic content (including text, audio, still images and video) regarding political actors, false depiction of events, election polls, contexts or narratives. Generative AI systems can also produce incorrect, incoherent, or fabricated information, so called “hallucinations”, that misrepresent reality, and which can potentially mislead voters.” To tackle these challenges, the Guidelines suggest limiting the amplification of deceptive, false or misleading AI-generated content in the context of elections through VLOPs’ recommender systems. The Guidelines also recommend that providers of VLOPs and VLOSEs monitor and clearly label the use of GAI in advertisements, and that they adapt content moderation processes and algorithmic systems so as to detect AI-generated or manipulated content.
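In a highly simplified form, the label-and-demote pipeline the Guidelines gesture at could look like the following sketch. Everything in it, the disclosure flag, the detector score, the threshold and the demotion factor, is a hypothetical stand-in: real detection of AI-generated media (for instance via provenance metadata or watermark classifiers) is far noisier than this toy suggests.

```python
# Toy pipeline: flag likely AI-generated content, attach a label, and
# reduce its amplification in a recommender system. All values hypothetical.

from dataclasses import dataclass

@dataclass
class ContentItem:
    item_id: str
    ai_disclosure: bool     # creator-supplied "made with AI" flag
    detector_score: float   # hypothetical classifier output in [0, 1]
    rank_score: float       # base score from the recommender system

AI_DETECTION_THRESHOLD = 0.9  # hypothetical; tuning this is the hard part
DEMOTION_FACTOR = 0.5         # hypothetical down-ranking for labeled items
AI_LABEL = "AI-generated or manipulated content"

def moderate(item: ContentItem) -> tuple[str | None, float]:
    """Return (label, adjusted rank score) for one piece of content."""
    likely_ai = item.ai_disclosure or item.detector_score >= AI_DETECTION_THRESHOLD
    if likely_ai:
        # Label the item and limit its amplification, as the Guidelines suggest.
        return AI_LABEL, item.rank_score * DEMOTION_FACTOR
    return None, item.rank_score

if __name__ == "__main__":
    video = ContentItem("v1", ai_disclosure=False, detector_score=0.97, rank_score=1.0)
    print(moderate(video))  # -> ('AI-generated or manipulated content', 0.5)
```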

Analysis 

The DSA’s Electoral Guidelines offer a robust initial response to mitigating possible negative effects on civic discourse and electoral processes. Experts are correct in saying that GAI will amplify the spread and improve the quality of disinformation campaigns, but upon closer inspection, its actual impact appears to be overestimated. Indeed, there is no evidence connecting the use of GAI to changes in electoral outcomes, an increase in the production and dissemination of GAI content does not lead to an increase in the consumption of disinformation, and GAI’s ability to create personalized and tailored content is not very effective at influencing individuals’ political views.

We are all familiar with rampant episodes of GAI being used for deepfakes, spoofed audio clips and fake videos of politicians. These examples did happen, but what were their actual effects on electoral outcomes? The DSA’s Electoral Guidelines, especially those on GAI, are very detailed and adopt a proactive approach to risk assessment, yet they lack any empirical evidence about the specific risks and potential harms of GAI for electoral integrity. While this may not appear a striking deficiency, it becomes problematic when the premise of the Guidelines is that they are “addressed to providers of VLOPs and VLOSEs whose services bear risks of actual or foreseeable negative effects on electoral processes stemming from the design, functioning, and use of those services within the meaning of Article 34 of Regulation (EU) 2022/2065”. There are numerous examples of GAI being used for malicious purposes, but there is little evidence that it has actually distorted voting outcomes by tilting the public’s preference for one politician over another. It is one thing to say that creating and circulating fake videos of Macron is bad; it is another to say that these videos directly influenced public opinion of Macron and led to a change in voting behavior. At present it is challenging to demonstrate that GAI has affected a state’s electoral process to the point of undermining the whole democratic process and its institutions. While this fear is grounded to a certain extent, it does not justify the DSA’s overemphasis on GAI, whose risks cannot be characterized as actual or foreseeable negative effects on electoral processes.

Moreover, studies have found that while GAI may increase the amount of disinformation produced, this does not necessarily lead to greater consumption of disinformation by individuals, as there is a gap between the limited demand for disinformation and the huge supply of it. While GAI has increased the supply, human demand has remained the same, meaning that generative AI has very little room to operate in. Indeed, on the supply side, researchers have found that the cost of producing and accessing disinformation is already very low, and that a large quantity of existing disinformation content goes unnoticed. On the demand side, in the face of a surge in supply, demand remains unchanged: the average online user rarely consumes GAI disinformation content, whose consumption is instead heavily concentrated among a small group of active and vocal users. This matters because GAI disinformation, like any type of disinformation, only assumes causal power when humans engage with it. Similarly, while GAI improves the quality of disinformation content, there is little reason to believe that this will increase public reception of or demand for disinformation, as many non-AI technologies have long been producing realistic disinformation. In fact, during the recent conflicts in Ukraine and Gaza, fake images circulated widely online, yet they were not AI-created. Therefore, since GAI does not increase the demand for disinformation, it will not lead to an increase in the consumption or influence of disinformation, even if it adds to its supply and spread. Of course, the fact that GAI does not lead to increased consumption of disinformation content does not mean that we should not worry about it; rather, we should not demonize GAI, and should remember that ordinary disinformation, not GAI-created disinformation, should remain the main area of concern.

It is true that GAI’s novelty, compared to non-AI technologies, lies in its ability to personalize disinformation, facilitating manipulation and micro-targeting of individuals. Yet persuasion in politics is difficult, and shifting voters’ political views is even more difficult. Evidence suggests that these operations have limited persuasive effects when it comes specifically to political considerations, beliefs and decisions. Indeed, political decisions, such as whom to vote for, also have psychological, ethical and moral dimensions. These inherently human traits have proven challenging to program into AI systems, weakening GAI’s potential to influence decisions where such strong considerations are at play. Consider, for example, the role of familial influence in shaping voting behavior. The familial transmission of political ideology is rooted in emotional and psychological bonds that are shaped by a lifetime of experiences, making them resistant to external manipulation and unlikely to be swayed by algorithmically generated content. GAI technologies and large language models currently cannot directly collect or store users’ information and therefore cannot fully create personalized content that represents human values and preferences such as those just listed, making GAI content aimed at political disinformation less effective and persuasive. Therefore, GAI’s main strength, creating personalized content that can more accurately persuade individuals, falters when applied to the nuanced context of people’s political views.

Next steps

Rather than focusing on GAI, it would be more effective for the EU to promote a friction-in-design regulatory approach, requiring platforms to limit the GAI capabilities they offer to their users, specifically for the creation of political advertisements. This can be done, for example, by introducing a delay before a social media post is published, or by displaying a notice that provides important information along with a prompt to consider the consequences and implications of a post. If the problem is the dissemination of political content, why not limit the share and comment functions of political ads? Advertisements per se do not require sharing and commenting. This approach would make online platforms more similar to offline spaces, where political messages are presented to individuals who use their critical thinking to evaluate their truthfulness.
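To make the friction-in-design idea more concrete, here is a toy sketch of how a platform might delay publication of a political ad, prompt the user to reflect, and disable sharing and commenting. The delay length, the prompt wording and the confirmation callback are hypothetical design parameters, not proposals from the Guidelines.

```python
# Toy friction-in-design flow for political ads: enforced delay,
# reflection prompt, and no share/comment affordances. All values hypothetical.

import time

POLITICAL_AD_DELAY_SECONDS = 60  # hypothetical cooling-off period

def publish_political_ad(ad_text: str, confirm) -> dict | None:
    """Apply friction before a political ad goes live.

    `confirm` is a callable that shows the user a reflection prompt and
    returns True only if they still wish to publish after the delay.
    """
    prompt = ("This looks like political content. Consider its accuracy "
              "and its implications before publishing.")
    time.sleep(POLITICAL_AD_DELAY_SECONDS)  # friction: enforced delay
    if not confirm(prompt):
        return None  # the user reconsidered
    return {
        "text": ad_text,
        "shareable": False,         # political ads are shown, not shared
        "comments_enabled": False,  # no comment section on the ad
    }

if __name__ == "__main__":
    post = publish_political_ad("Vote for X!", confirm=lambda prompt: True)
    print(post)
```

The specific numbers matter less than the shape of the intervention: friction is inserted at publication time, before the content can enter recommender systems at all.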

While GAI undoubtedly has the capacity to create and propagate disinformation, its effects are a mere dent in our information ecosystem. This dent should be closely monitored to ensure that it does not grow into a wider crack. The Commission’s efforts to limit the spread of disinformation are necessary and laudable, but a more evidence-based approach to GAI and its potential risks is needed. Rather than overfocusing on the dangers of GAI to democratic processes such as elections, warnings about its possible dangers must be framed so as not to produce excessive distrust in the technology, nor to treat it as a new disinformation machine. I would finally like to emphasize that my intention is not to dismiss concerns regarding GAI. Instead, I aim to highlight that the Commission is currently dismissing the possibility that GAI may not harm our democracies or elections to the extent previously feared.

[1] They must have more than 45 million users per month in the EU.

[2] Article 40 of the Digital Services Act introduces provisions for “vetted researchers” to access data from VLOPs and VLOSEs in order to conduct research on systemic risks in the EU. This aims to enhance monitoring of platform actions and contributes to the detection, identification and understanding of systemic risks by third parties.