AI-Driven Election Threats: What’s Ahead in 2024
Amplified by the wave of generative AI tools launched over the past year, media disinformation looms over the 2024 United States presidential election. A recent poll found that more than half of Americans expect AI-generated false information to influence the election's outcome.
Even as large tech companies invest internally in election integrity initiatives, the threat is compounded by newer AI companies that lack the capacity to manage election-related risks.
What have been the consequences so far?
Ryan Heath of Axios reported that AI deepfakes are already posing problems in elections. In Slovakia's election on September 30, a fabricated recording of the eventually defeated candidate appearing to buy votes circulated.
Additionally, Heath writes, "Audio deepfakes became a flashpoint at the U.K. Labour Party's annual conference, when fake audio of Keir Starmer — the poll favorite to become Britain's next prime minister — was circulated purporting to show him bullying staff and criticizing the conference's host city."
What other consequences could follow?
In an article from the Electronic Privacy Information Center, authors Calli Schroeder and Ben Winters outline scenarios that companies and policymakers should address:
- "AI systems and the content they generate can be combined with targeted lists of people and their contact information from data brokers. This would enable bad actors to target financially or otherwise vulnerable groups – like the poor, elderly, minority groups, and more – with content specifically tailored to manipulate them based on fears, stereotypes, or other individualized characteristics…"
- "Bad actors can create and tailor the content of the messages using generative AI to more effectively manipulate different groups and to effectively evade spam filters that would otherwise identify widely repeated messages and prevent some spam."
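To make the evasion mechanism concrete, here is a minimal, hypothetical sketch of the kind of repetition-based spam filter the authors describe. The normalization step, threshold, and function names are illustrative assumptions, not any real platform's implementation.

```python
from collections import Counter
import re

# Illustrative assumption: a naive filter that flags a message once its
# exact normalized text has been sent too many times.
REPEAT_THRESHOLD = 100

seen_counts: Counter = Counter()

def normalize(message: str) -> str:
    """Lowercase and collapse whitespace so trivial edits don't hide repeats."""
    return re.sub(r"\s+", " ", message.lower()).strip()

def is_spam(message: str) -> bool:
    """Flag messages whose normalized text repeats across many sends."""
    key = normalize(message)
    seen_counts[key] += 1
    return seen_counts[key] > REPEAT_THRESHOLD

# The weakness the EPIC authors point to: a generative model can emit
# thousands of distinct paraphrases ("Act now to protect your benefits!",
# "Your benefits are at risk, act today!"), each normalizing to a different
# key, so no single variant ever crosses the repetition threshold.
```

Catching such paraphrased variants would require fuzzier matching, for example embedding-based similarity rather than exact repetition, which is considerably harder to run at scale.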
What’s being done?
- Maria Ressa, a Nobel Peace Prize laureate, along with Camille François, a renowned researcher who exposed Russia’s 2016 election disinformation campaign, launched an innovation lab at Columbia University and Sciences Po in Paris. The lab is a component of a digital literacy initiative supported by a $3 million grant from the French government.
- A coalition of ten civil society organizations developed a “framework [that] takes a three-pronged approach in its appeal to Big Tech platforms, including recommendations for bolstering resilience, countering election manipulation, and leaving ‘paper trails’ that promote transparency.”
- Platforms such as Nooz.ai have introduced features that perform linguistic analysis on news articles and official documents, helping users identify attempts at information manipulation; a rough illustration of what such analysis might involve appears below.
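As a rough illustration of what linguistic analysis for manipulation cues might involve, here is a minimal sketch that scores text for emotionally loaded words against a tiny hand-built lexicon. The word list and scoring rule are invented for this example and do not reflect Nooz.ai's actual methods, which are not detailed here.

```python
import re

# Illustrative assumption: a tiny lexicon of emotionally loaded terms.
# Real systems would use far larger lexicons or trained models.
LOADED_TERMS = {
    "shocking", "outrageous", "disaster", "corrupt", "rigged",
    "traitor", "hoax", "destroy", "catastrophe", "radical",
}

def loaded_language_score(text: str) -> float:
    """Return the fraction of words drawn from the loaded-terms lexicon."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for word in words if word in LOADED_TERMS)
    return hits / len(words)

headline = "Officials call the rigged result a shocking, outrageous disaster."
print(f"loaded-language score: {loaded_language_score(headline):.2f}")
```

A high score alone does not prove manipulation; tools like this are best read as prompts for closer scrutiny rather than verdicts.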