News & Analysis

Elections 2024: How AI Can Fool Voters

This story focuses on the US elections slated for 2024, but its lessons could hold true for India's Lok Sabha elections scheduled for next summer too

Ethicists have long accused search algorithms of propagating what a user wants to see or read instead of presenting the other side to complete the picture. Today, this threat has multiplied with the advent of generative AI, which could serve as a tool of choice for political parties desirous of polarizing voters. 

An article published by ZDNet quotes Prof. Robert Crossler, an academic whose work on fake news consumption was funded by the National Science Foundation and the US Department of Defense, to suggest that GenAI-led content runs the risk of voters believing something untrue, and feeling strongly that it is true, because realistic fabricated material supports the viewpoint they were led to adopt. 

In the article, senior contributing editor David Gewirtz, who probed the impact of GenAI on elections in an earlier piece, interviewed Prof. Crossler, who noted that GenAI could target communications at specific people based on easily gathered information that they share in public. The same approach is already being used in social-engineering attacks to hack systems, he said. 

Can GenAI affect elections in India too?

Of course, one may argue that it would be tougher to influence voters in India, given the linguistic variations prevalent in the country. How can GenAI tools help a political party use the same messaging in a constituency in southern India and expect it to work the same way in a northern state? 

However, social media and search engines overcame this challenge long ago, having mastered how to show users more of what they want across languages. The easiest route for political parties in India ahead of the 2024 general elections could be to use GenAI to build narratives and translation tools to adapt them to each linguistic requirement. 

What’s the theory behind GenAI and elections?

Prof. Crossler, who works as an associate professor of information systems at Washington State University, authored an article published in Government Information Quarterly that measured the effects of political alignment, platforms, and fake news consumption on voter concern about election processes.

Crossler’s work has been funded by the National Science Foundation and the Department of Defense. He served as president of the AIS Special Interest Group on Information Security and Privacy from 2019 to 2020. He was also awarded the 2013 Information Systems Society’s Design Science Award for his work on information privacy.

He notes that such tools could be used to customize political messaging based on what GenAI can easily infer about voters' interests and motivations. Doing this at scale could take localized, targeted communication to a much more granular understanding of how individual users tend to behave. 

There’s an ethical issue, but will politicians listen?

On the ethical front, the professor says technology must not be used to distort the truth or make up alternative truths. How this can be achieved in an ecosystem where policy-making itself is left panting to keep pace with technological development is the moot question. Bringing lawmakers together to understand the challenges will be key. 

A specific outcome of his research concerned the role of social media in spreading fake news; he urged users not to get drawn into a particular way of thinking based on what they see on social media. He highlighted how the algorithms work: "these are written to show you more of what you engage with and less of what you don't."
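The feedback loop Crossler describes can be sketched in a few lines of code. This is a deliberately simplified toy model, not any platform's actual ranking logic: it merely shows how sorting a feed by past engagement keeps surfacing more of the same.

```python
# Toy model of engagement-driven ranking: topics a user has engaged
# with most rise to the top of the feed. Purely illustrative;
# real platforms use far more complex signals.

def rank_feed(items, engagement_counts):
    """Sort items so topics the user engaged with most come first."""
    return sorted(items, key=lambda item: -engagement_counts.get(item["topic"], 0))

feed = [
    {"id": 1, "topic": "sports"},
    {"id": 2, "topic": "politics"},
    {"id": 3, "topic": "politics"},
]
# Hypothetical history: the user clicked political posts 5 times, sports once.
history = {"politics": 5, "sports": 1}

ranked = rank_feed(feed, history)
print([item["id"] for item in ranked])  # political posts surface first
```

Run repeatedly with each click added back into the history, and the skew only deepens, which is exactly the "more of what you engage with" effect the professor warns about.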

Crossler also cited a Microsoft report suggesting that foreign actors could use AI on social media to influence voters in any election. The report suggested that China and Russia were using GenAI deepfakes to control the Ukraine narrative, but the professor feels the tools are available for anyone to use against anyone. 

There’s a solution at hand, but voters need to work for it

The professor suggests a technique to counter manipulative GenAI: triangulation of information, which involves consuming it from multiple sources with different biases so that readers arrive at a closer understanding of reality. In this process, it is critical not to form an opinion immediately upon learning of an issue, but only after triangulating the data. 
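The triangulation step can be illustrated with a minimal sketch, assuming each report carries a claim plus a bias label, and only claims corroborated across differently biased sources are accepted. The sources, claims, and labels below are hypothetical examples, not data from the article.

```python
# Minimal sketch of triangulating claims across sources with
# different biases. All claims and bias labels are hypothetical.

def triangulate(reports):
    """Accept a claim only when sources of at least two distinct
    biases corroborate it."""
    biases_per_claim = {}
    for claim, bias in reports:
        biases_per_claim.setdefault(claim, set()).add(bias)
    return {claim for claim, biases in biases_per_claim.items() if len(biases) >= 2}

reports = [
    ("candidate made statement X", "left-leaning"),
    ("candidate made statement X", "right-leaning"),
    ("candidate was seen at rally Y", "left-leaning"),  # only one bias reports this
]
print(triangulate(reports))  # only the cross-bias claim survives
```

The design choice mirrors the professor's advice: a claim repeated many times within one echo chamber still counts as a single bias, so volume alone never substitutes for corroboration.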

Crossler further notes that people should be wary of the information they make public and should seek out interactions with people holding divergent views. The media, too, needs to step up and be skeptical about everything, where "the importance of getting the story right should be more important than getting the story first."  

Furthermore, academic institutions should encourage critical thinking so that students can evaluate news during elections and become a more informed electorate, given that discerning truth will only get harder as technology advances. However, the solution is not to move away from technology, which has allowed users without a voice to air their opinions and grievances. 

“The biggest challenge that needs to be addressed, and maybe it is addressed by those who own the generative AI technology, is to somehow inform the world when something is created with that technology. Without knowing what is created with this technology, it is going to be increasingly difficult for humans to be able to discern fact from fiction,” Crossler says. 

He concludes by pointing to GenAI's potential to improve efficiency in an election through better communication, as candidates can prepare better for interactions with voters, or even for higher-quality debates on television.