Article · August 30th, 2017
Technology

The good, the bad and the ugly uses of machine learning in election campaigns

Article highlights

The next level of digital transformation involves AI and election campaigns

There is evidence to suggest that AI has been systematically misused to manipulate citizens

There are many examples of how AI can enhance election campaigns in ethical ways


There has never been a better time to be a politician - unless, of course, you happen to be a machine learning engineer working for a politician.

Throughout modern history, political candidates have had only limited tools for gauging the sentiment and opinions of the electorate. More often than not, they relied on instinct rather than insight when running for office.

The advent of big data and its application in political campaigns changed that. Most prominently, the 2008 US presidential election was the first to rely on large-scale analysis of social media data, which was used to improve fundraising and coordinate volunteers. Now, the next level of this digital transformation involves the integration of artificial intelligence (AI) systems into election campaigns, alongside nearly all other aspects of political life.

Already today, machine learning systems can predict which US congressional bills are likely to pass. Algorithmic assessments are being implemented in the British criminal justice system. And most strikingly, machine intelligence solutions are now being carefully deployed in election campaigns to engage voters and help them be more informed about key political issues.

But as we approach an election climate in which everything from voter intelligence to voter targeting and conversational engagement can be automated, we need to ask ourselves: are we putting our democracy at risk by placing too much trust in AI systems? How far should we go in integrating machines into the human side of democracy?

These ethical questions are especially pertinent, given the recent press coverage investigating the dark side of campaign technologies in the Brexit referendum and the 2016 US presidential election. In particular, there is evidence to suggest that AI-powered technologies have been systematically misused to manipulate citizens. And some people claim that they were a decisive factor in the referendum and election results. This is a disquieting trend.

Attack of the bots

First, the use of AI to manipulate public opinion: massive swarms of political bots were used to spread propaganda and fake news on social media. Bots are autonomous accounts that, in the political arena, are programmed to spread one-sided messages and create the illusion of public support.

Typically disguised as ordinary human accounts, bots have been responsible for spreading misinformation and contributing to an acrimonious political climate on sites like Twitter and Facebook. They are very effective at attacking voters from the opposing camp and even discouraging them from going to the voting booth.

For example, bots regularly infiltrated the online spaces used by pro-Clinton campaigners to spread automated content, generating a quarter of Twitter traffic about the election. With this massive storm of messages, they were able to drown out dissent on social media and thereby support the Trump campaign.

Bots were also largely responsible for pushing #MacronLeaks onto social media just days before the French presidential election. They swarmed Facebook and Twitter with leaked information mixed with falsified reports, building a narrative that Emmanuel Macron was a fraud and a hypocrite - a common tactic when bots are used to push trending topics and dominate social feeds.
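None of this means automation is undetectable. As a rough illustration of how researchers try to spot such accounts, here is a minimal, hypothetical Python sketch using heuristic signals such as posting frequency and account age. The `Account` record, field names and thresholds are all assumptions for the example, not any real platform API or detection system:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical account record: these fields are invented for the
# illustration and do not correspond to any real platform API.
@dataclass
class Account:
    handle: str
    created_at: datetime
    posts_last_day: int
    followers: int
    following: int

def looks_automated(account: Account, now: datetime) -> bool:
    """Flag accounts that match simple bot-like heuristics.

    Thresholds are illustrative. Researchers studying computational
    propaganda have often treated very high posting frequency (for
    example, 50+ political posts per day) as one signal of automation.
    """
    age_days = (now - account.created_at).days
    high_volume = account.posts_last_day >= 50          # unusually prolific
    brand_new = age_days < 30                           # recently created account
    mass_following = account.following > 10 * max(account.followers, 1)
    return high_volume and (brand_new or mass_following)

# Example: a day-old account posting 300 times in 24 hours is flagged.
suspect = Account("hypothetical_user", datetime(2017, 5, 1), 300, 12, 4000)
print(looks_automated(suspect, datetime(2017, 5, 2)))  # True
```

Real detection systems combine far richer features - network structure, content similarity, timing patterns - but the heuristic idea is the same: automated amplification leaves statistical fingerprints.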

The dark side of political AI

Second, the use of AI to manipulate individual voters: during the US presidential election, an extensive advertising campaign was rolled out that targeted persuadable voters based on their individual psychology. This highly sophisticated micro-targeting operation relied on big data and machine learning to influence people's emotions.

The problem with this approach is not the technology itself, but rather the covert nature of the campaign and the blatant insincerity of its political message. Different voters received different messages based on predictions about their susceptibility to different arguments.

A presidential candidate with flexible campaign promises was, of course, particularly well suited to this tactic. Every voter could receive a tailored message that emphasised a different side of the argument. There was a different Trump for different voters. The key was just finding the right emotional triggers for each person to drive them to action.
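Mechanically, this kind of tailoring can be as simple as mapping a predicted psychological profile to a pre-written ad variant. The following Python sketch is purely illustrative: the trait labels, message copy and `pick_message` function are invented for the example (reports on the 2016 campaign describe voters being scored against personality models such as the Big Five):

```python
# Hypothetical ad variants keyed by a voter's predicted dominant trait.
# Trait labels and copy are invented for illustration only.
MESSAGE_VARIANTS = {
    "fearful": "Crime is rising in your neighbourhood. Only one candidate will act.",
    "aspirational": "Imagine an economy where your family finally gets ahead.",
    "traditional": "Protect the values your community was built on.",
}

def pick_message(predicted_trait: str) -> str:
    """Return the ad variant a model predicts the voter is most
    susceptible to - the essence of psychographic micro-targeting."""
    return MESSAGE_VARIANTS.get(predicted_trait, "Remember to vote on election day.")

# Two voters, two different candidates: each sees a different message.
print(pick_message("fearful"))
print(pick_message("aspirational"))
```

The manipulation lies not in the lookup itself but in the scale, the secrecy and the willingness to tell each audience whatever it is predicted to want to hear.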

The damage to democracy

Informational warfare is obviously not a new phenomenon. For instance, the pamphlet wars that followed the invention of the printing press are among the earliest examples of large-scale propaganda campaigns. But the nature and scale of computational propaganda is simply unprecedented. In fact, the nefarious application of AI in elections raises much larger questions about the stability of the political system we live in.

A representative democracy depends on free and fair elections in which citizens can vote with their conscience, free of intimidation or manipulation. Yet we are now in real danger of undermining fair elections if this technology continues to be used to manipulate voters and promote extremist narratives.

Towards human-centred AI

It is easy to blame AI technology for the world's wrongs (or for lost elections), but here's the rub: the underlying technology is not inherently harmful. The same algorithmic tools used to mislead, misinform and confuse can be repurposed to support democracy and increase civic engagement. After all, human-centred AI in politics needs to work for the people, with solutions that serve the electorate.

There are many examples of how AI can enhance election campaigns in ethical ways. For example, we can program political bots to step in when people share articles that contain known misinformation. We can deploy micro-targeting campaigns that help to educate voters on a variety of political issues and enable them to make up their own minds. And most importantly, we can use AI to listen more carefully to what people have to say and make sure their voices are being clearly heard by their elected representatives.
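To make the first of these ideas concrete, here is a minimal, hypothetical Python sketch of a bot that replies when a shared link matches a curated list of known-misinformation domains. The domain list, reply text and URLs are all invented for the example; a real deployment would draw on fact-checking organisations' published databases:

```python
from urllib.parse import urlparse

# Hypothetical blocklist; these domains are invented for illustration.
KNOWN_MISINFORMATION_DOMAINS = {
    "example-fake-news.com",
    "totally-real-stories.net",
}

def check_shared_link(url: str):
    """Return a corrective reply if the URL points to a flagged source,
    otherwise None. Purely illustrative logic."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    if domain in KNOWN_MISINFORMATION_DOMAINS:
        return ("This article comes from a source flagged for publishing "
                "misinformation. See independent fact-checks at "
                "https://factcheck.example.org")
    return None

# Example: the bot steps in when a user shares a flagged article.
reply = check_shared_link("https://www.example-fake-news.com/shock-story")
if reply:
    print(reply)
```

The point of the sketch is the design choice: the same intervention machinery that spreads falsehoods can just as easily surface corrections at the moment they matter most.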

An alternative scenario for restricting computational propaganda is through greater regulation. Stricter rules on data protection and algorithmic accountability could also reduce the extent to which machine learning can be abused in political contexts.

But regulation always moves slower than technology. When regulators finally start discussing the legal frameworks for AI in politics, let's hope we have some democratically elected leaders left.

 

The Centre for Public Impact is investigating the way in which artificial intelligence (AI) can improve outcomes for citizens. 

Are you working in government and interested in how AI applies to your practice? Or are you an AI practitioner who thinks your tools could have an application in government? If so, please get in touch.


Written by:

Dr Vyacheslav Polonski, Network Scientist at the University of Oxford, Founder and CEO of Avantgarde Analytics