How Automated Bots Can Infiltrate Twitter Accounts

by CXOtoday News Desk    Jun 02, 2014


Those who believe they have been following only real people on Twitter should think again. Some of those accounts may not be real: automated accounts known as bots can infiltrate the social networking site's defenses, feeding users links and messages and swaying their opinions, according to a recent MIT Technology Review report.

It says automated bots can not only evade detection but also accumulate large numbers of followers and become influential within various social groups. If you have a Twitter account, the chances are that you have fewer than 50 followers and follow fewer than 50 people yourself. You probably know many of these people well, but there may also be a few on your list whom you have never met.

While Twitter monitors the Twittersphere and suspends any automated accounts it finds, it is quite possible that you are unknowingly following automated accounts, malicious or not, say Carlos Freitas at the Federal University of Minas Gerais in Brazil and his associates, who have studied how easy it is for socialbots to infiltrate Twitter.

They say that a significant proportion of the socialbots they created not only infiltrated social groups on Twitter but also became influential within them. What’s more, Freitas and company have identified the characteristics that make socialbots most likely to succeed, says the report.

The team started by creating 120 socialbots and letting them loose on Twitter. Each bot was given a profile, made male or female, and given a few followers to start off with, some of which were other bots. The bots generated tweets either by reposting messages that others had posted or by creating their own synthetic tweets, using a set of rules to pick out common words on a given topic and string them together into a sentence.
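The researchers do not publish their exact generation rules, but the two strategies described above can be illustrated with a small sketch. The Python snippet below is a hypothetical reconstruction, not the study's code: the word lists, probabilities and function names are assumptions made purely for illustration.

```python
import random
from collections import Counter

def top_words(corpus, n=30):
    """Rank the most common words in a topic's recent tweets
    (a toy stand-in for the rule set described in the study)."""
    words = [w.lower().strip(".,!?") for tweet in corpus for w in tweet.split()]
    return [w for w, _ in Counter(words).most_common(n) if len(w) > 3]

def synthetic_tweet(corpus, length=8):
    """Assemble common topic words into a rough sentence."""
    vocab = top_words(corpus)
    return " ".join(random.sample(vocab, min(length, len(vocab)))).capitalize() + "."

def bot_post(corpus, repost_prob=0.5):
    """Either repost someone else's message or generate a synthetic one,
    mirroring the two strategies described above."""
    if random.random() < repost_prob:
        return "RT: " + random.choice(corpus)
    return synthetic_tweet(corpus)

# Example with a toy corpus of tweets on one topic
corpus = [
    "New study shows social bots can evade detection on Twitter",
    "Researchers measure how bots gain followers and influence",
    "Automated accounts are spreading links about the election",
]
print(bot_post(corpus))
```

Real socialbots would draw their vocabulary from live tweets on a chosen topic rather than a fixed list, but the basic repost-or-synthesize decision is the same.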

Freitas and his team found that over the 30 days during which the experiment was carried out, 38 of the 120 socialbots were suspended. In other words, 69 per cent of the socialbots escaped detection.

During the experiment, the 120 socialbots received a total of 4,999 followers from 1,952 different users. More than 20 per cent of them picked up over 100 followers, which is more followers than 46 per cent of human users on Twitter have. The researchers also monitored each socialbot’s Klout score, a rating from an online service that measures the influence of Twitter accounts, to see how the bots fared.
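Summary figures like these are simple to reproduce once per-bot follower counts have been collected. The short sketch below uses made-up numbers to show how such statistics might be computed; it is illustrative only and is not the researchers' analysis code.

```python
# Hypothetical follower counts gathered for each socialbot (made-up data).
follower_counts = [3, 12, 110, 45, 230, 7, 150, 60, 18, 95]

total_followers = sum(follower_counts)
share_over_100 = sum(1 for c in follower_counts if c > 100) / len(follower_counts)

print(f"Total followers gained: {total_followers}")
print(f"Bots with more than 100 followers: {share_over_100:.0%}")
```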

“We find that the bots achieved Klout scores of the same order of (or, at times, even higher than) several well-known academicians and social network researchers,” the researchers added.

The report notes that this finding may have significant implications for certain types of services on Twitter. In recent years, a number of services have arisen to measure interest and opinion among Twitter users on a wide variety of topics, such as voting intention, product sentiment, disease outbreaks and natural disasters.

The worry is that automated bots could be designed to significantly influence opinion in one or more of these areas. For example, it would be relatively straightforward to create a bot that spreads false rumors about a political candidate or a senior business executive in a way that could sway an election or public opinion.

The experiment is also a wake-up call for Twitter. “If it wants to successfully prevent these kinds of attacks, it will need to significantly improve its defense mechanisms. And since this work reveals what makes bots successful, Twitter’s research team has an advantage,” says the report.

It is also essential to spot socialbots and exclude them without mistakenly excluding human users in the process, which is a huge task. Moreover, with an estimated 20 million fake Twitter accounts already set up, Twitter’s researchers have a lot of work ahead of them to protect the integrity of its data.
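Neither the report nor the study spells out how such filtering might work, but the idea can be sketched crudely: flag accounts whose behaviour looks automated (suspiciously regular posting intervals, an extreme following-to-followers ratio) while leaving ordinary accounts alone. The toy heuristic below is an assumption-laden illustration; its features and thresholds are invented for this sketch and are far simpler than anything a real detection system would use.

```python
from dataclasses import dataclass
from statistics import pstdev

@dataclass
class Account:
    followers: int
    following: int
    tweet_gaps_minutes: list  # time between consecutive tweets

def looks_automated(acct: Account) -> bool:
    """Toy heuristic: flag accounts that post at near-identical intervals
    or follow far more accounts than follow them back.
    Thresholds are illustrative assumptions, not Twitter's rules."""
    regular_posting = (
        len(acct.tweet_gaps_minutes) >= 5
        and pstdev(acct.tweet_gaps_minutes) < 1.0  # almost no variation in timing
    )
    lopsided_graph = acct.following > 10 * max(acct.followers, 1)
    return regular_posting or lopsided_graph

# A human-like account and a bot-like account (made-up numbers)
human = Account(followers=40, following=55, tweet_gaps_minutes=[50, 200, 15, 600, 90])
bot = Account(followers=2, following=800, tweet_gaps_minutes=[30, 30, 30, 30, 30])
print(looks_automated(human), looks_automated(bot))  # False True
```

The hard part, as the article notes, is keeping the false-positive rate low: rules strict enough to catch well-crafted socialbots will inevitably snare some genuine users too.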