Twitter ‘bots’ spreading misinformation – Expert Reaction

Twitter bots play an enormous role in spreading misinformation, according to a study conducted during the 2016 US presidential election campaign.

Researchers looked at 14 million messages and 400,000 articles shared on Twitter between May 2016 and March 2017 and found that automated bots spread a third of the articles with the lowest credibility scores.

They also found that bots were key to promoting low-credibility stories just before they went viral. Slashing the numbers of software-controlled social bots could limit the spread of misinformation online, they say.

The SMC asked experts to comment on the study, which is available on Scimex. Please feel free to use these comments in your reporting.

Associate Professor Lech Janczewski, Department of Information Systems and Operations Management, University of Auckland Business School, comments:

“The use of social bots in spreading false information is based on a mechanism identical to that used in Distributed Denial of Service (DDoS) cyber-attacks.

“A DDoS attack is based on the creation of botnets – networks of internet-connected devices infected with attack software controlled by an attacker. Almost anyone can buy or find tools online to create bots that act across thousands of terminals, with the power to spread false information to millions. A device can be infected if it lacks the capability to detect malware in an incoming message.

“Bots exploit the fact that there is no built-in way to verify who is sending information over the internet. The TCP/IP telecommunications protocol – the international set of rules for data transmission – cannot verify a sender’s identity unless additional security software is installed on the receiving device.

“As a first step to protect yourself from bots, switch your device to secure mode (shown in your browser when ‘http’ becomes ‘https’). That is only possible if the server you are connecting to supports HTTPS. However, a skilled attacker may override that as well.

“Spam filters recognise a significant number of bot messages, but not all of them. Spotting a message coming from a bot can sometimes be tricky.

“Basically, it is up to the user to accept a message only after answering ‘yes’ to these two questions:

1. Do I have proof that the message came from the source it claims?
2. Is the sender known as a source of reputable information?”

Conflict of interest statement: I hold roles in several security organisations, including Chairman of the NZ Information Security Forum (NZISF) and Secretary of Technical Committee 11 on Security and Privacy Protection in Information Processing Systems of the International Federation for Information Processing (IFIP TC-11).

Associate Professor Arvind Tripathi, Department of Information Systems and Operations Management, University of Auckland Business School, comments:

“This study finds that social bots (automated fake accounts on social media platforms) play a disproportionate role in spreading misinformation or fake news.

“We already knew that bots play a role in spreading fake news. However, this study quantifies that disproportionate role and identifies the mechanism bots employ to spread false information.

“The study analysed 14 million messages to confirm its findings.

Disproportionate roles: 
“The authors found that just 6 per cent of social media accounts – those tagged as bots – were responsible for 31 per cent of all tweets linking to low-credibility articles and 34 per cent of the low-credibility articles themselves.

Spreading mechanism:
“The spreading pattern of fake news is less conversational: it is mostly spread via original tweets or retweets by bots, not via replies or comments.

“As fake news spreads, more tweets are concentrated in the hands of a few accounts, which is the opposite of the organic way factual information spreads. When information spreads organically, the contribution of any individual account or group of accounts matters less.

“Social bots post content and interact with each other and with legitimate users just like real people, so it is easy to be duped into believing fake news on social media platforms.

“Social bots are deployed to promote fake content from low-credibility sources that mimic news media outlets.

“If you are looking at a news story or article link published by a source you have never heard of, you should be concerned. Many fact-checking sites list these dodgy sources, but new ones can be created as quickly as old ones are identified.

“The safe approach is to rely only on well-known news media channels such as CNN, the New York Times, the BBC, or trusted local media sources. For example, if a story is promoted only by an obscure source, one should be concerned. One can easily click on that account to see whether it is only promoting these articles or is engaged in other conversations as well.”

No conflict of interest.