
Millions of political tweets from fake accounts, USC study finds

November 08, 2016

This message ‘bot’ to you: A deep scientific analysis of tweets this #Election reveals that fake Twitter accounts — “bots” — generated a fifth of all political tweets and are swaying the discourse, according to scientists at the USC Information Sciences Institute.

Contact: Ian Chaffee at (213) 740-8606 or ichaffee@usc.edu; or Amy Blumenthal at (213) 821-1887 or amyblume@usc.edu

They are used to help businesses better interact with customers. They have accidentally released FBI documents to the public. And a new study shows that they have some pretty strong political opinions.

They are bots — those fake accounts that flood Twitter with automatically generated content, and they made up nearly one-fifth of the political discourse on the microblogging service this campaign season, according to the newly published findings by computer scientists at the USC Viterbi School of Engineering’s Information Sciences Institute.

“Twitter has become the main information source for a significant fraction of Americans. It’s the platform many people use to get their daily news, spanning many topics, but importantly, including politics, policy, and social issues,” said Emilio Ferrara, a research assistant professor at the USC Information Sciences Institute. “We need to guarantee that this platform is reliable and that it does not compromise the democratic political process by fostering the spread of rumors or misinformation.”

‘Bot or not’

For the study, published Nov. 7 in the computer science journal First Monday, Ferrara and USC visiting research assistant Alessandro Bessi analyzed 20 million election-related tweets, collected by querying the Twitter Search API over three continuous periods between Sept. 16 and Oct. 21, 2016. Tweets were classified as political if they matched any of 23 political keywords or hashtags, including #NeverHillary and #NeverTrump.
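As a rough illustration of the keyword filter described above (a sketch, not the authors' code), the snippet below checks a tweet's text against a small set of terms. Aside from #NeverHillary and #NeverTrump, the terms shown are placeholders, since the full list of 23 keywords is not reproduced in this release.

```python
# Minimal sketch of keyword-based filtering for election-related tweets.
# Only #NeverHillary and #NeverTrump come from the study; the other terms
# are illustrative placeholders, not the actual 23 keywords used.
POLITICAL_TERMS = {
    "#neverhillary",
    "#nevertrump",
    "#election2016",  # placeholder
    "clinton",        # placeholder
    "trump",          # placeholder
}

def is_political(tweet_text: str) -> bool:
    """Return True if the tweet matches any tracked keyword or hashtag."""
    text = tweet_text.lower()
    return any(term in text for term in POLITICAL_TERMS)

print(is_political("Undecided voters weigh in #NeverTrump"))  # True
print(is_political("Check out this cat video"))               # False
```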

Ferrara and Bessi then ran these tweets through the “Bot or Not” algorithm that Ferrara first developed with colleagues at Indiana University Bloomington in 2014. They found that Twitter bot accounts produced 3.8 million tweets, or 19 percent of all election tweets in the study period. Social bots also accounted for 400,000 of the 2.8 million individual users tweeting about the election, or nearly 15 percent of the population under study.
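The headline shares follow directly from the reported counts; a quick check in Python, using only the figures quoted above:

```python
# Reproducing the article's percentages from the reported counts.
bot_tweets, total_tweets = 3_800_000, 20_000_000
bot_accounts, total_accounts = 400_000, 2_800_000

print(f"Bot share of tweets:   {bot_tweets / total_tweets:.1%}")      # 19.0%
print(f"Bot share of accounts: {bot_accounts / total_accounts:.1%}")  # 14.3%
```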

“The presence of these bots can affect the dynamics of the political discussion in three tangible ways,” Ferrara wrote. “First, influence can be redistributed across suspicious accounts that may be operated with malicious purposes. Second, the political conversation can become further polarized. Third, the spreading of misinformation and unverified information can be enhanced.”

Human-like

Twitter bots have become so sophisticated that they can tweet, retweet, share content, comment on posts, “like” candidates, grow their social influence by following legitimate human accounts, and even engage in human-like conversations.

Because of social bots’ sophistication, it is often impossible to determine who creates them, although political parties; local, national and foreign governments; and “even single individuals with adequate resources could obtain the operational capabilities and technical tools to deploy armies of social bots and affect the directions of online political conversation,” the researchers wrote.

As part of their study, Ferrara and Bessi also examined expressions of positivity and negativity in the political discourse, scoring both bot and human tweets during this time on a scale from minus 4 (maximum negativity) to plus 4 (maximum positivity). Both human and bot tweets about Trump were almost uniformly positive, while tweets about Clinton were split roughly evenly between neutral and positive.
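For illustration only, the sketch below averages sentiment scores on the study's minus 4 to plus 4 scale for two hypothetical groups of tweets; the scores are invented examples, not data from the paper.

```python
# Illustrative aggregation of tweet sentiment on the study's scale of
# -4 (most negative) to +4 (most positive). The scores are invented
# examples, not data from the paper.
from statistics import mean

def clamp(score: float) -> float:
    """Keep a score inside the -4..+4 range used in the study."""
    return max(-4, min(4, score))

scores_bots_about_trump   = [2, 3, 1, 2, 4]   # hypothetical
scores_humans_about_trump = [1, 2, 3, 0, 2]   # hypothetical

print("Bots about Trump:  ", mean(clamp(s) for s in scores_bots_about_trump))
print("Humans about Trump:", mean(clamp(s) for s in scores_humans_about_trump))
```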

Trump had a significantly higher number of bot supporters, with Georgia leading all states in the total number of bot-generated political tweets. New York, Mississippi and Florida also showed above-average political bot activity.

Ferrara believes Twitter can be a bellwether for how disinformation and misrepresentation might be generated automatically across other online and social platforms.

“Other social networks like Facebook are finding it challenging to validate information sources as well; our research community will need to devote more efforts in these directions in the future,” he said.

The paper is the latest from Ferrara, who studies social network behavior and recently co-authored a review article for the Communications of the ACM about the use of bots to infiltrate political discourse. The project was self-funded.

[Image credit: Flickr Creative Commons by Bernard Goldbach]

#    #    #