The Bot Invasion
January 29, 2018
Just this week, U.S. lawmakers called continued Russian attempts to manipulate political discourse on Twitter and Facebook an “ongoing attack by the Russian government.” Many of those attempts are carried out by automated accounts known as “bots,” which some believe may have helped tilt the 2016 presidential election. Frequently cited USC research has shown that bots make up as much as 15 percent of the accounts on Twitter. Below, USC experts discuss the ways these legions of automated online armies are negatively affecting society, in ways both subtle and overt.
They are dominating the conversation
“This is only one of the scenarios where social bots have been used for malicious purposes – we have looked at the spreading of conspiracy theories, anti-science campaigns, climate change denials, anti-vaccination campaigns, and also how social bots have been used to manipulate the stock market and to create mass hysteria during disasters and emergencies.” (Adapted from comments to TechCrunch)
Emilio Ferrara’s frequently cited research has shown that about 15 percent of Twitter accounts are bots, and that bots accounted for nearly 20 percent of all political tweets generated in the weeks leading up to the 2016 election. His research has also shown that bots can effectively spread positive messages. He can discuss the technical mechanisms through which “fake news” is distributed to consumers on social platforms like Twitter and Facebook. He is a research assistant professor at the Information Sciences Institute at the USC Viterbi School of Engineering.
They are hurting our health
Jon-Patrick Allem can discuss how fake social media accounts on Twitter and other platforms sometimes promote an unhealthy message not based on scientific findings. His research gleans insight from posts on Twitter, Instagram, YouTube and Facebook, as well as from terms used in a Google search. Allem is a research scientist in the Department of Preventive Medicine at the Keck School of Medicine of USC.
They are making us hate
“Digital media platforms like Google and Facebook may disavow responsibility for the results of their algorithms, but they can have tremendous — and disturbing — social effects. Racist and sexist bias, misinformation, and profiling are frequently unnoticed byproducts of those algorithms. And unlike public institutions (like the library), Google and Facebook have no transparent curation process by which the public can judge the credibility or legitimacy of the information they propagate.
“It’s impossible to know the specifics of what influences the design of proprietary algorithms, other than that human beings are designing them, that profit models are driving them, and that they are not up for public discussion. It’s time we hold these platforms accountable and perhaps even imagine alternatives — such as regulation of search engines — that uphold the public interest.” (Excerpted from The Chronicle of Higher Education)
Safiya Umoja Noble can discuss how large media and technology companies, and the bots that exploit them, feed hateful online behavior, such as the 2016 incident in which Microsoft’s Twitter bot began posting racist messages. Her forthcoming book on bias in automation, “Algorithms of Oppression,” is being published by NYU Press. She is an assistant professor of communication at the USC Annenberg School for Communication and Journalism.
They are stealing our children’s toys
Pai-Ling Yin can discuss how bots were used during the 2017 holiday shopping season to scoop up popular toys and resell them at exorbitant markups, as well as how this kind of technology will continue to affect online sales of event tickets, apparel and other popular items. She is an associate professor of clinical entrepreneurship and director of the Technology Commercialization Initiative at the USC Marshall School of Business.