According to a study by researchers at Indiana University, social bots use several manipulation strategies to trick real users into spreading fake news: amplifying stories in their early stages, before they go viral; targeting influential users directly through replies and mentions; and disguising their geographic locations.
To reach this conclusion, the researchers calculated a “bot score” for each account, reflecting the extent to which a given Twitter account exhibited bot characteristics. They then compared nearly 1,000 accounts that most actively shared false news with a random sample of accounts that had posted one link to a false claim.
The findings show that the most active “super spreaders” had much higher bot scores than the random sample – suggesting that they are likely “social bots that automatically post links to articles, retweet other accounts, or perform more sophisticated autonomous tasks, like following and replying to other users.”
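The comparison described above can be sketched in a few lines. This is a hypothetical illustration only: the scores, group sizes, and account labels below are made up for the example and are not data from the study; the actual bot-score methodology is far more sophisticated.

```python
# Illustrative sketch of comparing bot scores between two groups of accounts.
# Scores run from 0 to 1, where higher values indicate more bot-like behavior.
# All numbers here are invented for demonstration purposes.

def mean_bot_score(scores):
    """Average bot score for a group of accounts."""
    return sum(scores) / len(scores)

# Hypothetical scores: the most active "super spreaders" vs. a random sample.
super_spreaders = [0.81, 0.74, 0.88, 0.79, 0.92]
random_sample = [0.12, 0.25, 0.08, 0.31, 0.18]

gap = mean_bot_score(super_spreaders) - mean_bot_score(random_sample)
print(f"Super-spreader mean exceeds random-sample mean by {gap:.2f}")
# → Super-spreader mean exceeds random-sample mean by 0.64
```

A gap like this between the two groups is what led the researchers to conclude that the most prolific sharers of false news were likely automated accounts.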
Given that writing and sharing fake news can be such a lucrative business, the implications of this study reach far beyond politics. In an age of virality, fake news can be, and has been, spread about specific individuals or companies, often for a quick payday.
Consider the example of Karri Twist, a London-based Indian restaurant that was almost forced to close its doors after a rumor spread that it was selling human meat. Or the example of “pizzagate,” in which a Washington pizzeria was falsely accused of holding young children as sex slaves. As these examples suggest, businesses are not immune to the spread of fake news, and a company that does not protect itself can suffer irreparable damage.
This idea was echoed in a recent Adweek post, in which Jenny Wolfram, CEO of BrandBastion, discussed the “Big Social Media Villains and How to Fight Them” and reminded us of the danger posed by these social bots and fake accounts.
She writes, “Bots operating with an agenda on social media pose a big threat to brands. And when false accounts and bots are employed by spammers and scammers, they act as a powerful tool for spreading harmful content.”
While there is no consensus yet on how to fight these ugly creatures, Wolfram offers one example of fighting fire with fire: developing Twitter bots that hunt and distract their malicious counterparts. The study alternatively suggests partnering with social media platforms to curb social bots at the source, or employing a CAPTCHA system to verify that accounts are operated by humans.
Ultimately, this research and the examples above should serve as a call to action for individuals and companies alike. Brand image is something that needs to be carefully curated.
Therefore, rather than falling victim to fake news and the PR crisis that follows, take a moment to carefully review your online activity. After all, no one likes fake news.