In a paper published last year in the journal Nature Communications, researchers at the City College of New York investigated the spread of fake news on Twitter in the months leading up to the 2016 U.S. elections. Drawing from a dataset of 171 million tweets sent by 11 million unique accounts, the team attempted to quantify the importance of tweets and top “news” spreaders, concluding that bots tweeted links to websites containing fake news at rates higher than any other group.

This insight wasn’t new — the role bots play in disseminating false and misleading information was already well-established. Research by Indiana University scientists found that over a 10-month period between 2016 and 2017, bots targeted influential users through replies and mentions in order to surface untrue stories before they went viral. During the 2017 Catalan referendum for independence in Spain, bots generated and promoted violent content aimed at users calling for independence.

But as the U.S. 2020 elections approach, experts are concerned that bots will evade even fairly sophisticated filters to amplify misleading information, disrupt get-out-the-vote efforts, and sow confusion in the election’s aftermath. While people in academia and private industry continue to pursue new techniques to identify and disable bots, the extent to which bots can be stopped remains unclear.

Campaigns

Bots are now used around the world to plant seeds of unrest, either through the spread of misinformation or the elevation of controversial points of view. An Oxford Internet Institute report published in 2019 found evidence of bots spreading propaganda in 50 countries, including Cuba, Egypt, India, Iran, Italy, South Korea, and Vietnam. Between June 5 and 12 — ahead of the U.K.’s referendum on whether to leave the EU (Brexit) — researchers estimate half a million tweets on the topic came from bots.

U.S. social media continues to be plagued by bots, most recently around the coronavirus pandemic and the Black Lives Matter movement. A team at Carnegie Mellon found that bots may account for up to 60% of accounts discussing COVID-19 on Twitter and tend to push false medical advice, conspiracy theories about the virus, and calls to end lockdowns. In early July, Bot Sentinel, which tracks bot activity on social networks, observed new accounts spreading disinformation about the Black Lives Matter movement, including false assertions that billionaire George Soros is funding the protests and that the killing of George Floyd was a hoax.

But the activity perhaps most relevant to the upcoming elections occurred last November, when “cyborg” bots spread misinformation during local Kentucky elections. (Cyborg accounts attempt to evade Twitter’s spam detection tools by transmitting some tweets from a human operator.) VineSight, a company that tracks social media misinformation, uncovered small networks of bots retweeting and liking messages that cast doubt on the gubernatorial results before and after polls closed.
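
One reason cyborg accounts are hard to flag is that fully automated accounts tend to betray themselves through mechanical posting rhythms, a signal that mixing in human-sent tweets dilutes. The sketch below is a minimal illustration of that idea using one common timing heuristic; the timestamps and threshold interpretation are invented for illustration, and this is not VineSight's or Twitter's actual method.

```python
from statistics import mean, stdev

def timing_regularity(timestamps):
    """Coefficient of variation of the gaps between consecutive posts.
    Fully automated accounts often post at near-constant intervals (value near 0),
    while human posting tends to be burstier (higher value). Illustrative only."""
    gaps = [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return None
    return stdev(gaps) / mean(gaps)

# Hypothetical accounts: one posts like clockwork, the other in human-like bursts
machine_like = [0, 60, 120, 181, 240, 300]
human_like = [0, 45, 900, 930, 5400, 5460]
print(timing_regularity(machine_like))  # close to 0 -> suspiciously regular
print(timing_regularity(human_like))    # much larger -> burstier, more human-like
```

A cyborg account that interleaves human-written tweets with automated ones lands somewhere in between, which is exactly what makes it harder to catch.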

A separate Indiana University study sheds light on the way social bots like those identified by VineSight actually work. The bots pounce on fake news and conspiracy theories in the seconds after they’re published and retweet them broadly, encouraging human users to do the subsequent retweeting. Then the bots mention influential users in an effort to spur them to reshare (and in the process legitimize and amplify) tweets. The coauthors spotlight a single bot that mentioned @realDonaldTrump (President Trump’s Twitter handle) 19 times, linking to the false claim that millions of votes were cast by illegal immigrants in the 2016 presidential election.
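
That pattern, a burst of near-instant reshares followed by mentions of influential accounts, is something an analyst could look for directly in reshare data. The following sketch is a hypothetical heuristic along those lines, with invented field names and an invented 10-second window; it is not the Indiana University study's code.

```python
def early_amplification_score(reshares, window_seconds=10):
    """Fraction of reshares that arrive within seconds of the original post
    and that mention a high-profile account: the retweet-then-mention pattern
    described above. Field names and the time window are illustrative."""
    if not reshares:
        return 0.0
    flagged = [r for r in reshares
               if r["delay_seconds"] <= window_seconds and r["mentions_influencer"]]
    return len(flagged) / len(reshares)

# Hypothetical reshare records for a single link
reshares = [
    {"delay_seconds": 3, "mentions_influencer": True},
    {"delay_seconds": 6, "mentions_influencer": True},
    {"delay_seconds": 4200, "mentions_influencer": False},
]
print(round(early_amplification_score(reshares), 2))  # 0.67 -> heavy early, targeted amplification
```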

Why are humans so susceptible to bot-driven content? The Indiana University study speculates that novelty plays a role. Novel content, which attracts attention because it’s often surprising and emotional, can confer social status on the sharer, who is seen as someone “in the know.” False news also tends to inspire more surprise and disgust than truthful news does, motivating people to engage in reckless sharing behaviors.

Recognizing this psychological component, Twitter recently conducted an experiment that encouraged users to read the full content of articles before retweeting them. Beginning in May, when a subset of users tried to retweet a tweet containing a link they hadn't opened, the social network prompted them to read the article first. After several months, Twitter concluded that the experiment was a success: Users opened articles before sharing them 40% more often than they did without the prompt.
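
In product terms, the experiment amounted to a simple check on the retweet action. The snippet below is a toy illustration of that logic, assuming the client knows whether the current user has opened the tweet's link; it is not Twitter's implementation.

```python
def retweet_prompt(tweet_has_link: bool, user_opened_link: bool):
    """Return a read-first prompt to surface before completing the retweet,
    or None if no prompt is needed. The user can still retweet after seeing it.
    Simplified illustration of the behavior Twitter described, not its code."""
    if tweet_has_link and not user_opened_link:
        return "Want to read the article before retweeting?"
    return None

print(retweet_prompt(tweet_has_link=True, user_opened_link=False))  # shows the prompt
print(retweet_prompt(tweet_has_link=True, user_opened_link=True))   # None, retweet proceeds
```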

Not all bots are created equal

In anticipation of campaigns targeting the 2020 U.S. elections, Twitter and Facebook claim to have made strides in detecting and removing bots that promote false and harmful content. Twitter head of site integrity Yoel Roth says the company's "proactive work" has led to "significant gains" in tackling manipulation across the network, with the number of suspected bot accounts declining 9% this summer versus the previous reporting period. In March, Facebook revealed that one of its AI tools has helped it identify and disable over 6.6 billion fake accounts since 2018.

But while some bots are easy to spot and take down, others are not. In a keynote at this year's Black Hat security conference, Stanford Internet Observatory research manager Renee DiResta noted that China-orchestrated bot campaigns tend to be less effective than Russian efforts, partly because tactics honed on China's domestic platforms translate poorly to Western platforms that are banned inside the country. DiResta pointed out that Chinese bots often have blocks of related usernames, stock profile photos, and primitive biographies.
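
Those surface features lend themselves to crude, explainable checks. The sketch below counts the three signals DiResta listed for a single profile; the regular expression and thresholds are invented for illustration and would miss plenty of real bots.

```python
import re

def profile_signal_count(username: str, bio: str, has_default_photo: bool) -> int:
    """Count the telltale signs described above: block-like usernames
    (a handle ending in a long run of digits), an empty or primitive bio,
    and a stock or default profile photo. Thresholds are illustrative."""
    signals = 0
    if re.search(r"\d{5,}$", username):   # e.g. a handle like 'newsfan84721093'
        signals += 1
    if len(bio.strip()) < 10:             # little or no biography
        signals += 1
    if has_default_photo:
        signals += 1
    return signals  # 0 = few bot-like signals, 3 = many

print(profile_signal_count("newsfan84721093", "", True))                        # 3
print(profile_signal_count("jane_doe", "Coffee, cats, city politics.", False))  # 0
```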

And while Russia's high-profile social media efforts have smaller audiences on the whole (e.g., RT's 6.4 million Facebook followers versus China Daily's 99 million), they see engagement that is "an order of magnitude higher" because of their reliance on memes and other "snackable" content. "Russia is, at the moment, kind of best in class at information operations," DiResta said. "They spend a fraction of the budget that China does."

As early as March, Twitter and Facebook revealed evidence suggesting Russian bots were becoming more sophisticated and harder to detect. Facebook said it took down a network of cyborg accounts — posting about topics ranging from Black history to gossip and fashion — that were operated by people in Ghana and Nigeria on behalf of agents in Russia. And some of the accounts on Facebook attempted to impersonate legitimate nongovernmental organizations (NGOs). Meanwhile, Twitter said it removed bots emphasizing fake news about race and civil rights.

The twin reports followed a University of Wisconsin-Madison study that found Russia-linked social media accounts were posting about the same flashpoint topics — race relations, gun laws, and immigration — as they did in 2016. "For normal users, it is too subtle to discern the differences," study author Young Mie Kim told the Wisconsin State Journal in an interview earlier this year. "By mimicking domestic actors, with similar logos (and) similar names, they are trying to avoid verification."

Mixed findings

Despite the acknowledged proliferation of bots ahead of the 2020 U.S. elections, their reach is a subject of debate.

First is the challenge of defining "bots." Some define them strictly as automated accounts (like news aggregators), while others also count scheduling software like Hootsuite and cyborg accounts that blend automation with human input. These differences of opinion manifest in bot-analyzing tools like SparkToro's Fake Followers Audit tool, Botcheck.me, Bot Sentinel, and NortonLifeLock's BotSight, each of which relies on different detection criteria.
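
The practical consequence is that the same account can look benign to one tool and automated to another, because each weighs different features. The toy scorers below use hypothetical feature names, weights, and cutoffs to show how two reasonable criteria can disagree; they are not the formulas any of the tools above actually use.

```python
# Hypothetical public features for a single account
account = {"posts_per_day": 150, "account_age_days": 25, "followers": 40, "following": 3100}

def score_volume_only(f):
    """Treats only very high posting volume as bot-like."""
    return min(1.0, f["posts_per_day"] / 300)

def score_blended(f):
    """Blends posting volume with account age and follow ratio."""
    volume = min(1.0, f["posts_per_day"] / 300)
    youth = 1.0 if f["account_age_days"] < 30 else 0.0
    ratio = min(1.0, f["following"] / (100 * max(1, f["followers"])))
    return round((volume + youth + ratio) / 3, 2)

print(score_volume_only(account))  # 0.5  -> below a 0.7 "likely bot" cutoff
print(score_blended(account))      # 0.76 -> above that same cutoff
```

Neither score is "right"; the disagreement itself is the point, and it mirrors why the tools listed above can return different verdicts for the same account.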

In a statement provided to Wired about a bot-identifying service developed by University of Iowa researchers, Twitter disputed the notion that third-party services without access to its internal datasets can accurately detect bot activity. “Research based solely on publicly available information about accounts and tweets on Twitter often cannot paint an accurate or complete picture of the steps we take to enforce our developer policies,” a spokesperson said.

Emilio Ferrara, lead author of a University of Southern California study of bot activity on Twitter, generally agrees with the University of Iowa researchers' findings. "As social media companies put more efforts to mitigate abuse and stifle automated accounts, bots evolve to mimic human strategies. Advancements in AI enable bots producing more human-like content," he said. "We need to devote more efforts to understand how bots evolve and how more sophisticated ones can be detected. With the upcoming 2020 U.S. elections, the integrity of social media discourse is of paramount importance to allow a democratic process free of external influences."

To be clear, there's no silver bullet. As soon as Twitter and Facebook purge bots from their rolls, new ones crop up to take their place. And even if fewer make it through automated filters than before, false news tends to spread quickly without much help from the bots sharing it. Vigilance — and more experiments in the vein of Twitter's sharing prompt — could be partial antidotes. But with the clock ticking down to Election Day, it's safe to say that any such efforts will fall well short.
