
Non-human users threaten to hijack Canada's Twitter discussions

#18 of 84 articles from the Special Report: Democracy and Integrity Reporting Project
Photo of John Gray by Stephanie Wood


Nearly a third of accounts tweeting about Canadian politics exhibit signs of inauthentic activity — and that figure may just be the tip of the iceberg, says John Gray, co-founder of the social-media analytics firm Mentionmapp.

Gray is one of the few social-media researchers in Canada tracking how bots and other inauthentic Twitter activity are influencing the country's online political discourse. He started studying these patterns around the 2017 provincial election in British Columbia and has conducted ongoing analyses ever since.

Looking at popular political hashtags like #abpoli, #ableg, #bcpoli and #cdnpoli, about 25 to 30 per cent of accounts engaging in Canada’s political conversation over the past two years have shown signs of suspicious activity, Gray said.

Gray uses a benchmark of 72 tweets per day, or one tweet every 10 minutes for 12 hours every day, as an indicator of suspicious activity. Although there is no definitive cut-off point for determining inauthentic or non-human activity, the 72-tweet threshold is a commonly used marker.

Some of these accounts may be humans tweeting at an extremely high rate, but many are likely bots or humans amplified by bots and other inauthentic or automated activity, Gray said. The most active accounts Gray has tracked tweet an average of 400 or more times a day, seven days a week.
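The threshold heuristic described above is simple enough to sketch in a few lines. This is a hypothetical illustration, not Mentionmapp's actual tooling; the function names and the 14-day observation window are invented for the example, while the 72-tweet threshold and the 400-tweets-per-day figure come from the article.

```python
from datetime import datetime

# Benchmark from the article: one tweet every 10 minutes,
# 12 hours a day = 72 tweets per day.
TWEETS_PER_DAY_THRESHOLD = 72

def tweets_per_day(tweet_count: int, first_seen: datetime, last_seen: datetime) -> float:
    """Average daily tweet rate over the observed window."""
    days = max((last_seen - first_seen).days, 1)
    return tweet_count / days

def is_suspicious(tweet_count: int, first_seen: datetime, last_seen: datetime,
                  threshold: float = TWEETS_PER_DAY_THRESHOLD) -> bool:
    """Flag an account whose average rate meets or exceeds the threshold."""
    return tweets_per_day(tweet_count, first_seen, last_seen) >= threshold

# Hypothetical account: 5,600 tweets over a 14-day window, i.e. 400/day,
# matching the most active accounts Gray describes.
start = datetime(2019, 3, 1)
end = datetime(2019, 3, 15)
print(is_suspicious(5600, start, end))  # True: 400 tweets/day
```

As the article notes, a rate-based flag like this is only an indicator, not proof of automation: some flagged accounts will be unusually prolific humans.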


Gray refers to this activity as “synthetic,” meaning that it’s “something fake, designed to look real.”

He has tracked this type of activity in online conversations surrounding issues such as vaccines and the Kinder Morgan pipeline debate, and has also documented how “fake news” sites use bots to spread their content.

Given the volume of synthetic activity present in Canada’s political conversations on Twitter, there’s reason to be concerned that this activity may be manipulating public discourse, Gray said. He also noted that this type of activity could influence Google’s search algorithm, which pulls from social-media platforms like Twitter.

“If you’re gaming Twitter, you’re also gaming Google,” he said.

Furthermore, Gray is worried that the activity he’s detecting may be “the canary in the coal mine” when it comes to social-media manipulation.

For one thing, Twitter accounts using subtler tactics but in larger numbers could have the same effect in amplifying certain narratives and skewing online conversations — but because they're not as overtly inauthentic, they may be slipping under the radar. One hundred accounts each tweeting the same hashtag once would have as much influence as, or more than, one account tweeting that hashtag 100 times, yet only the latter would be flagged as suspicious.
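The evasion arithmetic above can be made concrete with a short, hypothetical illustration: both scenarios produce the same hashtag volume, but only the single high-rate account crosses a per-account threshold. The function and account counts here are invented for the example.

```python
# Per-account daily threshold from the article's benchmark.
THRESHOLD = 72

def flagged_accounts(daily_counts, threshold=THRESHOLD):
    """Count accounts whose daily tweet total exceeds the threshold."""
    return sum(1 for count in daily_counts if count > threshold)

loud = [100]        # one account tweeting a hashtag 100 times in a day
quiet = [1] * 100   # 100 accounts tweeting it once each

# Same total volume (100 mentions), very different detection outcomes:
print(sum(loud), flagged_accounts(loud))    # 100 mentions, 1 account flagged
print(sum(quiet), flagged_accounts(quiet))  # 100 mentions, 0 accounts flagged
```

This is why per-account rate thresholds alone miss coordinated, distributed amplification.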

Another concern is that bots and other inauthentic activity are almost certainly influencing the way information is shared on other social-media sites, Gray said, but most platforms don’t publicly release the data needed to detect it.

With federal elections approaching, Gray says he expects to see foreign and domestic actors using social media to push divisive and extreme narratives surrounding issues like immigration, guns and climate change. He's urging Canadians to look at what's happening globally and recognize the gravity of the situation so they don’t become easy prey for online manipulation.

“We’re being attacked,” Gray warned. “Our brains are being hacked.”
