Facebook says it wants to prevent "bad actors" from hacking Canada's democracy. But after announcing some new steps on Thursday to tackle the problem, both the social media giant and the federal government admit much more needs to be done to safeguard the 2019 election.
“The digital age has provided malicious actors with more ways than ever before to pursue their objectives in a rapid and constantly evolving manner,” said Minister of Democratic Institutions Karina Gould, at a business luncheon in Ottawa hosted by the Economic Club of Canada to unveil the Facebook announcement.
“In that respect, I think it is important for social media platforms to think critically about their current practices and how they can create spaces for informed dialogue and the information we consume.”
Kevin Chan, head of public policy at Facebook Canada, said that Thursday's announcement is only the latest in a series of measures, with more to come.
“Just in the past six months, Facebook has taken a lot of other important steps," he said at the event, held in a downtown Ottawa hotel. "For one, we’ve made it a lot more difficult for bad actors to spread inauthentic information with the use of machine learning and other means of enforcing our community standards and ad policy.”
Facebook describes "machine learning" as the technology it uses to determine what people see in their newsfeeds when they log in to the site.
It says that its researchers and engineers develop "machine learning algorithms" that rank feeds, ads and search results, while creating new algorithms that are supposed to keep spam and misleading content at bay.
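Facebook has not published the details of those algorithms, but the general approach it describes is standard text classification. As a rough, purely illustrative sketch (none of this is Facebook's actual code; the training phrases and labels are made up), a naive Bayes classifier can learn to separate spam-like wording from ordinary posts:

```python
# Toy sketch, not Facebook's system: a naive Bayes text classifier of the
# kind that underlies many spam / misleading-content filters.
# All training data, labels and names here are illustrative assumptions.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    def __init__(self):
        self.word_counts = {"spam": Counter(), "ok": Counter()}
        self.label_counts = Counter()

    def train(self, text, label):
        # Count how often each word appears under each label.
        self.label_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def score(self, text, label):
        # log P(label) + sum of log P(word | label), with add-one smoothing
        # so unseen words don't zero out the probability.
        total = sum(self.word_counts[label].values())
        vocab = len(set(self.word_counts["spam"]) | set(self.word_counts["ok"]))
        s = math.log(self.label_counts[label] / sum(self.label_counts.values()))
        for w in tokenize(text):
            s += math.log((self.word_counts[label][w] + 1) / (total + vocab))
        return s

    def classify(self, text):
        return max(("spam", "ok"), key=lambda lab: self.score(text, lab))

clf = NaiveBayes()
clf.train("win free money click now", "spam")
clf.train("free prize click here now", "spam")
clf.train("meeting agenda for tomorrow", "ok")
clf.train("lunch plans this week", "ok")

print(clf.classify("free money click"))  # -> spam
```

Production systems rank billions of items with far richer signals (account history, sharing patterns, network structure), but the principle is the same: learn statistical patterns from labelled examples and score new content against them.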
Facebook fix comes as firm under fire
Facebook introduced its new plan at a time when American internet giants are under fire over revelations that their algorithms failed to stop malicious online interference that pushed fake news reports to spread rapidly on their platforms in the last U.S. presidential election.
The latest Facebook Canada measures include a new email hotline for politicians to use if they are hacked, as well as new efforts to educate and warn users about false advertising. The company also released what it called a "Cyber Hygiene Guide" as part of its plan. The guide provides tips for politicians and political parties about how to prevent their pages from being compromised in a network that has over two billion users, according to its founder Mark Zuckerberg.
Facebook admitted last month that it sold about 3,000 ads that were connected to hundreds of misleading accounts or pages between June 2015 and May 2017. These added up to about $100,000 in ad revenues, said Facebook's chief security officer Alex Stamos.
"Our analysis suggests these accounts and Pages were affiliated with one another and likely operated out of Russia," Stamos wrote in a Sept. 6 statement.
Facebook has shared its findings with congressional investigators and says that it continues to cooperate with the U.S. government on the ongoing investigation.
The firm's new Canadian plan is part of a response to a report by Canada's electronic spy agency, the Communications Security Establishment (CSE), released in June. At that time, the CSE warned that online campaigns to unduly influence Canadian electoral politics were on the rise, and more needed to be done to counter this threat before the next federal election in 2019.
'Work in progress' says Facebook's public policy head
Chan said Facebook would also deploy new artificial intelligence research to make it more difficult for "bad actors" to use its network to spread misinformation.
“All of this is a work in progress,” said Chan.
He also said Facebook would take additional steps to enforce its community standards, but stressed that the company wouldn't actively try to censor people.
“We never want to be arbiters of the truth,” Chan said.
Gould added that it was important to promote a discussion that ensures people are aware of the problem and its potential impact on democracy. But she also stopped short of calling for officials to start policing fake news.
“I think that...the closer we get to determining what’s good and bad information, the less democratic we become,” she said.
The initiative also includes efforts to increase news literacy in Canada through a new two-year partnership between Facebook and MediaSmarts, an Ottawa-based charity that promotes critical thinking and encourages people to be engaged consumers of digital information.
The new joint project, called Reality Check, aims to teach Canadians how to spot misinformation and false news online, and promotes the idea that verifying information is an essential life and citizenship skill. It will include lesson plans, interactive online missions, videos and guides.
“I think we’ve seen very clearly in the last couple years how essential those classic critical thinking skills have become, not just to all of us individually, but to all of us as a democracy,” said Matthew Johnson, director of education at MediaSmarts. “In many ways, we can’t rely on other people to be gatekeepers anymore.”
Gould also noted that social media platforms have an important responsibility, since people increasingly turn to them for information.
"Just like governments and private corporations have a public responsibility to contribute to a healthy democracy, social media platforms must begin to view themselves as actors in shaping the democratic discourse and protecting our democracy from those who would seek to harm it,” Gould said.
“A democracy is only as strong as the citizens that make it up...a well disguised fake news or disinformation campaign can erode the public’s faith in the reliability of traditional media sources. It can distort the public’s understanding of major issues.”
Facebook plans to hire 4,000 people to review content
In August, Facebook announced those who repeatedly share false news would be prohibited from advertising on the platform. The company said it would identify offenders after credible third party fact-checking organizations flagged content as false.
Earlier this month, Facebook hired 1,000 people to manually review ads targeted to users based on criteria such as political views or race. Ads will also become more transparent: users will be able to clearly identify an ad’s sponsor and see what other ads that advertiser is running, regardless of target audience.
But in recent days, Facebook and Google have faced more criticism following a Bloomberg report that they had helped an anti-refugee group target ads in swing states, prior to the 2016 presidential election that sent Donald Trump to the White House.
Meanwhile, Chan said the company plans to hire 4,000 people globally to manually review content on the platform, on top of its investments in artificial intelligence. Some of that AI research would be done at Facebook’s new research lab in Montreal, Chan said.
Anatoliy Gruzd, Canada Research Chair for Social Media Data Stewardship and Associate Professor at Ryerson University, who was on hand for the announcement, also said he expected more measures to follow.
“I was delighted to hear that the minister said this is just the starting point in this conversation," he said, adding that he looks forward to continuing the discussion.
—With files from Celeste Côté