Global tech giants have pledged to clamp down on terrorist and extremist content on their platforms, again.
On Wednesday afternoon in Paris, Facebook, Microsoft, Twitter, Google and Amazon signed up to the Christchurch Call to Action, a proposal spearheaded by New Zealand Prime Minister Jacinda Ardern after the livestreamed shooting that killed 51 worshippers praying in two mosques in March.
The video of the atrocity proliferated online; more than 1.5 million copies subsequently had to be removed from Facebook.
In a joint statement issued on Wednesday, the technology companies said: “The terrorist attacks in Christchurch, New Zealand in March were a horrifying tragedy. And so it is right that we come together, resolute in our commitment to ensure we are doing all we can to fight the hatred and extremism that lead to terrorist violence."
The companies' representatives were joined in Paris by world leaders attending the summit, which sought to address online hate and its violent manifestations.
Both Paul Joseph Watson and Infowars, the controversial site he edits which is a major hub for conspiracy theories, disinformation, and Islamophobic content, were banned from Facebook earlier this month, but Watson was circumventing the ban by repackaging his content under a different brand name and posting it on the platform.
On Tuesday, Facebook tightened its rules on its live video service, banning users for a period of time after the first violation of company policy anywhere on the platform.
When asked why these users wouldn’t be banned from Facebook altogether, a spokesperson for the company told National Observer that accounts that break community standards were already liable to restrictions of varying severity.
The companies have also committed to continuing to invest in technology, for example digital fingerprinting and artificial intelligence-based technology that can detect violent and extremist online content. They have also committed to publishing regular reports on their efforts to detect and remove abusive and violent content.
Facebook said in May last year that it struggled to stamp out hate speech because the computer algorithms it uses to track it down still require human assistance to judge context.
The technology heavyweights also said in their joint statement that they will work together to develop technology and build crisis protocols for urgent events, work together to educate the public on how to report online hate, and collectively support research that looks at the impact of online hate.
Where were Mark and Sheryl?
Critics noted that Facebook's most senior operatives, founder and CEO Mark Zuckerberg and chief operating officer Sheryl Sandberg, were absent from talks on Wednesday. The company’s chief lobbyist, Nick Clegg, attended.
When questioned by National Observer about why Zuckerberg and Sandberg were absent, a spokesperson for Facebook said Clegg was the best person to lead on the issue, given his experience working in government and with industry bodies on countering violent extremism.
Nick Clegg was deputy prime minister in the United Kingdom's 2010 to 2015 Conservative-Liberal Democrat coalition government.
“We share the commitment of world leaders to curb the spread of terrorism and extremism online," he said in a statement. "In my time in government, I witnessed first-hand the tragic impact of terrorism and violent extremism on our communities. These are complex issues and we are committed to working with world leaders, governments, industry and safety experts at next week’s meeting and beyond on a clear framework of rules to help keep people safe from harm.”