When the 17-minute live video, filmed by attacker Brenton Tarrant as he walked into a mosque in Christchurch and shot dozens of people, appeared on Facebook, 29 minutes passed from the start of the broadcast before any user reported it.
Addressing widespread concern over how the social media giant plans to keep extremist content from circulating online, Guy Rosen, Facebook's vice president of integrity, said that Facebook will work on enhancing AI algorithms that proactively detect malicious content, but warned that AI is "not perfect."
Although trained to detect content such as terrorist propaganda and graphic violence, Facebook's AI systems failed to catch the Christchurch mosque shooting footage for a number of reasons, which Rosen outlined.
"To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare. Another challenge is to automatically discern this content from visually similar, innocuous content — for example if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground," the VP said.
In the first 24 hours after the horrific attack, in which 50 people were murdered, Facebook said it had blocked more than 1.2 million videos of the carnage at upload and removed 300,000 additional copies after they were posted.
"In total, we found and blocked over 800 visually-distinct variants of the video that were circulating. This is different from official terrorist propaganda from organizations such as ISIS [Daesh] — which while distributed to a hard core set of followers, is not rebroadcast by mainstream media organizations and is not re-shared widely by individuals," Rosen said.
Interestingly, no users reported the video during the live broadcast, and Facebook received the first complaint only 12 minutes after the broadcast ended. Attempting to explain users' behaviour and what triggers a report, Rosen suggested that Facebook's reporting flow may not have offered users accurate and specific enough reasons to choose from.
"In this report, and a number of subsequent reports, the video was reported for reasons other than suicide and as such it was handled according to different procedures. As a learning from this, we are re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review," he said.
Some took his words to mean that in addition to existing categories — such as "nudity," "hate speech," "spam," "harassment," "violence," "unauthorized sales," "suicide or self-injury," "gross content" and "other" — Facebook will introduce new tags, like "murder" or "terrorism."
In latest statement on the Christchurch terror attack, Facebook says the Live video wasn't acted on straight away because it wasn't flagged under a new category "suicide"….— Mark Di Stefano 🤙🏻 (@MarkDiStef) March 21, 2019
Looks like they're mulling a new category. Like "murder" or "terrorism"? https://t.co/dMq1QgoCyu pic.twitter.com/KtPfd5cfiL
Overall, users shared the video for a variety of reasons, Rosen said.
"Some intended to promote the killer's actions, others were curious, and others actually intended to highlight and denounce the violence. Distribution was further propelled by broad reporting of the existence of a video, which may have prompted people to seek it out and to then share it further with their friends."
Following the attack on 15 March, the New Zealand Police urged those affected by the circulating footage of the attack to seek appropriate help.
Police is aware there are distressing materials related to this event circulating widely online. We would urge anyone who has been affected by seeing these materials to seek appropriate support.— New Zealand Police (@nzpolice) March 15, 2019
New Zealand Prime Minister Jacinda Ardern has been in contact with Facebook, demanding that the social media giant ensure the "horrendous" footage of the attack cannot be viewed.
"You can't have something so graphic and it not [have an impact]… and that's why it's so important it's removed," the PM said.
Among further steps towards tackling extremist content, Facebook promised to improve its "matching technology so that we can stop the spread of viral videos of this nature, regardless of how they were originally produced."
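Facebook has not published the details of its matching technology, but a common approach to finding re-encoded or slightly altered copies of a video is perceptual hashing of sampled frames: near-identical frames produce hashes that differ in only a few bits, even after compression changes individual pixel values. The sketch below is purely illustrative and assumes nothing about Facebook's actual systems; the `dhash` and `hamming` helpers are hypothetical names for a minimal difference-hash comparison.

```python
# Illustrative sketch only: Facebook has not described its matching
# technology. Difference hashing ("dhash") is one common perceptual-hash
# technique for spotting visually similar frames.

def dhash(pixels):
    """Difference hash of a grayscale frame given as a 2-D list of
    pixel intensities. Each bit records whether a pixel is brighter
    than its right-hand neighbour, so the hash captures the frame's
    gradient structure rather than exact pixel values."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of bit positions where two hashes differ."""
    return sum(x != y for x, y in zip(a, b))

# A toy 3x3 frame, a re-encoded copy (one pixel nudged), and an
# unrelated frame:
frame = [[10, 20, 30], [30, 20, 10], [10, 30, 20]]
copy  = [[10, 20, 31], [30, 20, 10], [10, 30, 20]]
other = [[90,  5, 60], [ 5, 90,  5], [60,  5, 90]]

assert hamming(dhash(frame), dhash(copy)) <= 1   # near-duplicate survives re-encoding
assert hamming(dhash(frame), dhash(other)) > 1   # visually different frame is far apart
```

In practice such hashes are computed on downscaled frames sampled throughout a video, and a copy is flagged when enough frames fall within a small Hamming-distance threshold; adversarially edited variants (crops, overlays, mirroring) are what make the "800 visually-distinct variants" Rosen described hard to catch.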