Super-Efficient? Facebook's AI Technology to Scrap Hate Speech Doesn't Work, Report Says

© REUTERS / Regis Duvignau. FILE PHOTO: The Facebook logo is displayed on their website in an illustration photo taken in Bordeaux, France, on February 1, 2017.
Facebook has repeatedly claimed that most of the hate speech and violent content on the platform is removed by the company's "super-efficient" AI before users even see it.
Facebook's artificial intelligence (AI) technology to identify and remove posts containing hate speech and violence actually does not work, according to internal company documents seen by The Wall Street Journal (WSJ).

According to the newspaper's report, the documents include a mid-2019 note in which a senior Facebook engineer acknowledged that "we [the company] do not and possibly never will have a model that captures even a majority of integrity harms, particularly in sensitive areas".

The engineer estimated that Facebook's automated systems removed posts that generated only 2% of the views of hate speech violating the platform's rules.

"Recent estimates suggest that unless there is a major change in strategy, it will be very difficult to improve this beyond 10-20% in the short-medium term", he wrote.

The claims echoed those of another team of Facebook employees, who had previously estimated that the AI systems were removing posts that generated just 3% to 5% of the views of hate speech on the platform, and 0.6% of all content that violated Facebook's policies against violence and incitement.
In 2020, Facebook CEO Mark Zuckerberg expressed confidence that the platform's AI would be able to take down "the vast majority of problematic content". He spoke as the social networking giant claimed that most hate speech was removed from the platform before users even saw it.
According to Facebook's recent report, the hate speech detection rate currently stands at 97%.

Another Facebook Whistleblower Ready to Testify in Congress

The WSJ report comes after former Facebook data scientist Sophie Zhang told CNN last week that she is ready to testify against her former employer before Congress.
Zhang was fired from Facebook in August 2020 after she posted a 7,800-word memo detailing how the company had allegedly failed to do enough to tackle hate and misinformation, especially in developing countries. "I have blood on my hands", she wrote in the memo, insisting that she was officially dismissed over "poor performance".
Her CNN interview followed congressional testimony by another Facebook whistleblower, Frances Haugen, who argued that the company knew it had inflicted harm on the mental health of teenagers, but didn't do much to stop content promoting "hate and division", as well as content that created a toxic environment for teenage girls.
The social network claimed Haugen's accusations "don't make sense", with Zuckerberg stressing the company cares "deeply" about users' safety-related issues.