Facebook Admits 'Unacceptable Error' As AI-Generated 'Keep Seeing Primates' Prompt Targets Black Men

© REUTERS / Johanna Geron. The Facebook logo is displayed on a mobile phone in this picture illustration taken 2 December 2019.
Tech giants' algorithms have previously been blamed for embarrassing errors. In 2015, Google's AI reportedly tagged two Black people's faces with the word "gorilla". The company apologised and promptly censored the words "gorilla", "chimp", "chimpanzee" and "monkey" from Google Photos, effectively "blinding" the algorithm.
A contrite Facebook rushed to issue an apology on Friday after its AI software generated "keep seeing" prompts that labelled videos showing Black men with the term "primates".
A Facebook spokesperson told The New York Times, which first reported the story, that the label was a "clearly unacceptable error" by its automated recommendation system.
Darci Groves, a former content design manager at Facebook, said a friend sent her a screenshot of the video in question with the company’s auto-generated prompt. The video, dated 27 June 2020, was posted by UK tabloid the Daily Mail.
It contained clips of two separate incidents, both of which appeared to take place in the US. One showed a group of Black men arguing with a white individual on a road in Connecticut, while the other showed several Black men arguing with white police officers in Indiana before being detained.
Facebook users watching the video had received an automated prompt asking if they would like to "keep seeing videos about Primates".
Groves shared the screenshot on Twitter and posted it to a product feedback forum for current and former Facebook employees, slamming the prompt as "horrifying and egregious".
In response, a Facebook product manager said the company was "looking into the root cause". Facebook later said the recommendation software involved had been disabled.
"We disabled the entire topic recommendation feature as soon as we realised this was happening so we could investigate the cause and prevent this from happening again," a spokesperson was cited by The New York Times as saying.
The incident comes as technology companies, including Twitter and Google, face growing scrutiny over perceived biases in their artificial intelligence software.
Last year, Twitter investigated whether its automatic image cropper was racially biased against Black people when selecting which part of a picture to preview in tweets.
In 2015, when Google's algorithm made a similar error, the company said it was "genuinely sorry that this happened" in a statement to The Wall Street Journal.
On this occasion, Twitter users were split in their response to Facebook's AI-generated prompts. Some marvelled that the platforms were still failing to address the issue.
Some saw nothing wrong with the labelling that triggered such outrage.
Others slammed "whites coding and training the AI to be as racist".