'Unbelievable': YouTube Stream on US Anti-Hate Hearing Packed with Racist Speech

YouTube's decision to shut down the live chat on Tuesday's livestream of the House Judiciary Committee's hearing on white nationalism highlights the difficulty of moderating hate speech online, technologist Chris Garaffa told Sputnik.

"It was unbelievable to be watching this livestream," Garaffa told Radio Sputnik's By Any Means Necessary on Wednesday. "There was just racist, anti-Semitic, straight up neo-Nazi-style comments on this video of a hearing that's supposedly talking about hate speech."

The hearing, "Hate Crimes and the Rise of White Nationalism," was livestreamed on the House Judiciary Committee's YouTube channel. Witnesses included Turning Point USA's Candace Owens and Mort Klein, president of the Zionist Organization of America.

https://www.spreaker.com/user/radiosputnik/black-lists-hate-speech-and-the-destruct

However, as the hearing kicked off, the live chat accompanying the stream began accumulating inflammatory remarks from users, ultimately forcing the video giant to disable it. "Hate speech has no place on YouTube," a statement from the company reads.

Garaffa told hosts Eugene Puryear and Sean Blackmon that the move underscores "how hard it is to moderate this kind of stuff online."

"There's thousands of comments being posted on videos every second. You can't have people monitor them, and when you get into AI [artificial intelligence] trying to monitor that, you get into so many false positives and false hits that it's just not effective," he stressed.

Indeed, using AI to moderate comments or videos on any social platform is a difficult task. The Daily Beast recently reported that Facebook's counterterror algorithms failed to remove video footage of the Christchurch shootings from the site because there wasn't "enough gore" to trigger the system.

According to Facebook, while its AI system has been trained to detect videos featuring nudity, terrorist propaganda and various degrees of graphic violence, it hasn't yet been taught to recognize and remove scenes similar to those shown in the deadly New Zealand mosque attacks.

"This particular video did not trigger our automatic detection systems," reads Facebook's statement. "To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare.

The livestream footage from the New Zealand attacks remained on Facebook for about an hour before the country's law enforcement officials called on the company to remove it. According to the Washington Post, the video was uploaded or re-uploaded to the platform more than 1.5 million times.

Garaffa told Puryear that, at the end of the day, the answer to the surge in online hate speech lies not with internet giants such as Facebook, Twitter or Google (which owns YouTube), but with individuals themselves, who can actively demonstrate and use their voices to counter hateful speech.

"That's how we're going to get rid of these racist, hateful viewpoints — in the streets, not on Facebook, and not by allowing Facebook to do it for us," he said.
