17:53 GMT +3, 21 October 2019

    'Unbelievable': YouTube Stream on US Anti-Hate Hearing Packed with Racist Speech

    Opinion

    YouTube's decision to shut down the comments section on its Tuesday livestream of the House Judiciary Committee's hearing on white nationalism highlights the difficulties of moderating hate speech online, technologist Chris Garaffa told Sputnik.

    "It was unbelievable to be watching this livestream," Garaffa told Radio Sputnik's By Any Means Necessary on Wednesday. "There was just racist, anti-Semitic, straight up neo-Nazi-style comments on this video of a hearing that's supposedly talking about hate speech."

    The hearing, "Hate Crimes and the Rise of White Nationalism," was livestreamed on the House Judiciary Committee's YouTube channel. Various individuals, including Turning Point USA's Candace Owens and Mort Klein, president of the Zionist Organization of America, attended the event.

    However, as the hearing kicked off, the live chat accompanying the stream began accumulating inflammatory remarks from users, ultimately forcing the video giant to disable it. "Hate speech has no place on YouTube," a statement from the company reads.

    Garaffa told hosts Eugene Puryear and Sean Blackmon that the move underscores "how hard it is to moderate this kind of stuff online."

    "There's thousands of comments being posted on videos every second. You can't have people monitor them, and when you get into AI [artificial intelligence] trying to monitor that, you get into so many false positives and false hits that it's just not effective," he stressed.
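As an illustration of the false-positive problem Garaffa describes (a minimal sketch, not any platform's actual moderation system), a naive keyword blocklist flags benign comments that merely mention a blocked term. The blocklist and sample comments below are hypothetical:

```python
# Minimal sketch of naive keyword moderation (illustrative only):
# flagging any comment that contains a blocked substring catches
# abusive posts but also benign discussion ABOUT hate speech --
# a classic source of false positives.

BLOCKLIST = {"hate", "nazi"}  # hypothetical blocklist

def flag_comment(text: str) -> bool:
    """Return True if any blocklisted substring appears in the comment."""
    lowered = text.lower()
    return any(word in lowered for word in BLOCKLIST)

comments = [
    "Nazi ideology is great",                    # abusive: flagged (true positive)
    "This hearing is about hate speech online",  # benign: also flagged (false positive)
    "Interesting testimony today",               # benign: not flagged
]
flags = [flag_comment(c) for c in comments]
```

A comment reporting on the hearing itself trips the same filter as an abusive one, which is why purely automated filtering at the scale of thousands of comments per second produces so many false hits.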

    Indeed, using AI technology to moderate comments or videos on any social platform is a troublesome task. The Daily Beast recently reported that Facebook's counterterror algorithms failed to remove video footage of the Christchurch shootings from its site because there wasn't "enough gore" to trigger the system.

    According to Facebook, while its AI system has been trained to detect videos featuring nudity, terrorist propaganda and various degrees of graphic violence, it hasn't yet been taught to recognize and remove scenes similar to those shown in the deadly New Zealand mosque attacks.

    "This particular video did not trigger our automatic detection systems," reads Facebook's statement. "To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare."

    The livestream footage from the New Zealand attacks remained on Facebook for about an hour before the country's law enforcement officials called on Facebook to remove it. According to the Washington Post, the video was uploaded onto the platform more than 1.5 million times.

    Garaffa told Puryear that, at the end of the day, the remedy for the surge in online hate speech lies not with internet giants such as Facebook, Twitter or Google (which owns YouTube), but with individuals themselves, who can actively demonstrate and use their own voices to counter it.

    "That's how we're going to get rid of these racist, hateful viewpoints — in the streets, not on Facebook, and not by allowing Facebook to do it for us," he said.

    The views and opinions expressed in the article do not necessarily reflect those of Sputnik.

    Related:

    Internet Regulation Dilemma: 'Hate Speech Is Not Freedom of Speech' - EU Analyst
    Norway Rules 'F**k Jews' Remark by Muslim Rapper is Not Hate Speech
    Twitter Not Banning Infowars: ‘US Laws Don't Define Hate Speech Clearly' - Prof
    Xenophobia, Hate Speech Widespread in EU Amid Migration Crisis - Report
    Swedish Journalist Probed for 'Hate Speech' Over Sharia-Mocking Cartoons
    Tags:
    Artificial Intelligence (AI), Social media, Hate Speech, Twitter, Facebook, YouTube, United States