“A handful of times a year, we leave up content that would otherwise violate our policies if the public interest value outweighs the risk of harm. Often, seeing speech from politicians is in the public interest, and in the same way that news outlets will report what a politician says, we think people should generally be able to see it for themselves on our platform,” Zuckerberg said in a Friday Facebook post.
“We will soon start labeling some of the content we leave up because it is deemed newsworthy, so people can know when this is the case,” he noted. “We'll allow people to share this content to condemn it, just like we do with other problematic content, because this is an important part of how we discuss what's acceptable in our society - but we'll add a prompt to tell people that the content they're sharing may violate our policies.”
The policy changes come after Facebook received significant flak for its failure to remove a June 1 post by US President Donald Trump warning that “when the looting starts, the shooting starts,” a reference to the mass protests in Minneapolis, Minnesota, and the National Guard’s arrival to restore order. Twitter quickly flagged a tweet with the identical message as glorifying violence, but Facebook dragged its feet for days.
"Personally, I have a visceral negative reaction to this kind of divisive and inflammatory rhetoric. I disagree strongly with how the president spoke about this, but I believe people should be able to see this for themselves, because ultimately accountability for those in positions of power can only happen when their speech is scrutinized out in the open,” Zuckerberg said at the time.
Just days earlier, the Facebook chief said he didn’t “think that Facebook or internet platforms in general should be arbiters of truth,” after Twitter flagged a different post of Trump’s that contained false information about mail-in voting ballot fraud.
More recently, on June 20, Facebook removed a widely shared video posted by Trump that purported to show CNN’s reporting on a story about toddlers of different races interacting. The footage, however, was heavily edited from what CNN had aired and included a fake CNN chyron. Facebook flagged the video as a copyright violation; Twitter, in taking down the same video on its platform, instead labeled it “manipulated media.”
Zuckerberg also noted new additions to the list of prohibited ad content on Friday.
“[T]oday we're prohibiting a wider category of hateful content in ads. Specifically, we're expanding our ads policy to prohibit claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status are a threat to the physical safety, health or survival of others,” Zuckerberg said. “We're also expanding our policies to better protect immigrants, migrants, refugees and asylum seekers from ads suggesting these groups are inferior or expressing contempt, dismissal or disgust directed at them.”
The change comes after Facebook took down 88 ads by Trump’s reelection campaign on June 18 that featured a warning about the dangers posed by the anti-fascist protest group Antifa and prominently displayed an inverted red triangle, the symbol the Nazis used to mark political prisoners and dissidents in concentration camps.
The Facebook CEO also announced on Friday that the company was creating a Voting Information Center to provide users with information on registering to vote. He noted the company would also be watching for attempts at voter suppression, such as posters spreading unverified claims that a city or district had become a COVID-19 hotspot on election days, or false claims that Immigration and Customs Enforcement (ICE) agents are patrolling near polling stations. Misinformation about where and how to vote will also merit punishment.