The ban on false and misleading information, announced by Facebook executives on Monday, comes six weeks after Sen. Ron Wyden (D-OR) asked the company’s Chief Operating Officer Sheryl Sandberg how Facebook would counter tactics that aim to mislead users and potentially prevent them from voting, Reuters reported.
Facebook executives explained that the new policies would be enforced by community standards moderators. Company officials said that such moderation would be more effective than removing all misleading posts outright.
“We don't believe we should remove things from Facebook that are shared by authentic people if they don't violate those community standards, even if they are false,” said Tessa Lyons, product manager for Facebook's News Feed feature, which shows users what their friends are sharing.
Facebook has previously come under fire over alleged Russian involvement in the 2016 US presidential election carried out through social media posts. Although little evidence has been found suggesting that such posts had much influence on the election, Facebook has been criticized for not taking action against dishonest accounts and misleading information on its platform. Since then, the company’s policies have undergone several changes.
“Without a clear and transparent policy to curb the deliberate spread of false information that applies across platforms, we will continue to be vulnerable,” said Graham Brookie, head of the Atlantic Council’s Digital Forensic Research Lab.
Other social media companies, including Reddit and Twitter, have also launched their own efforts to keep misinformation off their platforms.
In early October, Twitter announced that “as platform manipulation tactics continue to evolve, we are updating and expanding our rules to better reflect how we identify fake accounts, and what types of inauthentic activity violate our guidelines,” noting that it would continue removing accounts that spread false information about voting or that misrepresent themselves as members of political parties.