Some news articles shared on the social media platform will now feature an "i" icon, standing for additional information. The box will tell netizens more about the publisher, show where in the world the article has gone viral, and suggest other articles from the same media outlet. Separately, it will show whether any of your friends have already shared the story. The first two features were tested last year, while the latter two are presented as the newest ones.
The newly unveiled features, however, seem to have a couple of issues. Firstly, the information about media outlets is drawn from Wikipedia, which, though commonly considered an authoritative website, is still crowdsourced, which does not guarantee immunity from political or other biases. Moreover, the reliance on this source has been dubbed "humorous" on Twitter, with many people feeling deeply skeptical about the move at large:
Apparently there is a lot of talk of people that are fed up with Facebook. I believe the time has come to delete that garbage from our lives. Stealing our private information and selling it. If you are ready to take that step please like and RT. Let's see if the Fake News reports on it!— Brett Reed🇺🇸 #OHIO STATE (@BReedohio) April 5, 2018
I have an idea, @facebook! Seriously. Hire out-of-work, experienced journalists to seek out and moderate fake news. They are highly skilled and need work, and you'd be heroes. It would be really good for everyone all around. https://t.co/jC4LMD4y6k— Brooke Binkowski (@brooklynmarie) April 4, 2018
Secondly, the feature is, for the time being, applied inconsistently, with some news stories featuring no information about their publisher whatsoever. For instance, video posts, which are often a prime vehicle for fake news, as seen during the Las Vegas shooting in 2017, do not feature the icon at all. Neither do many right-wing conspiracy sites, such as Gateway Pundit, even though such outlets are covered by Wikipedia.
Facebook has, however, promised to address the inconsistencies and make the features universally applied.
In response, many Twitter users started sharing a video showing that it takes next to no time and effort to create a fake news page on Facebook:
How easy is it to spread fake news on Facebook? Let this online marketer explain in 4 steps. pic.twitter.com/l1UHOIwAfy— Al Jazeera English (@AJEnglish) April 4, 2018
Just ran into this on Facebook and it struck me as yet another demonstration of how easily fake news (even useless ones like this invented quote) can be perpetuated nowadays, AND without even making an effort. Poor editing, shitty font and still, somehow, 7000 people buy it. How? pic.twitter.com/6c58Stgn3R— Federico (@FedericoManasse) April 5, 2018
One Twitter user sorrowfully remarked that it’s the users who are ultimately responsible for sharing fake news:
Sadly, it is OUR responsibility to spot the fake news BEFORE we share it and become a part of the problem. https://t.co/XA6ryd3JIE— Steve Dotto (@dottotech) April 5, 2018
In a separate move, Facebook said it had removed a feature that allowed users to enter a phone number or email address to find others.
That feature was being exploited by malicious actors to scrape public profile information, Facebook said, apparently with reference to the Cambridge Analytica case, in which data on what was originally thought to be roughly 50 million Facebook users was improperly handled.
Facebook has now revealed, though, that as many as 87 million users, most of them in the US, may have had their information illegally obtained and misused by the data mining firm Cambridge Analytica. The revelation means that nearly twice as many Facebook users as first reported may have been directly affected by the unauthorized sale of the social network's user data to the third-party company, which was contracted by the Trump team to assist with election ad targeting.
Separately, Facebook has long been at the center of the fake news scandal, ever since the 2016 US presidential election. The measures previously introduced by the company's management included marking dodgy articles with red flags, as well as inserting a "related articles" section, which was meant to provide readers with additional sources of information on the same topic.
Facebook also tried prioritizing negative comments that expressed disbelief in the news, but the move led to the mislabeling of news sources, with many reliable outlets flagged as fake.