Saturday, May 21, 2022

Facebook News Feed bug mistakenly elevates misinformation, Russian state media


A group of Facebook engineers identified a “massive ranking error” that exposed as much as half of all news feed views over the past six months to potential “integrity risks,” according to an internal report on the incident obtained by The Verge.

The engineers first noticed the problem last October, when a sudden wave of misinformation began to flood the news feed, according to the report, which was shared within the company last week. Rather than suppressing posts from repeat misinformation offenders that had been reviewed by the company’s network of third-party fact-checkers, the news feed instead gave those posts distribution, increasing their views globally by as much as 30 percent. Unable to find the root cause, the engineers watched the surge subside a few weeks later, then flare up repeatedly until the ranking problem was fixed on March 11.

In addition to posts flagged by fact-checkers, the internal investigation found that during the bug period, Facebook’s systems failed to properly demote likely nudity, violence, and even Russian state media that the social network had recently pledged to stop recommending in response to the country’s invasion of Ukraine. The issue was internally designated a level-one SEV, or site event — a label reserved for high-priority technical crises, such as Russia’s ongoing blocking of Facebook and Instagram.

Meta spokesperson Joe Osborne confirmed the incident in a statement to The Verge, saying the company “detected inconsistencies in downranking on five separate occasions, which correlated with small, temporary increases in internal metrics.” The internal documents said the technical issue was first introduced in 2019 but did not have a noticeable impact until October 2021. Osborne added that the bug had no meaningful, long-term impact on the company’s metrics and did not apply to content that met the system’s threshold for removal.

For years, Facebook has touted downranking as a way to improve news feed quality and has steadily expanded the types of content the automated system responds to. Downranking has been used in response to wars and controversial political narratives, raising concerns about shadow bans and calls for legislation. Despite its increasing importance, Facebook has yet to open up about its impact on what people see and, as this incident shows, what happens when the system goes wrong.

In 2018, CEO Mark Zuckerberg explained that downranking fights the natural impulse people have to engage with “more sensational and provocative” content. “Our research suggests that no matter where we draw the line for what’s allowed, if a piece of content gets close to that line, people will, on average, be more engaged with it — even if they tell us afterwards that they don’t like the content,” he wrote in a Facebook post at the time.

Downranking suppresses not only what Facebook calls “borderline” content that comes close to breaking its rules, but also content that its AI systems suspect of violating them yet requires further human review. The company released a high-level list of what it demotes last September, but it has not detailed exactly how demotion affects the distribution of affected content. Officials have told me they hope to shed more light on how demotion works but worry that doing so would help adversaries game the system.
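To make the idea of demotion concrete, here is a minimal, hypothetical sketch of how a ranking pipeline might apply demotion multipliers to a post’s score. The labels, weights, and function names are illustrative assumptions, not a description of Facebook’s actual system.

```python
# Hypothetical sketch of score-based demotion ("downranking").
# Labels and multipliers are illustrative assumptions, not Facebook's real values.

DEMOTION_FACTORS = {
    "borderline": 0.5,       # close to breaking the platform's rules
    "pending_review": 0.7,   # AI suspects a violation; awaiting human review
    "repeat_misinfo": 0.3,   # flagged by third-party fact-checkers
}

def ranked_score(base_score: float, labels: set[str]) -> float:
    """Multiply a post's base ranking score by every applicable demotion factor."""
    score = base_score
    for label in labels:
        score *= DEMOTION_FACTORS.get(label, 1.0)
    return score

# A fact-checked repeat offender's post drops from 10.0 to 3.0,
# so it is shown far less often without being removed outright.
print(ranked_score(10.0, {"repeat_misinfo"}))
```

In a sketch like this, a bug that skipped or misapplied the multipliers would leave flagged posts ranked as if they were ordinary content, which is the kind of failure the internal report describes.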

Meanwhile, Facebook leaders regularly boast that their AI systems are getting better every year at proactively detecting content such as hate speech, placing greater importance on the technology as a means of moderating at scale. Last year, Facebook said it would reduce the ranking of all political content in its news feed, part of CEO Mark Zuckerberg’s push to return the Facebook app to its more light-hearted roots.

I’ve seen no indication that there was malicious intent behind this recent ranking flaw that affected up to half of News Feed views over a period of months, and thankfully it didn’t break Facebook’s other moderation tools. But the incident shows why more transparency is needed in internet platforms and the algorithms they use, according to Sahar Massachi, a former member of Facebook’s Civic Integrity team.

“In a large complex system like this, bugs are inevitable and understandable,” said Massachi, who is now a co-founder of the nonprofit Integrity Institute, in an interview with The Verge. “But what happens when a powerful social platform has one of these accidental flaws? How would we even know? We need real transparency to build a sustainable accountability system so we can help them resolve these issues quickly.”

Clarification at 6:56 PM ET: Updated with confirmation from Facebook that accounts designated as repeat misinformation offenders saw their views spike by as much as 30 percent, and that the bug did not affect the company’s ability to remove content that explicitly broke its rules.

Correction at 7:25 PM ET: Story updated to note that “SEV” stands for “site event” and not “serious technical vulnerability,” and that level one is not the highest crisis level. There is a level-zero SEV used for the most dramatic emergencies, such as a global outage. We regret the error.
