Facebook’s new tool to stop fake news is a game changer—if the company actually uses it


When an explosive—and most likely fake—story about Joe Biden’s son began to circulate online this week, Facebook did something unusual: It decided to restrict its spread while it investigated the story’s accuracy.


This marked the first prominent deployment of a tool the company has been testing for several months. Facebook calls the tool a “viral content review system,” while some news outlets and research outfits have referred to it as a “circuit breaker.” Whatever its name, the tool has enormous potential to limit a tsunami of false or misleading news on topics like politics and health.


The circuit-breaker tactic is a common-sense way for the social network to fix its fake news problem, but it may also run counter to Facebook’s business interests. This means that it’s too soon to say whether Facebook’s actions on the Biden post will be a one-off occurrence or a new embrace of civic accountability by a company that has long resisted it.

The promise of viral circuit breakers

Not every post on Facebook is treated equally, as most people are aware. Instead, the site’s algorithm amplifies the reach of those most likely to elicit a reaction. That’s why a picture of a new baby from a long-ago acquaintance will vault to the top of your Facebook feed, even if you haven’t seen any other posts by that person for years.

While the algorithm rewards pictures of newborns and puppies, it is also inclined to promote news stories—including fake ones—likely to elicit a reaction. That’s what occurred prior to the 2016 election, when stories from sites in Macedonia, masquerading as U.S. conservative news sites, went viral on Facebook. (The sites in question were run by teenagers seeking to make money from ads.)
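Facebook has not published the details of its ranking system, but the dynamic described above, a feed ordered by predicted reactions rather than recency or closeness, can be pictured with a minimal sketch. Every name below (Post, predicted_reactions, rank_feed) is invented for illustration and does not come from any real Facebook API:

```python
# Hypothetical sketch of engagement-driven ranking; all names are invented.
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    content: str
    predicted_reactions: float  # a model's estimate of likes/comments/shares


def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed by predicted engagement, highest first.

    A post expected to provoke a strong reaction (a newborn photo, an
    outrage-bait headline) outranks everything else, regardless of how
    often you interact with its author.
    """
    return sorted(posts, key=lambda p: p.predicted_reactions, reverse=True)


feed = rank_feed([
    Post("old_acquaintance", "We had a baby!", predicted_reactions=950.0),
    Post("close_friend", "Nice walk today", predicted_reactions=12.0),
    Post("unknown_site", "SHOCKING election claim", predicted_reactions=2400.0),
])
print([p.author for p in feed])  # ['unknown_site', 'old_acquaintance', 'close_friend']
```

Under an objective like this, a fabricated story engineered to provoke outrage competes on equal terms with, and often outscores, legitimate news, which is exactly the dynamic the Macedonian sites exploited.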

Today, the problem of fake news circulating on Facebook is just as prevalent—and possibly more dangerous. This week, the New York Times listed four false election stories circulating widely on Facebook, including a baseless rant about an impending Democratic coup that has been viewed nearly 3 million times. Another example, this one trending in left-wing circles, is a fake report about a mysterious cabal that’s blocking mailboxes to discourage voting. And last month, Facebook users circulated stories (likewise fake) that radical leftists were setting the wildfires in the West. The ensuing hysteria led to sheriffs’ offices and firefighters wasting critical time and resources on nuisance calls.

Until now, Facebook has responded to this sort of viral misinformation by pointing to the team of fact checkers it employs, which can result in Facebook taking down some stories or placing a warning label on them. Critics, however, say the process is feckless because any response typically comes days later—meaning the stories have already reached an enormous audience. Or, as the axiom goes, “[Facebook’s] lie has gone halfway around the world before the truth has had a chance to get its pants on.”

This situation led the Center for American Progress, a Washington think tank, to include circuit breakers as its first recommendation in a landmark report on how social media platforms can reduce misinformation. The idea has also been endorsed by the German Marshall Fund of the United States (GMFUS), another policy think tank.


“Circuit breakers like those used by high-frequency traders on Wall Street would be a way for them to pause algorithmic promotion before a post does damage,” says Karen Kornbluh, a policy expert at GMFUS. “It gives them time to decide if it violates their rules. They don’t need to take it down, but they can stop promoting it.”

Circuit breakers thus appear to offer the best of both worlds: They allow Facebook to limit the spread of misinformation without taking the draconian step of removing a post altogether.
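Facebook has not disclosed how its viral content review system decides when to step in, but the mechanism Kornbluh describes, pausing promotion once a post crosses a virality threshold and queuing it for human review, might look something like the following sketch. The threshold, field names, and review queue are all assumptions made for illustration:

```python
# Hypothetical "viral circuit breaker"; thresholds and names are invented.
from dataclasses import dataclass
from typing import Optional

TRIP_SHARES_PER_HOUR = 1_000  # assumed trip point, purely illustrative


@dataclass
class PostState:
    post_id: str
    shares_per_hour: float
    under_review: bool = False
    verdict: Optional[str] = None  # None until fact checkers rule


review_queue: list[str] = []  # posts awaiting human fact-checking


def amplification_factor(post: PostState) -> float:
    """How much algorithmic promotion a post gets (1.0 = normal, 0.0 = none).

    The post is never deleted; tripping the breaker only stops the
    algorithm from actively spreading it while reviewers investigate.
    """
    if post.verdict == "false":
        return 0.0  # reviewed and found false: keep it demoted (or label it)
    if post.shares_per_hour > TRIP_SHARES_PER_HOUR and post.verdict is None:
        if not post.under_review:
            post.under_review = True
            review_queue.append(post.post_id)  # hand off to fact checkers
        return 0.0  # breaker tripped: pause promotion, don't take down
    return 1.0  # normal engagement-driven distribution


suspect = PostState("explosive-story", shares_per_hour=5_000)
print(amplification_factor(suspect))  # 0.0: promotion paused
print(review_queue)                   # ['explosive-story']
```

If reviewers clear the post, normal distribution resumes; if they rule it false, it stays demoted or receives a warning label. The point of the design is that the pause kicks in automatically at the moment of virality, rather than days later when fact checkers finish their work.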

And indeed, that’s what Facebook did on Tuesday when spokesperson Andy Stone declared that the company was responding to the suspect Hunter Biden story by “reducing its distribution” while fact checkers investigated its veracity. It deployed a circuit breaker.

But it’s far from clear if circuit breakers will be a regular part of Facebook’s misinformation strategy, or if the Hunter Biden decision will stand instead as a rare exception to Facebook’s practice of letting fake news flow freely on its platform.

Can Facebook change a viral business model?

Facebook’s use of a circuit breaker is one of several encouraging steps the platform has taken this month to limit misinformation, including a ban on posts that deny or distort the Holocaust. But there are reasons to be skeptical.

As a scathing new profile of Facebook in the New Yorker observes, “The company’s strategy has never been to manage the problem of dangerous content, but rather to manage the public’s perception of the problem.”

In the case of circuit breakers, the company has been cagey about how widely they are being deployed. In an interview with Fortune, a Facebook spokesperson, who spoke on condition of anonymity, noted that in most cases few users will notice when the company uses them. She also cited a recent example of a circuit breaker working: an audio post suggesting right-wing activists run over protesters with cars.

But the spokesperson did not explain why circuit breakers failed to slow the four fake stories cited by the New York Times, or provide any data about how often they have been used. Instead, she said, the system served as a backup to Facebook’s policy-based moderation tools, which she claimed do an effective job of screening for noxious content—a proposition many critics dispute.

Facebook’s reluctance to elaborate is perhaps understandable. Republicans, responding to Facebook’s decision to temporarily limit the Biden story, warned that they would make it easier for people to sue the company over the content its users post. In a hyper-partisan climate, any step Facebook takes may leave it open to accusations of bias and political retaliation.

Meanwhile, Facebook has another incentive not to use circuit breakers in a meaningful way: Doing so would mean less “engagement” on its platform and, by extension, less ad money. In the view of one critic cited in the New Yorker profile, Facebook’s “content-moderation priorities won’t change until its algorithms stop amplifying whatever content is most enthralling or emotionally manipulative. This might require a new business model, perhaps even a less profitable one.”

The critic, a lawyer and activist named Cori Crider, went on to suggest that Facebook is unlikely to make such a change in the absence of regulation. The company, meanwhile, has yet to offer a convincing answer as to how it plans to reconcile the tension between its ethical duty to limit the spread of misinformation and the fact that it makes money when such misinformation goes viral.

Kornbluh of GMFUS says this tension is what leads Facebook and other social media platforms to err on the side of waiting—meaning harmful posts can earn millions of views before any action is taken. She argues that this approach must change, and that circuit breakers offer the potential to do enormous good with little harm.

“A circuit breaker approach wouldn’t force them to deny anyone the right to post—but would deny them amplification,” she says.

This story was originally featured on Fortune.com
