Facebook has agreed to follow some of the board’s nonbinding recommendations regarding the Donald Trump suspension. That’s progress. 

Judging from the press releases filling my inbox and the tweets lighting up my timeline, no one is happy with Facebook right now. On Friday, the company issued its response to the Facebook Oversight Board’s recommendations on the indefinite ban of Donald Trump. We learned that Trump’s account is now frozen for precisely two years from his original January 7 suspension date, at which point Facebook will reassess the risks of letting him back on. The response also includes a number of other policy changes. Opinions on the announcement range from calling it a pointless bit of “accountability theater” to suggesting that it’s cowardly and irresponsible. Republicans are, of course, outraged that Trump hasn’t been reinstated.

I confess to finding myself in a different camp. The Oversight Board is performing a valuable, though very limited, function, and the Trump situation illustrates why.

When the board first published its ruling last month, it issued both a binding command—Facebook must articulate a specific action on Donald Trump’s account and could not continue an indefinite suspension—and nonbinding recommendations, most notably that the platform abandon its policy of treating statements by politicians as inherently “newsworthy” and thus exempt from the rules that apply to everyone else. As I wrote at the time, Facebook’s response to the nonbinding part would probably prove more important. It would apply more broadly than to just Trump’s account, and it would show whether the company is willing to follow the Oversight Board’s advice even when it doesn’t have to.

Now we know that the answer to that last question is yes. In its announcement on Friday, Facebook says it is committed to fully following 15 of the 19 nonbinding recommendations. Of the remaining four, it is rejecting one, partially following another, and doing more research on two.

The most interesting commitments are around the “newsworthiness allowance.” Facebook says it will keep the exception in place, meaning it will still allow some content that violates its Community Standards to stay up if it is “newsworthy or important to the public interest.” The difference is that the platform will no longer treat posts by politicians as more inherently newsworthy than posts by anyone else. It is also increasing transparency by creating a page explaining the rule; beginning next year, it says it will publish an explanation each time the exception is applied to content that otherwise would have been taken down.

Let this sink in for a moment: Facebook took detailed feedback from a group of thoughtful critics, and Mark Zuckerberg signed off on a concrete policy change, plus some increased transparency. This is progress!

Now, please don’t confuse this for a complete endorsement. There is plenty to criticize about Facebook’s announcement. On the Trump ban, while the company has now articulated more detailed policies around “heightened penalties for public figures during times of civil unrest and ongoing violence,” the fact that it came up with a two-year maximum suspension seems suspiciously tailored to potentially allow Trump back on the platform just when he’s getting ready to start running for president again. And Facebook’s new commitments to transparency leave much to be desired. Its new explanation of the newsworthiness allowance, for example, provides zero information about how Facebook defines “newsworthy” in the first place—a pretty important detail. Perhaps the case-by-case explanations beginning next year will shed more light, but until then the policy is about as transparent as a fogged-over bathroom window.

Indeed, as with any announcement from Facebook, this one will be impossible to evaluate fully until we see how the company follows through in practice. In several cases, Facebook claims that it's already following the Oversight Board's recommendations. Some of those claims strain credulity. For instance, in response to a suggestion that it rely on regional linguistic and political expertise in enforcing policies around the world, the company declares, "We ensure that content reviewers are supported by teams with regional and linguistic expertise, including the context in which the speech is presented." And yet a Reuters investigation published this week found that posts promoting gay conversion therapy, which Facebook's rules prohibit, continue to run rampant in Arab countries, "where practitioners post to millions of followers through verified accounts." As the content moderation scholar Evelyn Douek puts it, with many of its statements "Facebook gives itself a gold star, but they're really borderline passes at best."

The aspect of Facebook’s response that seems set to draw the most criticism (aside from Republican outrage that Trump remains suspended) is its decision not to follow the recommendation to review its own role in contributing to the violence of January 6 and publish its findings. Critics of the Oversight Board will point to that fact as proof that the board is powerless to tackle the most important issue, namely the extent to which Facebook’s design amplifies the spread of false and dangerous material. As the Knight First Amendment Institute put it in comments submitted to the Oversight Board, “Trump’s statements on and off social media in the days leading up to January 6 were certainly inflammatory and dangerous, but part of what made them so dangerous is that, for months before that day, many Americans had been exposed to staggering amounts of sensational misinformation about the election on Facebook’s platform, shunted into echo chambers by Facebook’s algorithms, and insulated from counter-speech by Facebook’s architecture.”

This strikes me as a reasonable observation that is ultimately unfair. Reasonable because, yes, the engagement-driven algorithms powering Facebook—and other social media platforms, for that matter—are the most powerful forces affecting how content spreads online. Unfair because the Oversight Board was designed to address a different issue, namely the enforcement of Facebook’s content policies. Just because algorithmic amplification is more important than content moderation doesn’t mean content moderation doesn’t matter. That’s why the constant criticism of the Oversight Board as a weak substitute for government regulation only goes so far. Yes, there is a lot that the government can and should do to address the power that social media giants have over American political and economic life, including antitrust enforcement and regulating (dare I say banning?) surveillance advertising. But it is a ludicrous pipe dream to imagine that the US government will ever regulate content policies directly, or that the First Amendment would allow it to. As long as social media companies exist, they will need to decide what is and isn’t allowed on their platforms.

Facebook has been raked over the coals for years for its failures to enforce content rules consistently, fairly, or transparently. It’s a bit strange to now suggest that any steps the company takes to address those shortcomings are fundamentally illegitimate—especially when even the very limited transparency Facebook is offering is light years ahead of what we’re getting from Twitter or YouTube. (Asked for comment, YouTube had no updates on that platform’s own indefinite Trump ban. “We’ll lift the suspension on the Donald J. Trump channel when we determine a decrease in the risk of real world violence,” a spokesperson wrote in an email.)

Corporate self-regulation is never sufficient, but it is always necessary. Traditional media doesn’t adhere to journalistic ethics and ideals of objectivity because Congress forces it to; it does so because of a body of mostly self-imposed norms developed over the past century. When a media organization decides to abandon those norms, you get something like Fox News broadcasting political disinformation to millions of viewers night after night. Social media is incredibly young, in historical terms, and is not going to cede its power over public discourse any time soon. Elizabeth Warren could wave a magic wand and make Facebook spin off Instagram and WhatsApp tomorrow—and it would still have billions of users around the world. It’s essential that the industry develop mature norms around content moderation to fill in the gaps that government regulation can never address.

Is Facebook there yet? No. Is its response to the Oversight Board, despite its disappointing limitations, a sign of progress nonetheless? Yes.

