Five questions posed by Facebook’s two-year ban on Donald Trump

On Friday, Facebook announced that it would suspend former president Donald Trump from the social network for two years, until at least January 7, 2023, and said he would “only be reinstated if conditions permit.”

The announcement comes in response to recommendations last month from Facebook’s recently created Oversight Board. Facebook had hoped that the board would decide how to handle Trump’s account, but while it upheld the company’s initial decision to ban Trump from the platform for inciting violence on January 6, it punted the long-term decision back to executives in Menlo Park.

The news that Trump would be banned from Facebook for another 19 months was meant to provide some answers on the platform’s relationship with the former president—but instead it leaves many open questions.

Who is this decision supposed to please?

Although the announcement provides some actual rules about how politicians can use Facebook—and some guidance on how those rules will be enforced—the decision to ban Trump for at least two years isn’t likely to please anyone. Advocacy groups like Ultraviolet and Media Matters, which have long pushed Facebook to ban Trump, released statements saying that anything less than a permanent ban is inadequate. Meanwhile, people who see any rule enforcement against conservative politicians as proof that Facebook penalizes conservative content will continue to feel that way, despite considerable evidence that, if anything, the opposite is true. And it leaves open the possibility that Trump will be back online in time for the 2024 election cycle.

What does “newsworthiness” mean now?

Many platforms, including Facebook, have used a “newsworthiness” exception to avoid enforcing their own rules against politicians and world leaders. Facebook’s announcement comes with some changes to how it’ll use that loophole in the future. First, Facebook said, it will publish a notice whenever it applies the rule to an account. And second, it “will not treat content posted by politicians any differently from content posted by anyone else” when applying the rule, which basically means determining whether the public interest in a rule-breaking piece of content outweighs the potential harm of keeping it online. 

Facebook formally introduced this policy in late 2016, after censoring an iconic photo from the Vietnam War because it contained nudity. However, the newsworthiness exception evolved into a blanket exemption for politicians, including Trump, which allowed rule-breaking content to stay online because it was considered in the public interest by default. While this announcement appears to end that blanket protection, it doesn’t get rid of the exception entirely, nor does it explain in any more detail how Facebook will determine whether a piece of content qualifies.

Who made this decision?

The announcement was authored by Nick Clegg, the company’s vice president of global affairs, but refers throughout to “we.” However, it does not specify who at Facebook was involved in the decision-making process—which is important for transparency and credibility, given the controversial nature of the decision. 

“We know today’s decision will be criticized by many people on opposing sides of the political divide—but our job is to make a decision in as proportionate, fair, and transparent a way as possible,” Clegg wrote. 

Where will Facebook get advice?

The announcement also says that the company will look to “experts” to “assess whether the risk to public safety has receded,” without specifying who those experts will be, what expertise they will bring, or who at Facebook will have the authority to act on their insights. The Oversight Board, which was intended partly as a way of outsourcing controversial decisions, has already signaled that it does not wish to perform that role.

This means that knowing whose voice will matter to Facebook, and who will have authority to act on the advice, is especially important—particularly given the high stakes. Conflict assessment and violence analysis are specialized fields, and ones in which Facebook’s previous responses do not inspire much confidence. Three years ago, for example, the United Nations accused the company of being “slow and ineffective” in responding to the spread of hatred online that led to attacks on the Rohingya minority in Myanmar. Facebook commissioned an independent report by the nonprofit Business for Social Responsibility that confirmed the UN’s claims.

That report, published in 2018, noted the possibility of violence in the 2020 US elections and recommended steps that the company could take to prepare for such “multiple eventualities.” Facebook executives at the time acknowledged that “we can and should do more.” But during the course of the 2020 election campaign, after Trump lost the presidency, and in the run-up to January 6, the company made few attempts to act on those recommendations.

What happens in 2023?

Then there is the limited nature of the ban—and the fact that it may just kick the same conversation down the road until it is possibly even more inconvenient than it already is. Unless Facebook decides to further extend the ban based on its definition of “conditions permitting,” it will lift just in time for the primary season of the next presidential election cycle. What could possibly go wrong?