Can crowdsourced fact-checking curb misinformation on social media?

In a 2019 speech at Georgetown University, Mark Zuckerberg famously declared that he didn’t want Facebook to be an “arbiter of truth.” And yet, in the years since, his company, Meta, has used several methods to moderate content and identify misleading posts across its social media apps, which include Facebook, Instagram, and Threads. These methods have included automatic filters that identify illegal and malicious content, and third-party factcheckers who manually research the validity of claims made in certain posts.

More recently, however, Zuckerberg explained that while Meta has put a lot of effort into building “complex systems to moderate content” over the years, these systems have made many mistakes, resulting in “too much censorship.” The company therefore announced that it would end its third-party factchecker program in the US, replacing it with a system called Community Notes, which relies on users to flag false or misleading content and provide context about it.

While Community Notes has the potential to be extremely effective, the difficult job of content moderation benefits from a mix of different approaches. As a professor of natural language processing at MBZUAI, I’ve spent most of my career researching disinformation, propaganda, and fake news online. So, one of the first questions I asked myself was: will replacing human factcheckers with crowdsourced Community Notes negatively affect users?

Wisdom of crowds

Community Notes got its start on Twitter as Birdwatch. It’s a crowdsourced feature where users who participate in the program can add context and clarification to tweets they deem false or misleading. The notes are hidden until community evaluation reaches a consensus, meaning that people who hold different perspectives and political views agree that a post is misleading. An algorithm determines when the threshold for consensus is reached, and then the note becomes publicly visible beneath the tweet in question, providing additional context to help users make informed judgments about its content.
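To make the bridging idea concrete, here is a toy sketch of a consensus check in Python. It is not X’s actual open-source scoring model, which uses a more sophisticated matrix-factorization approach; the data layout, group labels, and thresholds below are illustrative assumptions only.

```python
from collections import defaultdict

def note_reaches_consensus(ratings, min_ratings=5, min_helpful_ratio=0.7):
    """Toy bridging check: a note becomes visible only if raters from
    *each* viewpoint group independently find it helpful.

    `ratings` is a list of (viewpoint_group, is_helpful) tuples --
    a hypothetical structure used here for illustration.
    """
    by_group = defaultdict(list)
    for group, is_helpful in ratings:
        by_group[group].append(is_helpful)

    if len(by_group) < 2:          # require agreement across perspectives
        return False

    for votes in by_group.values():
        if len(votes) < min_ratings:
            return False
        if sum(votes) / len(votes) < min_helpful_ratio:
            return False
    return True

# Example: raters from two different groups both rate the note helpful
ratings = [("group_a", True)] * 6 + [("group_b", True)] * 5 + [("group_b", False)]
print(note_reaches_consensus(ratings))  # True
```

The key design choice this sketch captures is that raw vote counts are not enough: a note shown to everyone must be rated helpful by people who usually disagree with one another.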

Community Notes seems to work rather well. A team of researchers from the University of Illinois Urbana-Champaign and the University of Rochester found that X’s Community Notes program can reduce the spread of misinformation and even lead authors to retract their posts. Meta is now adopting largely the same approach that is used on X today.

Having studied and written about content moderation for years, I’m glad to see another major social media company implementing crowdsourcing for content moderation. If it works for Meta, it could be a true game-changer for the more than 3 billion people who use the company’s products every day.

That said, content moderation is a complex problem. There is no single silver bullet that works in all situations. The challenge can only be addressed by employing a variety of tools, including human factcheckers, crowdsourcing, and algorithmic filtering. Each is best suited to different kinds of content, and they can and must work in concert.

Spam and LLM safety

There are precedents for addressing similar problems. Decades ago, spam email was a much bigger problem than it is today, and we have largely defeated it through crowdsourcing. Email providers introduced reporting features that let users flag suspicious emails. The more widely distributed a particular spam message is, the more likely it is to be caught, because more people see and report it.
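As a rough illustration of that crowdsourced signal, a provider might aggregate reports of identical messages and filter a message once enough distinct users have flagged it. The fingerprinting scheme and threshold below are simplified assumptions, not any provider’s actual pipeline.

```python
from collections import Counter

# Toy crowdsourced spam filter: once enough users report the same message,
# it is filtered for everyone.
REPORT_THRESHOLD = 100
reports: Counter = Counter()   # message fingerprint -> number of reports

def fingerprint(message_body: str) -> str:
    """Crude fingerprint; real systems use fuzzy or locality-sensitive hashing."""
    return message_body.strip().lower()

def report_spam(message_body: str) -> None:
    reports[fingerprint(message_body)] += 1

def is_spam(message_body: str) -> bool:
    return reports[fingerprint(message_body)] >= REPORT_THRESHOLD
```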

Another useful comparison is how large language models (LLMs) approach harmful content. For the most dangerous queries, such as those related to weapons or violence, many LLMs simply refuse to answer. Other times, these systems may add a disclaimer to their outputs, such as when they are asked to provide medical, legal, or financial advice. This tiered approach is one that my colleagues and I at MBZUAI explored in a recent study, where we propose a hierarchy of ways LLMs can respond to different kinds of potentially harmful queries. Similarly, social media platforms can benefit from different approaches to content moderation.
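A minimal sketch of what such a response hierarchy can look like in code is shown below. The topic categories and wording are hypothetical placeholders, not the taxonomy from our study.

```python
# Hypothetical risk tiers: refuse the most dangerous topics, add a
# disclaimer to sensitive ones, and answer the rest normally.
REFUSAL_TOPICS = {"weapons", "violence"}
DISCLAIMER_TOPICS = {"medical", "legal", "financial"}

def respond(query_topic: str, draft_answer: str) -> str:
    """Route a model's draft answer through a simple response hierarchy."""
    if query_topic in REFUSAL_TOPICS:
        return "I can't help with that request."
    if query_topic in DISCLAIMER_TOPICS:
        return ("Note: this is general information, not professional advice.\n"
                + draft_answer)
    return draft_answer

print(respond("medical", "Rest and fluids usually help with a mild cold."))
```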

Automatic filters can be used to identify the most dangerous information, preventing users from seeing and sharing it. These automated systems are fast, but they can only be used for certain kinds of content because they aren’t capable of the nuance required for most content moderation.

Crowdsourced approaches like Community Notes can flag potentially harmful content by relying on the knowledge of users. They are slower than automated systems but faster than professional factcheckers.

Professional factcheckers take the most time to do their work, but the analyses they provide are deeper than Community Notes, which are limited to 500 characters. Factcheckers typically work in teams and benefit from shared knowledge. They are often trained to analyze the logical structure of arguments and to identify rhetorical techniques frequently employed in mis- and disinformation campaigns. But their work can’t scale the way Community Notes can. That’s why these three methods are most effective when used together.
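One way to picture how the three layers fit together is as a routing policy ordered by speed: automatic filters act first on the narrow set of clearly dangerous content, crowdsourcing covers the broad middle, and professional factcheckers handle the slow, deep cases. The sketch below is purely illustrative; the signals and thresholds are assumptions, not any platform’s actual policy.

```python
from enum import Enum

class Action(Enum):
    BLOCK = "removed by automatic filter"
    COMMUNITY_NOTE = "eligible for a community note"
    EXPERT_REVIEW = "escalated to professional factcheckers"
    NONE = "no action"

def route(post_is_illegal: bool, crowd_flags: int, estimated_reach: int) -> Action:
    """Route a post through the three moderation layers, fastest first.
    All thresholds are arbitrary placeholders for illustration."""
    if post_is_illegal:                  # automated filter: fast but narrow
        return Action.BLOCK
    if crowd_flags >= 10:                # crowdsourced layer: broad coverage
        if estimated_reach > 1_000_000:  # contested, high-reach posts get
            return Action.EXPERT_REVIEW  # the slower, deeper treatment
        return Action.COMMUNITY_NOTE
    return Action.NONE

print(route(False, crowd_flags=25, estimated_reach=5_000_000))  # Action.EXPERT_REVIEW
```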

Indeed, Community Notes have been found to amplify the work done by factcheckers so that it reaches more users. Another study found that Community Notes and factchecking complement each other because they focus on different types of accounts, with Community Notes tending to analyze posts from large accounts that have high “social influence.” When Community Notes and factcheckers do converge on the same posts, however, their assessments are similar. Yet another study found that crowdsourced content moderation itself benefits from the findings of professional factcheckers.

A path forward

At its heart, content moderation is extremely difficult because it is about how we determine truth—and there is much we don’t know. Even scientific consensus, built over years by entire disciplines, can change over time.

That said, platforms shouldn’t retreat from the difficult task of moderating content altogether—or become overly dependent on any single solution. They must continuously experiment, learn from their failures, and refine their strategies. As it’s been said, the difference between people who succeed and people who fail is that successful people have failed more times than others have even tried.

This content was produced by the Mohamed bin Zayed University of Artificial Intelligence. It was not written by MIT Technology Review’s editorial staff.
