How a Democratic plan to reform Section 230 could backfire

Over the last few years, Section 230 of the 1996 US Communications Decency Act has metamorphosed from a little-known subset of regulations about the internet into a major rallying point for both the right and left. So when Democrats unveiled their attempt to overhaul the law on Friday, the technology world took notice.

There have been other suggestions for how to change Section 230, and many threats from President Trump while he was still in office—but the bill, announced on Friday by Senators Mark Warner, Mazie Hirono, and Amy Klobuchar, appears to be the most significant step yet toward genuinely reforming it.

Many of the changes put forward in the bill, which is known as the SAFE TECH Act, are significant. Right now, the law shields platforms such as Facebook and Twitter from most liability for messages written by their users; the new bill strips many of those protections away. Some of the changes are drawn from existing federal laws: for example, immunity would no longer apply to online speech that violates civil rights or cyberstalking laws. The proposals also remove protection for any kind of paid speech, such as advertising.

This, say supporters, is important and welcome progress.

“There is no legal mechanism that has done more to insulate intermediaries from legal accountability for distributing, amplifying, and delivering unlawful content and facilitating dangerous antisocial conduct,” says Olivier Sylvain, a professor of law at Fordham University, who says he likes the bill—and particularly its potential to regulate online advertising.

When platforms moderate racist, misogynistic, or extremist content, he says, “it is largely due to fear of bad publicity or the occasional pushback they get from wary advertisers.”

But many experts think that the reforms are misguided—and could make the situation far worse.

“What both politicians and the public are getting wrong,” says Eric Goldman, professor of law at Santa Clara University, is that “Section 230 reform won’t stick it to Big Tech. Section 230 reform will deepen the incumbents’ competitive moats to make it even harder for new entrants to compete.”

Goldman is among a large number of legal experts and industry observers who worry that the proposals will not force larger companies to behave better, but will instead crush smaller companies under the weight of complaints and expensive lawsuits.

Critics are concerned that larger companies will simply start filtering out many kinds of legitimate speech to avoid lawsuits, and that the changes aimed at advertising will potentially harm anyone providing paid services, such as web hosting companies or email providers.

“My question for the drafters is: what services do they think will still qualify for Section 230 if this reform goes through; how likely is it that those services will do what the members of Congress want; and will those services be able to afford to remain in business?” asks Goldman. “If we don’t have clear and convincing answers to those questions, then the bill creates potentially dire consequences for the internet we know and love.”

Despite this, the proposals will be impossible to ignore, because Democrats are in effective control of the White House and both houses of Congress. That means the bill has to be taken seriously even if it has flaws, says Berin Szoka, the founder and president of the think tank TechFreedom.

“Everyone gets very frustrated because there are so many stupid takes from Republicans, but this is a much better, more serious attempt to change the law,” he says. “But that doesn’t mean it’s a good idea, or that they’ve thought through what they’re doing.”

“Open the door to loopholes”

Broadly speaking, both major American political parties believe that social media should be better regulated, and that Section 230 is the key to doing so. But their reasoning, and their suggestions for what to do, are very different.

The left thinks changes to the law are required to increase the responsibility of social media platforms for offensive, abusive, or illegal content they host and promote. The right, meanwhile, is largely concerned with claims of censorship, and believes that private companies should be forced into a stance of political neutrality to protect conservative speech. This difference is one reason that both sides appeared to exist in almost entirely different worlds when tech CEOs were hauled before the Senate to testify last year.

The problem of online abuse and misinformation became impossible to ignore over the last year, with harmful online conspiracy theories fueling the pandemic, and political lies threatening the election. That culminated in January, when the violent assault on the US Capitol was fanned by online groups and by Trump himself.

But while these issues are very real, Szoka says some attempts to shift the boundaries of what is protected by Section 230 may backfire. The civil rights carve-out, for example, may be well intentioned, but it could make the situation worse as those who support political extremism try to turn their beliefs into protected speech.

“As far as I can tell, it opens the door to loopholes because it’s not just about federally protected classes like race, sex, and age; it covers states too,” he says. “In some states, political affiliation is already a protected class, and you could get more Republican states passing these kinds of laws to prevent regulation. I can guarantee this is not what the authors of the bill had in mind.”

How should it be fixed?

The proposals are a “recipe for a bit of a mess,” agrees Jonathan Zittrain, a professor of international law at Harvard Law School.

He suggests that it may be more important to come up with common standards “to establish what is or isn’t actionable” to make sure that frivolous cases from ill-intentioned complainants do not get turned into vast, expensive lawsuits.

Joan Donovan, a disinformation expert at Harvard’s Shorenstein Center, who also writes for MIT Technology Review, says that deeper reforms are needed to change the industry’s incentives, rather than merely shifting its liability.

“I think we need to conduct some research to analyze how to break up the tech industry into lower-risk business models,” she says. “It would be like the Glass-Steagall of our time [the law that separated commercial banking from investment banking in the 1930s]. Barriers that limit how tech companies can acquire, use, and sell data alongside clear public interest obligations would also make misinformation-at-scale less of a national security threat.”

While the changes proposed in the SAFE TECH Act are significant, they do not amount to this kind of root-and-branch overhaul.

In the meantime, supporters believe the legislation can force larger companies to confront some of the worst aspects of online behavior, though that seems to be more hope than expectation.

As Sylvain notes, companies like Facebook sell their services to advertisers based on the promise of being able to microtarget users for messaging—even when that may be in direct contravention of anti-discrimination laws.

“Policymakers and others tend to overstate the extent to which online intermediaries are nothing more than platforms for user speech,” he says. “They are not. They are engines for optimizing user engagement which, in turn, fuels their bottom-line interest in advertising revenue. The more salacious and outrageous the content, the more people want to watch, in spite of themselves. The SAFE TECH Act accordingly lifts the immunity for advertisements and other paid content.”

“My hope is that this has the salutary effect of instilling a greater sense of social responsibility in the way in which online intermediaries design their services.”
