July 8, 2020 — For all the criticism big tech companies receive, social media platforms, aided by Section 230, are essential to enabling online speech at the scale seen today, said Cathy Gellis, an attorney who specializes in internet intermediary law.
Gellis made her remarks at the second event in a Broadband Breakfast Live Online series examining facts and fictions about Section 230, sponsored by the Computer & Communications Industry Association.
“If people had to be afraid of what you expressed, then they would have to jump in and possibly not let you express it,” she said. “So if we’re going to have this ideal that people can speak freely, we need to protect the people who enable this expression, or else it’ll be death by duck nibbles and we’ll get sued out of existence.”
TechFreedom Senior Fellow Berin Szóka compared Section 230 to anti-SLAPP laws, which aim to prevent people from using the court system to intimidate others who are exercising their First Amendment rights. The acronym stands for “Strategic Lawsuit Against Public Participation.”
Although the speech in question is already protected by the First Amendment, Szóka explained, anti-SLAPP laws and Section 230 both curb the threat of costly litigation and criminal prosecution.
Szóka also emphasized the limits of Section 230’s protection, pointing out that it shields interactive computer service providers only from liability for content that they did not create, even in part.
“If you’re Backpage.com, and you hire a company to draft sex trafficking ads, you’re not protected by Section 230,” he explained. “If you’re Roommates.com, and you solicit racially discriminatory preferences for housing, you’re not protected by Section 230. If you’re Twitter, and you add your own content next to someone else’s content — like labels on the President’s tweets — those aren’t protected by Section 230.”
Section 230 plays a major role in explaining why social media companies have flourished almost exclusively within the United States, Szóka said.
Certain other countries instead use the “innocent dissemination defense,” which protects platforms only if they are entirely unaware of the content in question.
“That’s exactly what Congress was trying to avoid [by creating Section 230], because if that’s the only safe harbor you have from liability, you have a perverse incentive not to monitor your service and not to try to moderate content to protect users from harmful content,” Szóka said.
Tech journalist Rob Pegoraro pointed out the irony in so-called conservatives calling for increased government regulation of private companies.
“[They’ve] decided it’s worthwhile to say, ‘Look at how unfair Twitter and Facebook are, they’re censoring us, they’re shadow banning,’” he said. “It’s just crazy when you look at how well President Trump has used Facebook and Twitter as platforms.”
One of the most common misconceptions about Section 230, according to Pegoraro, is that it requires platforms to be politically neutral.
“There is no way you get that from a reading of the law,” he said. “But I have seen politicians [like] Senator Josh Hawley of Missouri, who has a degree from Yale — I hear they’re pretty good at teaching law — who has said you must be politically neutral or you don’t get these protections, which is bananas.”
In addition, “bad faith” behavior is hard to define, Pegoraro said. While many have claimed that social media platforms discriminate against Republicans, Pegoraro said he saw no evidence of that.
On the other side of the aisle, “it could be the issue that Facebook says it doesn’t want to be an arbiter of truth, but in fact, it is quite happy to kick people off of Facebook if they happen to be activists in Tunisia or Palestine, which I think is a legit complaint,” he said.
Democratic presidential nominee Joe Biden has previously expressed support for repealing Section 230, citing the rampant spread of potentially dangerous misinformation online.
“The way in which blatant falsehood is spreading on the internet is a real problem,” Szóka said. “But I have yet to see a legal solution to that problem, because at the end of the day, these services, whatever their role is in moderating content, they’re not courts, they’re not grand juries, they don’t have the capacity to investigate.”
“Forcing them by law to make decisions about taking down certain content or certain users not only raises obvious First Amendment problems, but is hopelessly impossible,” he added.
Gellis described Section 230 as being “all carrot, no stick,” arguing that this design matters because it frees companies to strike a balance: leaving up as much good content as possible while taking down as much bad content as possible.
In spite of the law’s essential role, she said, politicians have begun pointing to it as a scapegoat for all online problems, when they should instead be focusing on creating more support for the regulatory ecosystem as a whole.
“The problem is [that Section 230 is] kind of the only good law we really have in the internet world,” she said.
Szóka pointed out that much of the controversy over Section 230 is relatively recent.
“2016 is probably the tipping point [of] when it went from being pretty clear among anybody who was in internet policy that we just wouldn’t have the user centric internet without Section 230 to Section 230 becoming the political football that it is today,” he said.
Broadband Breakfast’s series on Section 230 will conclude next Wednesday with a discussion on public input on platform algorithms, considering the role of transparency and feedback in information technology.