Monday, January 11, 2021

Section 230

https://www.eff.org/deeplinks/2020/12/section-230-good-actually


Section 230 only shields an intermediary from liability that already exists. If speech is protected by the First Amendment, there can be no liability either for publishing it or republishing it, regardless of Section 230. As the Supreme Court recognized in the Reno v. ACLU case, the First Amendment’s robust speech protections fully apply to online speech. 


Section 230 was included in the CDA to ensure that online services could decide what types of content they wanted to host. Without Section 230, sites that removed sexual content could be held legally responsible for that action, a result that would have made services leery of moderating their users’ content, even if they wanted to create online spaces free of sexual content. The point of 230 was to encourage active moderation to remove sexual content, allowing services to compete with one another based on the types of user content they wanted to host. 


Moreover, the First Amendment also protects the right of online platforms to curate the speech on their sites—to decide what user speech will and will not appear on their sites. 


So Section 230’s immunity for removing user speech is perfectly consistent with the First Amendment. This is apparent given that prior to the Internet, the First Amendment gave non-digital media, such as newspapers, the right to decide what stories and opinions they would publish.


No, online platforms are not “neutral public forums.”


Nor should they be. Section 230 does not say anything like this. And trying to legislate such a “neutrality” requirement for online platforms—besides being unworkable—would violate the First Amendment. The Supreme Court has confirmed the fundamental right of publishers to have editorial viewpoints. 


It’s also foolish to suggest that web platforms should lose their Section 230 protections for failing to align their moderation policies to an imaginary standard of political neutrality. One of the reasons why Congress first passed Section 230 was to enable online platforms to engage in good-faith community moderation without fear of taking on undue liability for their users’ posts. 


In two important early cases over Internet speech, courts allowed civil defamation claims to proceed against Prodigy but not against CompuServe; because Prodigy deleted some messages for “offensiveness” and “bad taste,” a court reasoned, it could be treated as a publisher and held liable for its users’ posts.


Former Rep. Chris Cox recalls reading about the Prodigy opinion on an airplane and thinking that it was “surpassingly stupid.” That realization led Cox and then-Rep. Ron Wyden to introduce the Internet Freedom and Family Empowerment Act, which would later become Section 230.


In practice, creating additional hoops for platforms to jump through in order to maintain their Section 230 protections would almost certainly result in fewer opportunities to share controversial opinions online, not more: under Section 230, platforms devoted to niche interests and minority views can thrive. 


Print publishers and online services are very different, and are treated differently under the law, as they should be.


It’s true that online services do not have the same liability for their content that print media does. Unlike newspapers and other print publications, which are legally responsible for the content they print, online publications are relieved of that liability by Section 230. The major distinction the law creates is between online and offline publication, a recognition of the inherent differences in scale between the two modes of publication. (Despite claims otherwise, there is no legal significance to labeling an online service a “platform” as opposed to a “publisher.”)


But an additional purpose of Section 230 was to eliminate any distinction between those who actively select, curate, and edit the speech before distributing it and those who are merely passive conduits for it. Before Section 230, courts effectively disincentivized platforms from engaging in any speech moderation. Section 230 provides immunity to any “provider or user of an interactive computer service” when that “provider or user” republishes content created by someone or something else, protecting both decisions to moderate it and those to transmit it without moderation. 

The misconception that platforms can somehow lose Section 230 protections for moderating users’ posts has gotten a lot of airtime. This is false.

Section 230 allows sites to moderate content how they see fit.

And that’s what we want: a variety of sites with a plethora of moderation practices keeps the online ecosystem workable for everyone. The Internet is a better place when multiple moderation philosophies can coexist, some more restrictive and some more permissive.

