There is a case moving through the courts right now that could decide, once and for all, which entities qualify for protection under Section 230 of the Communications Decency Act (CDA), codified at 47 U.S.C. § 230.
Gonzalez v. Google addresses the years-long debate over whether social media platforms like Facebook and Twitter become publishers the moment they start censoring content. The argument is that active content moderation negates platform status, turning an entity into a publisher that is no longer protected under Section 230.
Big Tech platforms have repeatedly argued that they are not publishers because they simply provide a service. But all that “fact checking” and other interference with content does, in fact, make them publishers – or at least that is what the case argues.
The Supreme Court is scheduled to hear Gonzalez v. Google and hopefully settle this matter in a just and fair way. Justice Clarence Thomas has already made his voice heard on the matter, stating:
“… Section 502 of the Communications Decency Act makes it a crime to ‘knowingly … display’ obscene material to children, even if a third party created that content … It is odd to hold, as courts have, that Congress implicitly eliminated distributor liability in the very Act in which Congress explicitly imposed it.”
In other words, it is erroneous for Big Tech to claim that Section 230 protects it from liability when Section 502 explicitly imposes that liability. It all comes down to the distinction between knowingly (actively) and unknowingly (passively) distributing materials as a content provider.
If Big Tech is going to moderate and censor content, it must be made fully liable for said content as a publisher
Section 230 was co-authored by Senator Ron Wyden, whose stated intent was to provide platforms with a “sword and a shield” – both defensive and offensive protection against litigation.
“Section 230 has two distinct protections, a defensive and offensive protection. 230(c)(1) is the defensive ‘Treatment of Publisher or Speaker’ protection (i.e., the shield) and the 230(c)(2) is the offensive ‘Civil Liability’ protection (i.e., the sword),” writes Jason Fyk for Human Events.
“Naturally, a ‘shield’ provides passive (i.e., platform) protection from the actions of another (e.g., ‘the publisher’) while the ‘sword’ provides limited authority for the provider’s or user’s own actions taken against another (e.g., as ‘a publisher’ to restrict materials).”
Fyk goes into great detail explaining the caveats of Section 230 and the CDA, explaining that the courts and the “vast majority of people” have it all wrong in exempting Big Tech from culpability for the content that is both published and censored on social media platforms.
The Ninth Circuit Court of Appeals, Fyk further explains, got it wrong in 2009 when it ruled that Section 230(c)(1) “shields from liability all publication decisions, whether to edit, to remove, or to post, with respect to content generated entirely by third parties.”
Fyk actually sued Facebook over this very issue, challenging the Ninth Circuit’s ruling. That suit pertained to Fyk’s own content, which Facebook censored – an action that, by his argument, made Facebook a “publisher.”
“I was never holding Facebook accountable for my publishing decisions,” Fyk notes. “I was holding them accountable for their own publishing actions.”
“Courts are misapplying (c)(1)’s passive protection, when the shield is being actively used as an alternative (i.e., secondary) offensive weapon, rendering the purpose of the sword, superfluous. (For a visual representation of this circular process, see: Section 230’s Irreconcilable Loop below.)”
Check out Fyk’s full assessment of Section 230 and the current case before the Supreme Court.
The latest news about Big Tech censorship can be found at Censorship.news.