Alexandra Zimmer | Are Algorithms Liable Under §230?

BACKGROUND

Federal courts have held for decades that interactive computer services cannot be treated as publishers when determining their liability for content posted by third parties. This area of law developed in the infancy of the internet, and technology has advanced far faster than the law has.

The relevant statute in these cases is 47 U.S.C. § 230 (part of the Communications Decency Act of 1996), which states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In lay terms, online platforms (think YouTube, Facebook, Twitter, etc.) that host content created and/or shared by third-party users cannot be held liable for that content unless the provider is directly responsible for creating it. Soon after the statute’s passage, one of its recognized benefits was the protection from liability it gave interactive computer service providers that published actionable material supplied by third parties.

As the internet has developed and interactive computer services have come to dominate communication, questions have grown about how this statute applies and about the role that covered providers play in disseminating third-party user material. Perhaps most notable is the impact of the automated algorithms that interactive computer services use to recommend content to site users.

The Supreme Court has denied certiorari in multiple cases that sought clarification of how far Section 230 goes in limiting or eliminating liability for covered providers. It has, however, recently granted certiorari in Gonzalez v. Google LLC, a case from the Ninth Circuit that once again seeks clarification on these questions.

ISSUE

Do the automated recommendation algorithms used by interactive computer services render them content creators rather than publishers, thereby denying them the protection of Section 230 of the Communications Decency Act of 1996?

THE SPLIT

First, Third, Fourth, Sixth, and Tenth Circuits

The First, Third, Fourth, Sixth, and Tenth Circuits have all adopted the “traditional editorial functions standard,” sometimes called the “material contributor test.” Under this standard, the Communications Decency Act immunizes providers from liability for activities that fall within a publisher’s traditional editorial functions, such as deciding whether to publish, withdraw, postpone, or alter content received from a third party for distribution on the service. The result is broad immunity for many interactive computer services that disseminate third-party content.

In 1997, the Communications Decency Act did exactly what it was built to do in Zeran v. Am. Online, Inc., 129 F.3d 327 (4th Cir. 1997): it barred claims against an interactive computer service for allowing defamatory statements to be posted. Both the plain language and the legislative history of the statute clearly applied to that case. The Tenth Circuit followed this editorial-function standard in Ben Ezra, Weinstein, & Co. v. Am. Online, Inc., 206 F.3d 980 (10th Cir. 2000), upholding summary judgment for the defendant interactive computer service because the plaintiff offered no evidence that the defendant had created the defamatory information rather than merely published it.

The Sixth Circuit found that the protections of the Communications Decency Act extend even to interactive computer service providers who actively encourage the publication of defamatory material. Jones v. Dirty World Entertainment Recordings LLC, 755 F.3d 398, 407 (6th Cir. 2014). In Jones, the defendant operated a website that allowed users to upload comments, photographs, and videos, from which the owner-operator chose posts to publish. The Sixth Circuit held that the defendant, despite its active involvement in selecting material for publication, was still protected from liability by the Communications Decency Act.

Other Circuit Courts have applied this standard outside of defamation cases. The First Circuit found that interactive computer services cannot be held liable to minor victims of sex trafficking for posting escort advertisements, supplied by third-party information content providers, that featured those minors. Jane Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12, 20 (1st Cir. 2016). Because Backpage.com had merely published the advertisements, the minors’ claims were dismissed.

The Third Circuit extended these protections to claims that an interactive computer service failed to provide adequate warnings about products sold by third-party sellers on its platform. Oberdorf v. Amazon.com, Inc., 930 F.3d 136 (3d Cir. 2019). In Oberdorf, the plaintiff was permanently blinded by a retractable dog leash she had purchased from a third party via Amazon.com. The court held that sellers who use websites such as Amazon.com to sell their merchandise are the information content providers under the Communications Decency Act; accordingly, the interactive computer service, here Amazon.com, could not be found liable under the Act’s protection of a publisher’s editorial functions.

Put simply, these Circuit Courts have limited Section 230’s protections to the editorial functions of interactive computer services.

Second and Ninth Circuits

Over the past few years, the Second and Ninth Circuits have deviated from this “traditional editorial functions” standard, expanding the immunities of the Communications Decency Act to protect interactive computer services from liability even where the provider has used an algorithm that promotes certain third-party material. These cases have primarily arisen from allegations that providers’ algorithms push content promoting illegal activity to users.

In Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019), the Second Circuit held that Facebook’s algorithms, which allegedly pushed content posted by members of the Hamas terrorist organization to users who ultimately harmed or killed the petitioners’ next of kin, fell within the protections afforded publishers under the Communications Decency Act so long as the content itself was willingly provided by a third party. Because Facebook did not develop the content, the court found it was protected from liability.

The dissenting opinion in Force noted that extending Section 230 protections to such algorithms departed from the Circuit Courts’ long practice of applying the Communications Decency Act only to traditional editorial functions. The dissent further argued that any such expansion of the statute should be left to Congress because it strays from the statute’s original intent.

Similarly, in Dyroff v. Ultimate Software Group, Inc., 934 F.3d 1093 (9th Cir. 2019), the Ninth Circuit applied Section 230’s protections to website algorithms that recommended other users and chat groups to site users, including groups and users engaged in discussions about illicit drug use.

More recently, in Gonzalez v. Google LLC, 2 F.4th 871 (9th Cir. 2021), the Ninth Circuit ruled that Section 230 of the Communications Decency Act barred claims against Google under the Anti-Terrorism Act after the petitioner’s daughter was killed by terrorists who, according to the complaint, were enabled by Google’s algorithms promoting ISIS videos and advertisements on YouTube. The claims against Google were dismissed as barred by Section 230.

The dissent in Gonzalez argued that Section 230’s immunity did not extend to these claims and that the statute was never intended, nor does its language provide, to immunize harm knowingly caused by the conduct of interactive computer services. Like the dissent in Force, the Gonzalez dissent urged the majority to leave any such expansion of the Communications Decency Act to the legislature.

Interestingly, the Ninth Circuit decided Taamneh v. Twitter, Inc., 2021 U.S. App. LEXIS 38153 (9th Cir. 2021), at the same time it decided Gonzalez. Though not explicitly a Section 230 case in the way Gonzalez is, Taamneh is practically identical, and it too has been granted certiorari review for the Supreme Court’s upcoming term. In Taamneh, the Ninth Circuit decided that Twitter could be held liable for damages resulting from terrorist acts carried out via its platform.

LOOKING FORWARD

The responsibility of Big Tech companies and social media outlets for who uses their platforms, and for what, has not been addressed by the courts nearly as thoroughly as the industry has grown. With certiorari granted in Gonzalez, the Supreme Court has the opportunity to finally determine the scope of Section 230’s protections.

If the Court rules in favor of the petitioners, the protections of Section 230 of the Communications Decency Act will return to a strict application to traditional editorial functions, opening the door to claims against interactive computer services for harm caused by third-party content specifically promoted through the providers’ algorithms.

Alternatively, the Supreme Court may rule in favor of Google, LLC and preserve the broad interpretation of Section 230, leaving an absolute bar on claims against providers no matter how harmful the material ultimately is. Until then, Section 230 remains a contested area of the law, and claims against interactive computer services that use algorithms to target site users and promote third-party content to them will likely continue to be barred.

Alexandra Zimmer

Find Alexandra on LinkedIn!
