Last year, an ambiguous 9th Circuit ruling in Fair Housing Council v. Roommates.com muddied the waters of Section 230 immunity under the Communications Decency Act by suggesting that if a provider selects which content to allow, or edits user-created content, it could lose immunity for content it misses, and thus be held liable for it. A new en banc 9th Circuit ruling clarifies the earlier decision and establishes that neither a) removing objectionable content, nor b) deciding not to post certain content, causes a provider to lose CDA immunity for user content that is posted.

The original ruling raised questions for virtual world providers, particularly Linden Lab, which runs Second Life. Linden Lab was, at that time, in the middle of instituting a policy of removing certain “broadly offensive” user-created content from its world — the company’s first real attempt to regulate any aspect of Second Life. Since then, Linden Lab has clarified that policy, and has also taken it upon itself to remove casino equipment, banking equipment, objects that allegedly infringe trademarks and copyrights, references to sexual ageplay and gambling in user profiles and place locations, the words “Lolita” and “Poker” from classified ads (regardless of context), and much more. Each of these decisions, under the previous ruling, potentially exposed Linden Lab to increased liability.

Linden Lab was, presumably, gambling that the 9th Circuit would ultimately clarify this ruling in favor of immunity for providers who edit and filter some content. That gamble appears to have been a good one.

Under the new ruling, providers are explicitly not held responsible for material they miss when they edit for objectionable content. This means that the gap in immunity last year’s ruling seemed to leave for providers who “edit” content has been closed, and my analysis last year that the former ruling could leave Linden Lab on the hook for editing user content to remove references to sexual ageplay is no longer a concern. Assuming they meet all the other requirements of the law, virtual world providers are not on the hook for any content they miss when they edit user profiles — and even user-owned land and objects — to remove objectionable content.

Details of the Ruling

The ruling actually finds Roommates.com potentially liable for some aspects of the content on its site (drop-down boxes encouraging users to state a sexual orientation preference for their roommate, possibly in violation of the Fair Housing Act), but more interestingly, the court takes the opportunity to make the limits of the ruling explicitly clear:

In an abundance of caution, and to avoid the kind of misunderstanding the dissent seems to encourage, we offer a few examples to elucidate what does and does not amount to “development” under section 230 of the Communications Decency Act: [...] A website operator who edits user-created content—such as by correcting spelling, removing obscenity or trimming for length — retains his immunity for any illegality in the user-created content, provided that the edits are unrelated to the illegality. However, a website operator who edits in a manner that contributes to the alleged illegality — such as by removing the word “not” from a user’s message reading “[Name] did not steal the artwork” in order to transform an innocent message into a libelous one—is directly involved in the alleged illegality and thus not immune.


[T]here will always be close cases where a clever lawyer could argue that something the website operator did encouraged the illegality. Such close cases, we believe, must be resolved in favor of immunity, lest we cut the heart out of section 230 by forcing websites to face death by ten thousand duck-bites, fighting off claims that they promoted or encouraged—or at least tacitly assented to—the illegality of third parties.


[A]ny activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune under section 230.

This ruling makes it explicitly clear that editing for obscenity or other objectionable material and excluding material submitted for posting does not impact a provider’s immunity.


Last year, I took a fairly hard stance against Linden Lab for its policy encouraging residents to report “broadly offensive” content for removal, partly because I felt the policy potentially left the company open to liability under the earlier ruling, and partly because it was inarticulate and ambiguous (productions of Roots and The Color Purple could not have occurred in Second Life under the policy as written). After bringing in what appears to be a solid in-house legal team later in the year, Linden Lab fixed the ambiguities, which alleviated many of my concerns. More importantly, this ruling clarifies Linden Lab’s immunity under the CDA, and makes it clear that my concern that the company might be exposing itself to increased liability under the former ruling was unfounded.

The 9th Circuit got this right. Holding a provider accountable for content it misses when it edits for obscenity discourages editing for obscenity, and runs counter to the legislative history of Section 230, which was explicitly crafted “to overrule … decisions which have treated such providers … as publishers or speakers of content that is not their own because they have restricted access to objectionable material.”

There may or may not be good reasons to object to Linden Lab’s stance on various roleplaying groups, political movements, and lifestyles — this post is not addressing that — but under this ruling, the potential for increased legal liability associated with editing user-created content to remove objectionable material can no longer reasonably be said to be among those reasons.
