from the a-blumenthal/hawley-specialty dept
Over the past few days I’ve been hearing lots of buzz claiming that either today or tomorrow Senator Josh Hawley is going to push to “hotline” the bill he and Senator Richard Blumenthal introduced months back to explicitly exempt AI from Section 230. Hotlining a bill is basically an attempt to move it quickly by seeking unanimous consent (i.e., no one objecting) to pass it.
Let me be extremely explicit: this bill would be a danger to the internet. And that’s even if you hate both AI and Section 230. We’ve discussed this bill before, and I explained its problems then, but let’s do this again, since there’s a push to sneak it through.
First off, there remains an ongoing debate over whether or not Section 230 actually protects the output of generative AI systems. Many people say it should not, arguing that the results come from the company in question, and thus are not third-party speech. Lawyer Jess Miers made the (to me) extremely convincing case for why this argument is wrong.
In short, the argument is that courts have already determined that algorithmic output derived from content provided by others is protected by Section 230. This has been true in cases involving things like automatically generated search snippets and autocomplete. And that’s kind of important, or we’d lose algorithmically generated summaries of search results.
From there, you now have to somehow distinguish “generative AI output” from “algorithmically generated summaries” and there’s simply no limiting principle here. You’re just arbitrarily declaring some algorithmically generated content “AI” and some of it… not?
I remain somewhat surprised that Section 230’s authors, Ron Wyden and Chris Cox, have enthusiastically supported the claim that 230 shouldn’t protect AI output. It seems wrong on the law and wrong on the policy, as noted above.
Still, Senators Hawley and Blumenthal introduced this bill that would make a mess of everything, because it’s drafted so stupidly and so poorly that it should never have been introduced, let alone be considered for moving forward.
First of all, if Wyden and Cox and those who argue that 230 doesn’t apply to AI output are right, then this bill isn’t even needed in the first place, because the law already wouldn’t apply.
But, more importantly, the way the law is drafted would basically end Section 230, but in the dumbest way possible. First, the bill defines generative AI extremely broadly:
GENERATIVE ARTIFICIAL INTELLIGENCE.—The term ‘generative artificial intelligence’ means an artificial intelligence system that is capable of generating novel text, video, images, audio, and other media based on prompts or other forms of data provided by a person.’
That’s the entirety of the definition. And it could apply to all sorts of technology. Does autocomplete meet that definition? Probably. Arguably, spellchecking and grammar checking could as well.
But, again, even if you could tighten up that definition, you’d still run into problems. Because the bill’s exemption is insanely broad:
‘‘(6) NO EFFECT ON CLAIMS RELATED TO GENERATIVE ARTIFICIAL INTELLIGENCE.—Nothing in this section (other than subsection (c)(2)(A)) shall be construed to impair or limit any claim in a civil action or charge in a criminal prosecution brought under Federal or State law against the provider of an interactive computer service if the conduct underlying the claim or charge involves the use or provision of generative artificial intelligence by the interactive computer service.’’;
We need to break down the many problems with this. Note that the exemption from 230 here is not just on the output of generative AI. It’s if the conduct “involves the use or provision” of generative AI. So, if you write a post, and an AI grammar/spellchecker suggests edits, then the company is no longer protected by Section 230?
Considering that AI is currently being built into basically everything, this “exemption” will basically eat the entire law, because increasingly all content produced online will involve “the use or provision” of generative AI, even if the content itself has nothing to do with the service provider.
In short, this bill doesn’t just strip 230 protections from AI output; in effect, it strips 230 from any company that offers AI in its products. Which is basically a set of internet companies rapidly approaching “all of them.” At the very least, plaintiffs will sue and claim that the content had some generative AI component just to avoid a 230 dismissal and drag the case out.
Then, because you can tell an AI-based system to do something that violates the law, you can automatically remove all 230 protections from the company. Over at R Street, they give an example where they deliberately convince ChatGPT to defame Tony Danza.
And, under this law, doing so would open up OpenAI to liability, even though all it was doing was following the user’s instructions.
Then there’s a separate problem here. It creates a massive state law loophole. As we’ve discussed for years, for very good reasons, Section 230 preempts any state laws that would undermine it. This is to prevent states from burdening the internet with vexatious liability as a punishment (something that is increasingly popular across the political spectrum as both major political parties seek to punish companies for ideological reasons).
But, notice that this exemption deliberately extends to claims brought under “State law.” That would open the floodgates to terrible state laws that introduce liability for anything related to AI and, again, effectively strip protections from any company offering any product that includes AI. It would enable a ton of mischief from politically motivated states.
The end result would harm a ton of internet speech, because when you add liability, you get less of the thing you add liability to. Companies would be way less open to hosting any kind of content, especially content that has any algorithmic component, as it opens them up to liability under this law.
It would also make so many tools too risky to offer. Again, this could include things as simple as spelling and grammar checkers, as offering such tools might strip the company, and any content they touch, of any kind of 230 protection.
I mean, you could even see scenarios like this: if someone posted something defamatory to Facebook that included an unrelated generative AI image, the defamed party could now sue Meta, rather than the person doing the defaming, because the use of generative AI in the post would strip Meta of its 230 protections.
So, basically, under this law, anyone who wants to get any website in legal trouble just has to post something defamatory along with some generative AI content, and the company loses all 230 protections for that content. At the very least, this would lead companies to be quite wary of allowing any content that is even partially generated by AI on their sites, and it’s difficult to see how one would even police that.
Thus, really, you’re just adding liability and stripping 230 from the entire internet.
Again, even if you think AI is problematic and 230 needs major reform, this is not the way to do it. This is not a narrowly targeted piece of legislation. It’s a poorly drafted sledgehammer to the open internet, at least in the US. Section 230 was key to the US becoming a leader in the original open internet, and American companies lead the internet economy in large part because of it. As we enter the generative AI era, this bill would basically hand the next technology revolution to any other country that wants it, by adding ruinous liability to companies operating in the US.
Filed Under: generative ai, josh hawley, liability, richard blumenthal, section 230