A hot potato: Concerns over the sort of material AI can generate could be a problem for those behind the systems if new legislation is passed exempting artificial intelligence from Section 230. The bill would hold the likes of OpenAI and Google responsible for problematic, AI-generated content along with those who create it.
Republican senator Josh Hawley and his Democratic counterpart Richard Blumenthal introduced the No Section 230 Immunity for AI Act. As the name suggests, the bipartisan legislation, if passed, would ensure AI companies cannot benefit from Section 230 protections, making them liable for false and potentially defamatory content created by LLMs and other generative AI systems.
"AI companies should be forced to take responsibility for business decisions as they're developing products – without any Section 230 legal shield," said Blumenthal.
One of the areas highlighted in the press release is deepfakes. The ability to digitally edit someone's image to place them in compromising or sexually explicit pictures and videos has existed for years, but advancements in AI have made deepfakes look more convincing than ever. The FBI recently issued a warning over the increasing number of sextortionists creating explicit deepfakes from people's social media images.
Section 230 of the Communications Decency Act, passed in 1996, says that an "interactive computer service" can't be held liable for third-party content because it isn't the publisher of that material. It means that, barring a few exceptions, companies behind social media platforms like Facebook and Twitter, as well as search engines and forums, can't be sued over user-generated content.
"We can't make the same mistakes with generative AI as we did with Big Tech on Section 230," said Senator Hawley. "When these new technologies harm innocent people, the companies must be held accountable. Victims deserve their day in court and this bipartisan proposal will make that a reality."
The No Section 230 Immunity for AI Act would amend Section 230 by adding a clause that strips immunity from AI companies in civil claims or criminal prosecutions involving the use or provision of generative AI. It would also allow Americans harmed by generative AI models to sue AI companies in federal or state court.
"AI companies should be forced to take responsibility for business decisions as they're developing products – without any Section 230 legal shield," said Blumenthal. "This legislation is the first step in our effort to write the rules of AI and establish safeguards as we enter this new era. AI platform accountability is a key principle of a framework for regulation that targets risk and protects the public."
At the start of 2020, Joe Biden, then a presidential candidate, said he wanted to make Facebook liable for users' content by revoking Section 230. A Google lawyer later claimed the internet would become a "horror show" without it. He'll be happy that Section 230 is still around and has managed to fight off legal attempts to narrow its scope: the Supreme Court recently declined to hear a bid to sue Reddit over child pornography on the platform.
Internet companies can't always hide behind Section 230, though, as Google found out after refusing to remove links that falsely claimed a man was a pedophile.
There's a long way to go before the bill could become law, of course. There have been repeated calls for governments to introduce AI regulation, even from OpenAI boss Sam Altman, but losing Section 230 protections would have monumental consequences for AI companies.