Senate bill would hold AI companies responsible for generated content by ending Section 230 protections

midian182

A hot potato: The sort of material AI can generate could become a problem for those behind the systems if new legislation stripping artificial intelligence of Section 230 protections is passed. The bill would hold the likes of OpenAI and Google responsible for problematic, AI-generated content, along with those who create it.

Republican senator Josh Hawley and his Democratic counterpart Richard Blumenthal have introduced the No Section 230 Immunity for AI Act. As the name suggests, the bipartisan legislation, if passed, would ensure AI companies cannot benefit from Section 230 protections, making them liable for false and potentially defamatory content created by LLMs and other generative AI.

"AI companies should be forced to take responsibility for business decisions as they're developing products – without any Section 230 legal shield," said Blumenthal.

One of the areas highlighted in the press release is deepfakes. The ability to digitally edit someone's image so they appear in compromising or sexually explicit pictures and videos has existed for years, but advancements in AI have made deepfakes look more convincing than ever. The FBI recently issued a warning over the increasing number of sextortionists creating explicit deepfakes from people's social media images.

Section 230 of the Communications Decency Act, passed in 1996, says that an "interactive computer service" can't be held liable for third-party content because it isn't the publisher of that material. It means that, barring a few exceptions, companies behind social media platforms like Facebook and Twitter, as well as search engines and forums, can't be sued for user-generated content.

"We can't make the same mistakes with generative AI as we did with Big Tech on Section 230," said Senator Hawley. "When these new technologies harm innocent people, the companies must be held accountable. Victims deserve their day in court and this bipartisan proposal will make that a reality."

The No Section 230 Immunity for AI Act would amend Section 230 by adding a clause that strips immunity from AI companies in civil claims or criminal prosecutions involving the use or provision of generative AI. It would also allow Americans harmed by generative AI models to sue AI companies in federal or state court.

"AI companies should be forced to take responsibility for business decisions as they're developing products – without any Section 230 legal shield," said Blumenthal. "This legislation is the first step in our effort to write the rules of AI and establish safeguards as we enter this new era. AI platform accountability is a key principle of a framework for regulation that targets risk and protects the public."

At the start of 2020, then-presidential candidate Joe Biden said he wanted to make Facebook liable for users' content by revoking Section 230. A Google lawyer later claimed the internet would become a "horror show" without it. He'll be happy that Section 230 is still around and has managed to fight off legal attempts to narrow its scope: the Supreme Court recently declined to hear a bid to sue Reddit over child porn on the platform.

Internet companies can't always hide behind Section 230, though, as Google found out after refusing to remove links that falsely claimed a man was a pedophile.

There's a long way to go before the bill could become law, of course. There have been repeated calls for governments to introduce AI regulation, even from OpenAI boss Sam Altman, but losing Section 230 protections would have monumental consequences for AI companies.


 
It's a lose-lose game for us all. It will stall progress as far as content creation goes, which is bad, but it will not stall AI development, feared by many, which is also bad. It will give lawyers plenty of fodder, and I'm sure they're happy; not so much the rest of us.
 
As VitalyT stated, this is a lose-lose. I do not work for any of the big tech companies, so I do not know how difficult it is to police their content. As a developer, I know that I do not catch all the problems and hope that if someone comes across a problem, they will tell me.

The problem I see with holding them accountable for what others do is: should we not then hold other companies accountable too? For example, in a DUI, the alcohol and automobile manufacturers should be held accountable. In the case of a murder committed with object "X", that object's manufacturer should be held accountable. The same anytime a person drowns in a pool, and so on. I am sure you guys could think of even sillier examples.
 
Yup. It just sounds like a "we're totally doing something good, think favorably of us". And then the majority of people (who don't even understand what this will do) will...
 
Sure, like we hold gun companies accountable for what someone does with their weapons. Returdicans would rather die than highlight that hypocrisy.
 

IMO, that would not be a bad thing: stalling AI development.

They all need some more time in the oven before they're ready for prime time; they're ~95% of the way there, but that last 5% is pretty significant. Even ignoring malicious use, AI still generates a lot of 'confidently incorrect' responses to genuine, good-faith inquiries.

Meanwhile, losing Section 230 protection would effectively force AI algorithms back into university laboratories and onto hobbyist computers, where development will be 'pure' instead of chasing profit before the technology is 100% ready.

No, the real challenge will be where to draw the lines. Does autocorrect count as "AI"? What about Grammarly and similar services? What about publication - if you generate 'bad' content, how responsible are you for it when it comes to 'containing' it (should AI software be organized in such a way that deleting content is the default behavior, and saving requires a deliberate user action)?
 