Microsoft urges Congress to enact laws against deepfake misuse

midian182

In brief: As AI-generated deepfakes become more realistic and convincing every day, Microsoft has called on Congress to pass laws protecting against their use for election manipulation, crimes, and abuse. The plea comes just a few weeks after the US Senate introduced a bill to create a legal framework for ethical AI development.

Brad Smith, Microsoft's President and Vice Chairman, writes that while the tech sector and non-profit groups have taken steps to address the problems posed by deepfakes, especially when used for fraud, abuse, and manipulation against kids and seniors, laws need to evolve to combat these issues.

Smith urged the US to pass a comprehensive deepfake fraud statute that would give law enforcement a legal framework for prosecuting criminals who use the technology to steal from everyday Americans. Smith also wants lawmakers to update federal and state laws on child sexual exploitation, abuse, and non-consensual intimate imagery to include AI-generated content.

Microsoft also wants Congress to require AI system providers to use tools that label synthetic content, which it says is essential to build trust in the information ecosystem.

Earlier this month, the US Senate introduced new legislation called the "Content Origin Protection and Integrity from Edited and Deepfaked Media Act" (COPIED Act). The act is designed to outlaw the unethical use of AI-generated content and deepfake technology and allows victims of sexually explicit deepfakes to sue their creators.

Although deepfakes have been around for years, advances in the technology behind them have brought them into the spotlight recently. The explicit images of Taylor Swift shared online in January, seen by over 27 million people on X alone, led to lawmakers calling for changes.

In the UK, explicit deepfakes were made illegal under the Online Safety Act in October 2023. PornHub, meanwhile, has banned deepfakes since 2018.

The political implications of deepfakes have proven to be a warranted concern during this election year. The phone calls using a cloned version of Joe Biden's voice in January led to the President calling for AI voice impersonations to be banned. Elsewhere, Microsoft has issued several warnings about China using generative AI to try to influence the election.

This week saw Elon Musk repost a digitally altered video of Kamala Harris in which her deepfaked voice says she is the "ultimate diversity hire" and that President Biden is senile. Many say the video violates X's own rules on posting manipulated content.

They ONLY care about their margins.
Tens of thousands of developers lost their jobs because GitHub, which was free to use, was bought by Micro-S@t@n. NOT because of the kindness in their hearts, but because they were able to train their AI on the back of the hard work of millions of others.
ALL the corporations are the same. NONE of the corporations are your friend.
I'd rather trust french kissing a royal cobra than Microsoft/Google/Apple/Amazon/whatever.
 
One thing I expect from all credible platforms is an automatic label on every image of a real face that has been manipulated by AI.
I personally want as few regulations as possible so that smaller startups have a chance to create something amazing.
But telling a viewer they are seeing a fake image is a must.
 
While it's a good idea, one has to wonder what our legislators are doing besides playing political games. THEY are the ones that are supposed to be on the cutting edge of this sort of legislation ... time to vote the bums out, regardless of party.
 