US lawmakers weigh in on deepfakes after explicit Taylor Swift images are shared online

Bubbajim

Staff
TL;DR: World-famous pop star Taylor Swift has become the latest victim of deepfake pornography this week, after AI-generated images of her were viewed tens of millions of times on social media platforms. With that, deepfakes are back in the legislative consciousness. Congressional representatives and even the White House have now weighed in on the matter.

Explicit images of singer-songwriter Taylor Swift, 34, were shared on X this week, garnering over 27 million views and 260,000 "likes" before the account that posted the images was shut down. That did little to stop the spread, though, as the images have continued to circulate and have reportedly been viewed over 40 million times.

Responding to the incident, X has been actively removing the images and has disabled searches for Taylor Swift on the platform to try to contain the spread. In a statement, it said, "We're closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed."

But on-platform moderation may not be enough. Now, members of the US Congress and even the White House have weighed in on the issue. US Representative Joe Morelle has said that deepfake images "can cause irrevocable emotional, financial, and reputational harm – and unfortunately, women are disproportionately impacted."

Democratic Representative Yvette Clarke said, "what's happened to Taylor Swift is nothing new. For years, women have been targets of deepfakes [without] their consent. And [with] advancements in AI, creating deepfakes is easier & cheaper."

On Friday, White House Press Secretary Karine Jean-Pierre called the images "alarming" and said in a statement, "While social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation, and non-consensual, intimate imagery of real people."

As Representative Clarke noted, this is not a new issue. But with such a high-profile target, the problem has cut through the public discourse and may now become a target for future legislation.

In the UK, explicit deepfakes were made illegal under the Online Safety Act in October 2023. Pornhub, a major online provider of adult media, has banned deepfakes on its platform since 2018.

Ms. Swift has yet to comment publicly on the incident.

Whether or not this latest high-profile incident leads to legislative changes, it's clear that AI content is already causing issues for lawmakers. Just this week we reported on the first known instance of AI-generated messaging being used to suppress voter turnout, after a robocall imitating President Biden urged New Hampshire residents not to vote.


 
When our technology develops a million times faster than our sense of morality, we are guaranteed to go to hell, full stop; it's just a matter of time. The way things are going, social collapse seems more and more inevitable: uncontrollable birth rates, uncontrollable advancement of technology, politics driven by convenience over common sense, ever more disproportionate wealth distribution, skyrocketing unemployment, homelessness, crime.
 
On Friday, White House Press Secretary Karine Jean-Pierre called the images "alarming" and said in a statement, "While social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation, and non-consensual, intimate imagery of real people."

I cringe whenever someone talks about the spread of “misinformation” and uses issues like these to further their agenda.

Who decides what is misinformation? Who decides whether a statement is factual and what “facts” are they using to support their claims (and are they truly facts or gaslighting)? Also, what are the biases of the people making the decisions?

There’s a slippery slope whenever we talk about censorship and content moderation, especially when it involves political discourse. Yes, the article is talking about deepfakes and pornography, but it’s obvious from KJP’s comments that the argument is being extended beyond this single example.
 
Who decides what is misinformation? Who decides whether a statement is factual and what “facts” are they using to support their claims (and are they truly facts or gaslighting)? Also, what are the biases of the people making the decisions?

An intelligent society gets to decide that: experts, those with intimate familiarity with a topic, and conscientious people. That's how it works. And claiming that we can never know anything, ever, so no one should decide what the truth is, is one of the biggest forms of gaslighting there is. This is how propagandists in totalitarian countries gained a foothold and established their false truths. First you argue that truth and reality are hazy, confusing things because it's all subjective. Then you introduce murder, mayhem, and distorted points of view, based on the logic that no one has the right to object to or contradict anything because all truth is either subjective or unknowable.

There’s a slippery slope whenever we talk about censorship and content moderation, especially when it involves political discourse. Yes, the article is talking about deepfakes and pornography, but it’s obvious from KJP’s comments that the argument is being extended beyond this single example.

A slippery slope is a logical fallacy. Not only that, there could never be such a thing as a slippery slope in a democracy, because everything can be reversed. We could never reach a point where, if we put X in motion, it becomes a done deal and there is nothing we can do about it. That's not how a rational, thinking democracy works.
 
...
A slippery slope is a logical fallacy.
...

If you pay any attention to politics and politicians, it isn't a fallacy; it's almost an inevitability. Governments don't (willingly) give up power once it's gained. They only ever seek to acquire more. That's why you have to be very careful about what powers they are allowed to have, and the ways they are allowed to use them.
Also, fun fact: you can't have a totalitarian country unless the current ruler has absolute authority and power. A benevolent dictatorship ("our truth is the only truth") might be the best form of government on paper, but we all know it doesn't work out that way.

For everyone who thinks porn isn't a problem, this is just another in a long line of proofs that it is.

You seem to have wandered into the wrong topic section. The issue of deepfaking and its consequences is entirely divorced from the topic of porn and its consequences.
 
An intelligent society with experts, those with intimate familiarity with a topic and conscientious people get to decide that.
So to put it more simply, you prefer to let someone else decide what content and facts you are allowed to see rather than use your own critical thinking skills?

I'm sorry, but I strongly disagree with that premise. I also think it's the height of arrogance to assume that a panel of "experts" needs to "filter" information for the masses. I prefer to use my own judgment and research using the raw data.

First you argue that truth and reality is such a hazy, confusing thing because it's all subjective.
Just to be clear, I never argued that truth is subjective. But information most certainly can be redacted, distorted, or withheld altogether, making the truth difficult to discern. That's the concern with censorship - you're relying on someone else's presentation of the "facts" and assuming/trusting that they have no agenda or bias in the process.

A slippery slope is a logical fallacy.
How?

Not only that, there could never be such a thing as a slippery slope in a democracy, because everything can be reversed. We could never reach a point where, if we put X in motion, it becomes a done deal and there is nothing we can do about it.
As jonny888 pointed out, governments (and people) have a nasty habit of not giving back power once they acquire it.

Also, I cringe a bit at your statements that use "everything" and "never", which are themselves fallacious. Beyond that, it would be naive to assume that our freedoms are irrevocable and can be taken for granted.
 
AI and deepfakes will be a losing battle, like the drug war; legislative actions will not work. In the future I can see royalties being paid for deepfake usage. That would surely open a box of snakes, not just a can of worms.

But then we have cities handing out clean syringes and crack pipes to drug users and opening injection sites where you can get your fix. Then again, some cities are backing down on this.
 
If you pay any attention to politics and politicians, it isn't a fallacy.

There's a discipline called Logic, in which there are formal and informal fallacies. A slippery slope is an informal fallacy, because it argues that someone should be against something because it will lead to the worst-case scenario. It's a fallacy because there's no logical basis to assume that a given thing will lead to the worst-case scenario, and a person can make that argument about anything. For example: "Don't practice organ transplants, because it will lead to organ harvesting and human trafficking." Or, "Don't prosecute child rapists, because it could lead to people in May-December relationships being accused of child abuse."

It's almost an inevitability. Governments don't (willingly) give up power once it's gained. They only ever seek to acquire more.

That's not how American democracy works. People actively lobby for laws, the laws get passed, and there's a system of checks and balances in which laws can be vetoed or overruled. There is no "government" in the sense you're describing.
 
There's a discipline called Logic, in which there are formal and informal fallacies. A slippery slope is an informal fallacy, because it argues that someone should be against something because it will lead to the worst-case scenario.

I'm well aware of it. On paper, it's correct. I'm not going to say everything always leads to the worst outcome. Reality and historical observation show that, while not certain to slip, things often do. Thus, approach with caution. In this case: don't blindly give up power to people, especially if it's the power to think for yourself, and especially to people with a strong track record of serving only their own self-interest, not yours.

That's not how American democracy works. People actively lobby for laws, the laws get passed, and there's a system of checks and balances in which laws can be vetoed or overruled. There is no "government" in the sense you're describing.

If you're trying to argue that the American legal system is perfect, that all government officials and lobbyists are benevolent, non-corrupt entities, and that given the chance they would never filter/bias/alter/censor information to benefit them and their position, then I don't think we live on the same planet.
 
TL;DR: Nothing is true, everything is permitted. We'll have to get used to that.

For those of you in the debate above, the fact is, you're both right, and you're both wrong.

At the end of the day, there are risks with moderation, there are risks with free speech, and there are risks with not changing the status quo. Not just risks, but costs. More than that, there's no clear "right" answer, because any imaginary right answer is not easy to implement correctly in practice (and, as we see in the above debate, what is the definition of "correct"?).

What we are left with is, frankly, where we are now. The current precedent is that porn and parody are generally permitted, and I see no reason why deepfakes would be exempt from such precedent - unless they are also used in conjunction with personal harassment. But it will ultimately come down to the courts and a (generally indecisive) Congress to decide that.

Different states will have different opinions, and different courts will have different opinions (with district courts tending towards the hyperbolic until they are reined in - for better or for worse - as part of the appeals process), but ultimately the precedent is on the side of free speech - which means little interference from the government, and private platforms can make their own policies as to what is allowed.

And this means a society in which we have to accept deepfakes as part of our reality. But even if the law were against deepfakes, malicious actors would still create them. And thus we have only one question to answer: how do you distinguish between the real and the fake?

With technology as it is, I would argue that there will soon be no distinction between the two, and thus, we'll need a society that can cope with that reality. Society and lawmakers will ultimately need to adapt to the idea that combating fake content isn't the solution. Creating a functioning and thriving society in spite of fake information, with people able to continue their lives regardless of what is generated about them, will have to be the solution.

In other words: if porn is made about you on the internet for everyone to see, so what? Nobody cares: your employer doesn't care, your family doesn't care, that random person on the web sees so much content that they don't really care, and life goes on. Same for politics. Same for everything else. People will have to learn to focus on the broader narrative, the patterns, not the instant-gratification post or immediate video.

Why? Once AI can generate imagery, sound, and so forth that cannot be distinguished from real content simply by examining it (that is to say, impervious to watermarking or pattern detection), with VR/AR experiences on the rise, and with no physical law preventing the emulation of any real thing at a super-quantum level (the no-cloning theorem doesn't apply at our scale, at least not for this purpose), it's only a matter of time before one cannot tell whether you took that photo of a real place with your phone or it was generated. So, legal or not, we'll have to deal with this reality eventually.

"Our creed does not command us to be free, it commands us to be wise."

And so, we either must be free, or we must be wise, as a society. I'm not sure we can be both.
 
I'm well aware of it. On paper, it's correct. I'm not going to say everything always leads to the worst outcome. Reality and historical observation show that, while not certain to slip, things often do.
A slippery slope argument claims that we should be against something because it will lead to the worst possible outcome. You can't state point blank that you're making a case that something will be a slippery slope and then say you weren't saying it would lead to the worst possible outcome. The reason is that this is exactly what it means to argue a slippery slope. It's to say that we can't be for something because it will be like standing on a slope so slippery that we'll fall off a cliff.

Thus, approach with caution. In this case: don't blindly give up power to people, especially if it's the power to think for yourself, and especially to people with a strong track record of serving only their own self-interest, not yours.

There's no such thing as blindly giving power to anyone, because the way the US political system works is that if a law is passed and becomes unpopular, it can be reversed, amended into a better version, or struck down by the Supreme Court if someone takes the issue to court.

If you're trying to argue that the American legal system is perfect, that all government officials and lobbyists are benevolent, non-corrupt entities, and that given the chance they would never filter/bias/alter/censor information to benefit them and their position, then I don't think we live on the same planet.

That's not what I'm arguing at all. What I'm arguing is that the beauty of living in a Western democracy--as opposed to North Korea or Russia--is that if it turns out that government officials and lobbyists are evil, corrupt entities, we can vote in officials who aren't and form lobbies to counter the bad lobbies. This is why a slippery slope--informal fallacy aside--could never happen here, and why there's no basis to the argument that, "We should never pass any laws that could rein in bad behavior EVER, because it will lead to a slippery slope."
 
And this means a society in which we have to accept deepfakes as part of our reality. But even if the law were against deepfakes, malicious actors would still create them. And thus we have only one question to answer: how do you distinguish between the real and the fake?

With technology as it is, I would argue that there will soon be no distinction between the two, and thus, we'll need a society that can cope with that reality. Society and lawmakers will ultimately need to adapt to the idea that combating fake content isn't the solution. Creating a functioning and thriving society in spite of fake information, with people able to continue their lives regardless of what is generated about them, will have to be the solution.

In other words: if porn is made about you on the internet for everyone to see, so what? Nobody cares: your employer doesn't care, your family doesn't care, that random person on the web sees so much content that they don't really care, and life goes on. Same for politics. Same for everything else. People will have to learn to focus on the broader narrative, the patterns, not the instant-gratification post or immediate video.

Why? Once AI can generate imagery, sound, and so forth that cannot be distinguished from real content simply by examining it (that is to say, impervious to watermarking or pattern detection), with VR/AR experiences on the rise, and with no physical law preventing the emulation of any real thing at a super-quantum level (the no-cloning theorem doesn't apply at our scale, at least not for this purpose), it's only a matter of time before one cannot tell whether you took that photo of a real place with your phone or it was generated. So, legal or not, we'll have to deal with this reality eventually.
So, this position is what's known as Techno-Fascism: the idea that society must adapt to technology as the new norm, even when it starts violating people's rights, destroying the very fabric of society, and undermining the very democratic and humanitarian principles that a country was founded on. And the reason everyone must adapt to it is that, just like Communism, Fascism, or any other totalitarian ideology, it's presented as something everyone must simply accept as a fact of life.
 
So to put it more simply, you prefer to let someone else decide what content and facts you are allowed to see rather than use your own critical thinking skills?

Do you really think that most people have "critical thinking skills" and the knowledge to make a correct judgment about what is fake information and what is a fact? ... There are people who believe the flat-earth "theory"...
 
Do you really think that most people have "critical thinking skills" and the knowledge to make a correct judgment about what is fake information and what is a fact? ... There are people who believe the flat-earth "theory"...
There are certainly people who lack critical thinking skills. But who gets to make that determination? Politicians? PhDs? You and I? I'm not willing to cede my freedom of thought under the misguided hubris that the rest of mankind isn't capable of the same.
 
So, this position is what's known as Techno-Fascism: the idea that society must adapt to technology as the new norm, even when it starts violating people's rights, destroying the very fabric of society, and undermining the very democratic and humanitarian principles that a country was founded on. And the reason everyone must adapt to it is that, just like Communism, Fascism, or any other totalitarian ideology, it's presented as something everyone must simply accept as a fact of life.
Everything I don't like is fascism: a beginner's guide to internet politics.

Human7 isn't wrong. You can cry about how evil technology is all you want; new tech will continue to exist and be used. Short of banning said tech and enforcing that ban with the very fascist tendencies you oppose, you can't stop it. Changing the law and changing popular culture to ignore deepfakes and the constant barrage of "content" is more realistic. The law should be set up to protect people from false accusations and rabid cancel culture because, as others have pointed out, technology WILL be misused and abused no matter what, and you cannot trust the system to consistently work in your best interests.

Also, your definition of "Techno-Fascism" is completely wrong. Techno-fascism is a concept in which authoritarian rule is executed by technocrats, i.e., a society run by Google/Facebook/Microsoft. What Human7 is proposing is literally the opposite of a techno-fascist society. How do you manage to get that backwards?
There are certainly people who lack critical thinking skills. But who gets to make that determination? Politicians? PhDs? You and I? I'm not willing to cede my freedom of thought under the misguided hubris that the rest of mankind isn't capable of the same.
"Those who encourage eugenics would be grabbed by their own rules". the people who say that people shouldnt be allowed to think for themselves are usually NPC levels of independent.
 
Everything I don't like is fascism: a beginner's guide to internet politics.

I didn't study internet politics. I studied political science and philosophy in college.

Human7 isn't wrong. You can cry about how evil technology is all you want; new tech will continue to exist and be used.

Someone's having a debate with an argument they've been having in their head for a long time. Nobody said technology was evil. Nobody was crying about it, either. We're having a discussion about something specific--what to do about deepfakes and pornography using images of people against their will. Stick to the discussion at hand.
 
"But on-platform moderation may not be enough. Now, members of the US Congress and even the White House have weighed in on the issue. US representative Joe Morelle has said that deepfake images "can cause irrevocable emotional, financial, and reputational harm – and unfortunately, women are disproportionately impacted."

Right. It's always women who are most affected, no matter what it is. There's plenty of the same for men too, but who cares - it's just men.



I didn't study internet politics. I studied political science and philosophy in college.

Get a refund.
 
TL;DR: Nothing is true, everything is permitted. We'll have to get used to that.
...
And so, we either must be free, or we must be wise, as a society. I'm not sure we can be both.
For me, the answer to this topic would be to allow deepfakes, but under the rule that they must be explicitly declared as fakes, and not genuine. This way people can have fun with them without crossing into fraud or defamation.
 