Former moderators sue TikTok over trauma caused by viewing "extremely disturbing" videos

midian182

In brief: Not for the first time, former content moderators are suing TikTok over claims the company didn't do enough to support them as they watched extreme and graphic videos that included child sexual abuse, rape, torture, bestiality, beheadings, suicide, and murder.

Ashley Velez and Reece Young filed a class-action lawsuit against TikTok and parent company ByteDance, NPR reports. They worked through third-party contractors Telus International and New York-based Atrium.

The suit claims that TikTok and ByteDance violated California labor laws by failing to provide Velez and Young with adequate mental health support in a job that involved viewing "many acts of extreme and graphic violence." They also had to sit through hate speech and conspiracy theories that lawyers say had a negative impact on their mental well-being.

"We would see death and graphic, graphic pornography. I would see nude underage children every day," Velez said. "I would see people get shot in the face, and another video of a kid getting beaten made me cry for two hours straight."

The plaintiffs say they were allowed just two 15-minute breaks in their 12-hour workday and had to review videos for no longer than 25 seconds before deciding with more than 80% accuracy whether the content broke TikTok's rules. Moderators would often watch more than one video at once to meet quotas, the suit says, accusing TikTok of imposing high "productivity standards" on moderators.

Both plaintiffs say they had to pay for counseling out of their own pockets to deal with the psychological impact of the job. They also had to sign non-disclosure agreements that prevented them from discussing the details of their work.

The suit claims that TikTok and ByteDance made no effort to provide "appropriate ameliorative measures" to help workers deal with the extreme content they were exposed to.

In December, another TikTok moderator launched a similar class-action lawsuit against the company and ByteDance, but the case was dropped last month after the plaintiff was fired, NPR reports.

In 2018, a content moderator for Facebook contractor Pro Unlimited sued the social network after the "constant and unmitigated exposure to highly toxic and extremely disturbing images at the workplace" resulted in PTSD. Facebook settled the case for $52 million. A YouTube moderator also sued the Google-owned firm in 2020 after developing symptoms of PTSD and depression as a result of reviewing thousands of disturbing videos.


 
So, stumbling upon Rotten and Ogrish way back in '96 was actually prep work for moderating social media sites. I wonder if that'd work in an interview.

"You may have to see some pretty disturbing, violent, and graphic content...can you handle that?"
"I was browsing Rotten, Orgish, and Stileproject when they first started"
"Oh damn, hired!"

/not that I'd actually want to do that job
//could handle the gore and violence
 
I think a more important lesson here is that "other people" should not be moderating what people say.

Free Speech is chaotic.
Free Speech is offensive.
Free Speech is ugly.

You can't police the world; the crimes committed (murders, rapes, etc.) will probably never be prosecuted.

Everyone has a 4K camera in their pocket, and nowadays they are eager to broadcast and upload their latest crimes - especially terrorists and criminals who live beyond the reach of justice.

You have two choices:

#1 Accept that you are subjecting yourself to these graphic images

#2 Uninstall the app and move on with your life.

If you can't handle this job, then don't sign up. No amount of counseling is ever going to get those images out of your head.
 
So, stumbling upon Rotten and Ogrish way back in '96 was actually prep work for moderating social media sites. I wonder if that'd work in an interview.

"You may have to see some pretty disturbing, violent, and graphic content...can you handle that?"
"I was browsing Rotten, Orgish, and Stileproject when they first started"
"Oh damn, hired!"

/not that I'd actually want to do that job
//could handle the gore and violence

Best Gore was the best. And now it's gone for some reason. I'm searching for an alternative.

I prefer my media uncensored.
 
I'm curious as to what these people expected when they applied for their jobs as moderators... were they unaware of all of this before? And if so, WHY WERE THEY SO IGNORANT!

Very good point. I would only add: does the company give them warning and require them to acknowledge the level of disturbing images, etc. that they will be subjected to? Not saying it's not the individual's responsibility, but it is the company's as well...
 
Best Gore was the best. And now it's gone for some reason. I'm searching for an alternative.

I prefer my media uncensored.

Wait, Best Gore is gone too?
Haven't checked in a few years now; it pretty much became the place after Rotten got all white power-y in their forums, Ogrish became... whatever it is now after the domain ran out, and Stile turned into sketchy bad amateur porn.

I get why sites like that usually only make it a few years at most; almost no one wants to see the dark side of human nature, just verbally acknowledge it like that does something to help bring awareness and 'fix' it.

Society is fine with violence and gore all over TV and movies like it's nothing; they even laugh at it at times. It's not all that far removed to see the results of the real thing, though.

"Can you handle life?"
Indeed.
 
Considering the amount of child porn, snuff videos and other forms of debauchery coming through these "apps", it's amazing that they prosecute people for viewing or having these videos on their phones.

WhatsApp, WeChat, and Discord, for example, automatically save videos and pics to a phone, and a bad actor can easily post child porn, snuff videos, or debauchery. They used to do it on Facebook all the time until the moderator pool grew.

I think we're going about this the wrong way: instead of taking people whose minds aren't prepared for this stuff and making them mods, you should take people who regularly view this stuff and place them in specific departments to handle those specific types of media complaints.

 
The job title kind of tells you what to expect: you're a moderator for a free, public-facing social media company where anything can be posted by anybody.

What the f*ck were they expecting? To take down the odd "cats fighting" video?

Edit: I do get that they needed more mental health support, but come on, you don't sign up for a job like that unless you're mentally very strong and can deal with stuff like that daily.

It's a bit like a doctor suing a hospital because he can't stand the sight of blood and the hospital didn't adequately help with his blood-related PTSD. What did the doctor expect at a hospital? No blood?
 
I would think a well-developed content-screening AI may be the way to go. Programmatic sorting may result in some acceptable content getting banned, but better to have a manned department checking requests to restore legit content. Just make it easy for posters to quickly get an accidental ban reviewed. There would be less negative impact on your content reviewers that way, though there's a risk to public perception of your service.
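A minimal sketch of what that triage might look like, assuming a hypothetical classifier that returns a 0-1 violation score; the thresholds, queue names, and classify() stub below are all made up for illustration, not any real moderation API:

```python
from dataclasses import dataclass
from typing import List

AUTO_REMOVE = 0.95  # near-certain violations: removed without human eyes
AUTO_ALLOW = 0.10   # near-certain safe content: published immediately

@dataclass
class Video:
    video_id: str
    score: float = 0.0        # hypothetical classifier's violation score
    status: str = "pending"   # pending / published / removed / in_review

human_review_queue: List[Video] = []  # only ambiguous uploads land here
appeal_queue: List[Video] = []        # poster-requested re-checks of bans

def classify(video: Video) -> float:
    """Stand-in for a trained content classifier."""
    return video.score  # pretend the model already scored the upload

def triage(video: Video) -> None:
    """Route an upload: auto-remove, auto-publish, or queue for a human."""
    score = classify(video)
    if score >= AUTO_REMOVE:
        video.status = "removed"
    elif score <= AUTO_ALLOW:
        video.status = "published"
    else:
        video.status = "in_review"  # the gray zone is all humans ever see
        human_review_queue.append(video)

def appeal(video: Video) -> None:
    """Easy path for posters to contest an accidental automated ban."""
    if video.status == "removed":
        appeal_queue.append(video)
```

The point of the two thresholds is that human reviewers only ever see the ambiguous middle band plus appeals, which is exactly the reduced-exposure trade-off described above.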

 
I would think a well-developed content-screening AI may be the way to go. Programmatic sorting may result in some acceptable content getting banned, but better to have a manned department checking requests to restore legit content. Just make it easy for posters to quickly get an accidental ban reviewed. There would be less negative impact on your content reviewers that way, though there's a risk to public perception of your service.


Please show me an AI that can tell the difference between porn and illegal porn?

Show me an AI that knows the difference between real violence and movie violence?

This is the reason human speech shouldn't be moderated.
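Worth noting that for known illegal imagery, real platforms don't ask a classifier to judge at all; they match perceptual hashes against curated databases (Microsoft's PhotoDNA, Meta's PDQ). A toy sketch of that idea, assuming the third-party Pillow and imagehash packages, with a made-up database entry and threshold:

```python
from PIL import Image
import imagehash

# Stand-in for a curated database of hashes of verified illegal content
# (maintained in reality by bodies like NCMEC); this entry is made up.
KNOWN_BAD_HASHES = [imagehash.hex_to_hash("d879f8f8f0f0e0c0")]

MAX_DISTANCE = 8  # Hamming-distance tolerance for near-duplicate images

def matches_known_bad(path: str) -> bool:
    """Flag an image whose perceptual hash is close to a known-bad hash."""
    h = imagehash.phash(Image.open(path))
    return any(h - bad <= MAX_DISTANCE for bad in KNOWN_BAD_HASHES)
```

That catches re-uploads of already-verified material, but it does nothing for novel content, which is exactly the gap being pointed at here.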
 
Please show me an AI that can tell the difference between porn and illegal porn?

Show me an AI that knows the difference between real violence and movie violence?

This is the reason human speech shouldn't be moderated.
No... that's the reason it shouldn't be SOLELY moderated by an AI... yet. AIs will get better and better though, so eventually that will probably be the answer.

Free speech can be, has been, and always will be moderated in some way... your freedom ends as soon as it affects someone else...
 
I would think a well-developed content-screening AI may be the way to go. Programmatic sorting may result in some acceptable content getting banned, but better to have a manned department checking requests to restore legit content. Just make it easy for posters to quickly get an accidental ban reviewed. There would be less negative impact on your content reviewers that way, though there's a risk to public perception of your service.

Nope. They just devote that kind of coding power to sniffing out whatever the platform considers "misinformation".
 
Nope. They just devote that kind of coding power to sniffing out whatever the platform considers "misinformation".
Yeah... like how Covid isn't a real pandemic, and that Trump really won the election, right? Don't you hate when people try to lie to you... and have the nerve to give evidence!?!?
 
Isn't there a way to track the uploaders of disturbing content? That would be a great deterrent against those degenerates.
 
Isn't there a way to track the uploaders of disturbing content? That would be a great deterrent against those degenerates.


How do you track uploaders of disturbing content who are either: a) using public WiFi, b) using a VPN on public WiFi, c) parked next to a library and wardriving, or d) using burner phones/tablets or laptops they can easily destroy?

Or how about e) in countries with no extradition treaty.

Or how about f) sanctioned by the populace, e.g. Ukrainian soldiers making TikToks of captured/injured/dead Russian soldiers?
 
So, stumbling upon Rotten and Ogrish way back in '96 was actually prep work for moderating social media sites. I wonder if that'd work in an interview.

"You may have to see some pretty disturbing, violent, and graphic content...can you handle that?"
"I was browsing Rotten, Orgish, and Stileproject when they first started"
"Oh damn, hired!"

/not that I'd actually want to do that job
//could handle the gore and violence

Worked 911 for over 20 years, lots of ride-alongs... I miss Ogrish!
 
Please show me an AI that can tell the difference between porn and illegal porn?

Show me an AI that knows the difference between real violence and movie violence?

This is the reason human speech shouldn't be moderated.
I suppose I had geared that comment more toward services that don't allow pornographic or extremely violent content at all.
 
Welcome to the reality of working. I am sure they were told the type of posts they would come across, hence the word "Moderator". If they accepted the job, they have nothing to sue about and deserve nothing. I wonder how many of them have TikTok accounts and are living in a fantasy world. If you can't take the heat, get out and let someone else have the job.
 
No amount of counseling or "support" can help someone withstand something they weren't built to withstand. Some jobs can't be done by just anybody, even if the barriers to entry are virtually nonexistent.

This is the difference between people who leave a war with PTSD and people who leave and want to jump back in! Some guys are just built different.
 
This crap really exists?

OMG, I am going to be a firefighter, and when I walk into a burning house and see a dead body I am going to sue the county... for subjecting me to a disturbing scene.

WTF is this world coming to?
 