Facebook says it has invested $13 billion on safety and security over the past five years

nanoguy

Staff member
In brief: Facebook for years has faced intense criticism about the way it moderates content on its social platforms. After a series of reports earlier this month about worrying internal research results that Facebook allegedly ignored for years, the company is trying to protect its image by providing some missing context.

Earlier this month, a series of scathing reports from The Wall Street Journal revealed that Facebook knows its social platforms have mechanisms that can cause harm. While sifting through leaked internal documents, the publication found the company has been giving special treatment to a supposed elite, playing down Instagram's harmful effects on teenage girls, and making costly mistakes in trying to bring users together and boost positive interactions between them.

During an interview with Axios' Mike Allen, Nick Clegg -- who is Facebook's vice president of global affairs -- said the reports give the company no benefit of the doubt and frame a series of complex and difficult-to-solve issues as a conspiracy.

Clegg also wrote a direct response to The Wall Street Journal's revelations where he described the series as full of "deliberate mischaracterizations" of what the company has been doing in light of internal research that shows negative aspects of its social platforms.

Today, Facebook sought to clarify that it has always wanted to innovate responsibly and that it has made progress in addressing major challenges over the past few years. For context, the company claims it has invested over $13 billion in safety and security measures since 2016, and that more than 40,000 people at Facebook now work in this area alone.

The safety and security teams include outside contractors that focus on content moderation, 5,000 of which were added in the past two years. They are aided by advanced AI systems that understand the same concept in multiple languages and can now remove 15 times more harmful content than they could in 2017.

Overall, Facebook is trying to show that it's been a lot more proactive in dealing with safety and security challenges early in the product development process. The company notes it has removed over three billion fake accounts and 20 million pieces of Covid-19 misinformation in the first half of this year, in addition to implementing time management features that remind you to take a break from using the Facebook and Instagram apps.


 
If they spent 13 billion on security and "privacy", then they probably spent 10 times more trying to extract and scrape whatever information they can for marketing/advertising purposes. "Privacy" here sounds more like making sure others can't get the information, other than them.
 
Safety and security for Zuckerberg? Certainly not for the billions of people on our planet who have had their personal lives scooped up for resale (sorry - sharing of data) to advertisers.

Thinking the same thing. Like 99% protecting him, and a million maybe on his team for actual Facebook security.
2 guys who spend far too long with a waffle maker, or eating blocks of cheese.
 
Following a week of Wall Street Journal disclosures, Facebook said that it had spent $13 billion since 2016 on "user safety and security" with a dedicated 40,000-strong team.
The Wall Street Journal leaks, according to the company, omitted key context regarding complex issues, and it released the figures to demonstrate how it handles those concerns on Facebook and Instagram.
 
Fakebook has told so many lies and half-truths since its inception that one cannot be blamed for not believing a word they say. The sooner they fall under Federal scrutiny the better off everyone will be, and removing the shield the government implemented for all IT companies is paramount...
 