YouTube reportedly using technology that removes extremist content automatically

By midian182 · 4 replies
Jun 27, 2016
  1. In the fight against online extremism, most companies rely on their users to flag up inappropriate content, which is then reviewed and usually deleted by employees. But according to Reuters, big internet firms such as Facebook and Google have secretly introduced an automated process for removing extremist material.

    It’s claimed the process uses technology that was originally developed to identify and remove copyrighted material. It works by comparing “hashes,” unique digital identifiers that companies automatically assign to specific videos, against a database of previously banned content. A similar technique has already been used to detect online images of child abuse.

    It isn’t clear what level of automation the system can operate at, or exactly how the banned content database is put together. Reuters said the companies that use the process are reluctant to discuss it due to concerns that terrorists may learn to manipulate the system.

    “There's no upside in these companies talking about it,” Matthew Prince, chief executive of content distribution company CloudFlare, told Reuters. “Why would they brag about censorship?”

    While the process stops previously banned content from being reposted, it is unable to identify new extremist material. And, as is often the case where extremism is concerned, there’s the question of whether it should be up to a company to decide what constitutes offensive content and what falls under free speech.

    “It’s a little bit different than copyright or child pornography, where things are very clearly illegal,” said Seamus Hughes, deputy director of George Washington University’s Program on Extremism.

    As terrorist organizations such as ISIS continue to use the web as an effective propaganda and recruitment tool, more companies are pushing back against the practice. Facebook and Twitter remove terrorist-related accounts as quickly as they are created, and Microsoft announced last month that it had officially banned all “terrorist content” from its consumer services, including Outlook, Xbox Live, and Docs.

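The hash-matching process the article describes can be sketched in a few lines. This is a hypothetical illustration, not any company's actual system: it uses SHA-256 from Python's standard library as the fingerprint, whereas real deployments reportedly use perceptual hashes so that re-encoded copies still match, and the names `banned_hashes`, `ban`, and `is_banned` are invented for the example.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Compute a "hash" -- a unique digital identifier -- for an upload."""
    return hashlib.sha256(data).hexdigest()

# Shared database of hashes of previously banned content.
banned_hashes: set[str] = set()

def ban(data: bytes) -> None:
    """Record a removed upload so identical copies are blocked."""
    banned_hashes.add(fingerprint(data))

def is_banned(data: bytes) -> bool:
    """Check a new upload against the database of banned hashes."""
    return fingerprint(data) in banned_hashes

ban(b"previously-removed-video-bytes")
print(is_banned(b"previously-removed-video-bytes"))  # True: exact re-upload is caught
print(is_banned(b"brand-new-video-bytes"))           # False: new material slips through
```

Note the limitation the article points out: the scheme only blocks re-uploads of content already in the database; genuinely new extremist material produces an unseen hash and passes through.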

  2. Uncle Al


    I would have to say that anything that defuses the terrorists' ability to communicate & share information is for the best, but I do hope they will strictly monitor the system to make sure it does not overly censor those who simply have unpopular ideas and sentiments.
  3. davislane1


    Not going to work that way. Facebook has already implemented a similar system, and it scrubs politically incorrect content just as much as (if not more than) actual terrorist propaganda.
  4. Lurker101


    Oh great. Another automated flagging system to interfere with content creators. I can't wait to see how this system gets abused.
  5. Yynxs


    It's not just these two. I just had a bunch of email rejected as "spam" when it included "Hillary," "Benghazi," and/or "killed." I don't use any Google anything and definitely don't Facebook. The part I loved was, I was sending it to my wife across the room. It could be blamed on my spam filters, but I don't use them on the web; they're internal to my client.

    Considering that little Markie and all of Google are left wing demagogues extraordinaires, look for more of this censorship as time goes on and the election gets close.

    Now they have the excuse to censor discussion as "not my fault, it's automated". As the picture of Orwell says on dailyhaha, "Didn't you read my book?"

