Why it matters: Terrorist content has long been a problem on social media sites. And while many companies have voluntarily implemented their own solutions to address it, the European Union has decided the likes of Facebook and Twitter should be forced to remove extremist content—and face the consequences if they don’t act quickly enough.
Industry leaders have been battling the promotion of terrorist content for years, often using a combination of AI, machine learning, human moderators, and shared databases. Several joint initiatives have been formed to counter online extremism, including the Global Internet Forum to Counter Terrorism, the EU Internet Forum, and the Shared Industry Hash Database.
But The Financial Times reports that the EU no longer considers self-policing by platforms sufficient, and that it would "take stronger action in order to better protect our citizens," according to Julian King, the EU's commissioner for security.
Last March, the Commission published recommendations for increased protection against illegal content online. These included the one-hour rule, which held that because terrorist content is most harmful in the first hours after it appears, companies should, as a general rule, remove it within 60 minutes of it being posted.
Now, King says it is “likely” that this voluntary rule will become a mandatory requirement, as the EU has “not seen enough progress” when it comes to removing terrorist material. Social media firms and websites that fail to comply would face fines.
The legislation will take a while to come into effect, and only if the European Parliament and a majority of EU member states approve it. If the one-hour rule does become mandatory, it will be interesting to see how the EU enforces it and how large the fines it hands out will be.