The European Union's top court has ruled that Facebook can be ordered by national courts to remove illegal or defamatory posts from its platform. This means an individual country can compel the social giant to police and take down content, even when that content was posted by someone outside the EU.
The landmark ruling was made in a case brought by Austrian politician Eva Glawischnig-Piesczek, who demanded that Facebook remove a post she said harmed her reputation. The post, visible to any Facebook user, paired an image of the politician with an insulting comment on an article about minimum wage for refugees.
The decision sets a precedent for platform owners like Google and Facebook, and is indicative of the EU's general approach to regulating tech giants. The bloc's Court of Justice is trying to clarify the reach that national courts have over these companies when it comes to moderating content on their platforms, especially since many consider the internet to be "borderless."
Under the "Directive on electronic commerce," a company like Facebook isn't liable for illegal content "if it has no knowledge of its illegal nature or if it acts expeditiously to remove or to disable access to that information as soon as it becomes aware of it."
Today's ruling says "EU law does not preclude a host provider like Facebook from being ordered to remove identical and, in certain circumstances, equivalent comments previously declared to be illegal. In addition, EU law does not preclude such an injunction from producing effects worldwide, within the framework of the relevant international law."
This essentially means that while social giants aren't liable for content posted on their platforms, a court can still order them to take down that content and make it inaccessible worldwide. On the other hand, the European Court of Justice recently ruled that Google doesn't need to honor "right to be forgotten" requests made outside the EU.
Facebook naturally opposed the ruling, arguing that it "raises critical questions around freedom of expression and the role that internet companies should play in monitoring, interpreting and removing speech that might be illegal in any particular country." The company fears the ruling could lead to an obligation to proactively monitor content in the future, which would be a costly proposition even for Big Tech.
In 2015, Google, Facebook, and Twitter all agreed to delete hate speech from their platforms within 24 hours. The broader trend, however, is for EU officials to gradually overhaul regulation and increase the liability of companies that operate social media platforms. And since the US is also starting to question the way tech giants wield their power, it may end up taking a stance similar to the EU's.