With Facebook now being used by almost 2 billion people around the world, the effect it has on society is at an all-time high. Yesterday, the social network announced plans to be more open about how it deals with some of the more controversial issues it faces, such as fake news, how it uses people's data, and online terrorism.

The seven "Hard Questions" Facebook asks are:

  • How should platforms approach keeping terrorists from spreading propaganda online?
  • After a person dies, what should happen to their online identity?
  • How aggressively should social media companies monitor and remove controversial posts and images from their platforms? Who gets to decide what's controversial, especially in a global community with a multitude of cultural norms?
  • Who gets to define what's false news, and what's simply controversial political speech?
  • Is social media good for democracy?
  • How can we use data for everyone's benefit, without undermining people's trust?
  • How should young internet users be introduced to new ways to express themselves in a safe environment?

The first post in the series examines how Facebook deals with terrorist material appearing on its site, a question that has once again been raised in the wake of the London and Manchester attacks.

While the company has long been accused of not doing enough to deal with the problem, Monika Bickert, director of global policy management, and Brian Fishman, counterterrorism policy manager, wrote: "We remove terrorists and posts that support terrorism whenever we become aware of them. When we receive reports of potential terrorism posts, we review those reports urgently and with scrutiny."

The company said it has a team of over 150 people "exclusively or primarily focused on countering terrorism."

Facebook has recently started using artificial intelligence to stop the spread of extremist material. The AI is used in areas such as image matching, which compares uploaded photos and videos against known terrorist content, and text analysis, which can detect posts praising or supporting terrorism. Facebook wants to expand these methods to other apps it owns, including Instagram and WhatsApp.
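Facebook hasn't published the internals of its matching systems, but the general shape of image matching can be sketched as follows. This is a simplified illustration that uses an exact cryptographic hash lookup; production systems like these typically rely on perceptual hashes, which survive re-encoding and minor edits. All function and variable names here are hypothetical.

```python
import hashlib

# Hypothetical database of fingerprints of known prohibited content.
known_fingerprints: set[str] = set()

def fingerprint(data: bytes) -> str:
    # Simplification: an exact SHA-256 hash. Real deployments use
    # perceptual hashes so that re-compressed or slightly altered
    # copies of the same image still match.
    return hashlib.sha256(data).hexdigest()

def register_known_content(data: bytes) -> None:
    # Add a confirmed piece of prohibited content to the database.
    known_fingerprints.add(fingerprint(data))

def is_match(upload: bytes) -> bool:
    # Check an uploaded file against the fingerprint database.
    return fingerprint(upload) in known_fingerprints
```

The appeal of this design is that platforms only need to exchange fingerprints, not the images themselves, which is also the idea behind the shared industry database mentioned below.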

Mark Zuckerberg's company is also partnering with others in the fight against terror. It has joined Microsoft, Twitter, and YouTube in maintaining a shared database of unique digital fingerprints, and works alongside government and intergovernmental agencies around the world.

Facebook hasn't been very open about how it deals with these issues in the past; Bickert said the recent attacks are part of the reason it has chosen to be more transparent now. "We're talking about this because we are seeing this technology really start to become an important part of how we try to find this content," she said.