In brief: There's no shortage of methods for manipulating images and video, so Google is preparing for the worst by releasing tools that make it easier to spot doctored content online. Whether or not it's doing enough is a question on everyone's lips, but the company believes its efforts will kickstart the fight against misinformation campaigns.

As we get closer to the 2020 elections, tech giants are looking for ways to fight the spread of fake news, misinformation, and deepfake videos on big social and search platforms. Facebook, for example, will ban deepfakes outright while allowing political ads to run wild.

Google has likewise been preparing its defenses against the waves of fake news. Through its Jigsaw subsidiary, the search giant wants to take a more proactive approach: a recently released platform called Assembler that helps fact-checkers and journalists verify images quickly, before they have a chance to spread online.

The tools are free, and while they're admittedly part of an "early stage experimental platform," they're a good starting point that includes contributions from academics at the University of California, Berkeley, the University of Naples Federico II, and the University of Maryland.

Assembler works by combining several machine learning detectors, each trained to find a different tell: color anomalies, inconsistencies in noise patterns, and suspicious pixel-level properties across an image.

In layman's terms, it's good at detecting the most common image manipulation techniques, such as playing with brightness or copying and pasting textures or objects to mask something or someone. It produces a score representing the probability that a picture has been doctored or otherwise altered, similar to Adobe's About Face AI.
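
To make the ensemble idea concrete, here's a minimal sketch of how several independent detectors might be combined into a single manipulation score. The detector heuristics and function names below are illustrative stand-ins, not Jigsaw's actual models, which are learned from data rather than hand-coded:

```python
# Illustrative only: two toy detectors plus a combiner, showing the shape of
# an ensemble like the one described above. Not Jigsaw's real pipeline.
import numpy as np

def brightness_inconsistency(image: np.ndarray) -> float:
    """Toy proxy: how much mean brightness varies across the four quadrants (0..1)."""
    h, w = image.shape
    quads = [image[:h // 2, :w // 2], image[:h // 2, w // 2:],
             image[h // 2:, :w // 2], image[h // 2:, w // 2:]]
    means = [q.mean() for q in quads]
    return float(min(np.std(means) / 64.0, 1.0))

def noise_inconsistency(image: np.ndarray) -> float:
    """Toy proxy: how unevenly high-frequency noise is distributed left vs. right."""
    noise = np.abs(np.diff(image.astype(float), axis=1))
    left, right = noise[:, : noise.shape[1] // 2], noise[:, noise.shape[1] // 2:]
    return float(min(abs(left.mean() - right.mean()) / 16.0, 1.0))

def ensemble_score(image: np.ndarray) -> float:
    """Combine detector outputs; a plain average stands in for a learned combiner."""
    return float(np.mean([brightness_inconsistency(image),
                          noise_inconsistency(image)]))

img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
img[100:150, 100:150] = 255  # simulate a crudely pasted bright patch
print(f"manipulation score: {ensemble_score(img):.2f}")
```

A real system would learn both the individual detectors and how to weigh them; the simple average here is just meant to show how multiple signals collapse into one probability-like score.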

Another ambition of the project is to fine-tune tools that can spot deepfakes created using StyleGAN, an algorithm capable of producing convincing imaginary faces.
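
Detecting StyleGAN output typically boils down to a binary classifier over face crops. The tiny network below is a hypothetical sketch of that shape, not Jigsaw's model; the architecture, layer sizes, and names are all assumptions for illustration:

```python
# Hypothetical sketch: a small CNN that scores a face crop as real vs.
# GAN-generated. Architecture and names are illustrative, not Jigsaw's.
import torch
import torch.nn as nn

class FakeFaceDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to one value per channel
        )
        self.head = nn.Linear(32, 1)  # one logit, squashed to P(generated) below

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.head(h))

detector = FakeFaceDetector()                 # untrained; weights are random here
face = torch.rand(1, 3, 224, 224)             # a normalized face crop
print(f"probability generated: {detector(face).item():.2f}")
```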

Manipulated images are notoriously hard to verify, which is why even Adobe will at most promise to check images against manipulation methods used in its own tools. That means Jigsaw is fighting an uphill battle against a myriad of techniques and tools, and that's before deepfake videos even enter the picture.

Jigsaw CEO Jared Cohen explained in a blog post that the company "observed an evolution in how disinformation was being used to manipulate elections, wage war and disrupt civil society." That realization led to the decision to advance the technologies used to stop these attempts.

As of this writing, Assembler offers seven different tools that journalists and others can use to spot faked images. However, Jigsaw researchers Santiago Andrigo and Andrew Gully told The New York Times that Assembler is not a panacea, and that the detection ecosystem around it will need to develop and improve quickly over time.

That's why Jigsaw also announced a web publication called "The Current," which will serve as a running showcase of ongoing research in the fight to detect misinformation campaigns. Cohen says, "our main motivation was creating a place where people can access the work of many experts and independent researchers and organizations that are on the front lines of dealing with this problem."