What just happened? Pornhub, still reeling from the New York Times report that alleged the service was profiting from non-consensual videos, has announced a series of new security measures to identify illegal content and keep it off the site, including using a third-party company to verify users.

In December, a report by the New York Times' Nicholas Kristof highlighted clips on Pornhub depicting rape and the sexual abuse of underage girls, and alleged that the company had monetized these videos.

In response, Pornhub halted uploads from unverified users (only content partners and members of its Model Program can now upload content) and restricted downloads to paid content within the verified Model Program. The site removed millions of videos, and Mastercard and Visa cut ties with Pornhub, forcing it to rely on cryptocurrency for payments.

The company has now announced a "series of industry-leading safety and security policies" designed to combat and eradicate illegal videos.

In addition to using software to identify and remove child abuse imagery, Pornhub and parent company MindGeek are expanding their team of human moderators and introducing additional training. Pornhub has also launched a Trusted Flagger Program comprising more than 40 leading non-profit organizations dedicated to child safety on the internet.

Uploads are still limited to studio partners and verified users who are part of the Model Program. Those who want to join must have their identity verified by London-based firm Yoti. Ars Technica reports that the process involves sending a current photo and ID, which Pornhub and MindGeek never view. Once Yoti verifies the information, the data is encrypted so that even the company itself cannot see it.