Why it matters: Project Zero is the fearsome security research team known for three things: discovering some of the worst vulnerabilities out there, finding new ones at a rate of nearly one a day, and giving companies only 90 days to ship a fix before a full public reveal. Equally admired and resented by much of the security community, they've recently broken their silence to defend their counterintuitive policies and explain what they really do.

Every major tech company, from Microsoft to Apple to Intel, has received a bug report from Project Zero containing the following statement: "This bug is subject to a 90-day disclosure deadline. After 90 days elapse or a patch has been made broadly available (whichever is earlier), the bug report will become visible to the public." From then on, the company can choose to fix the bug with Project Zero's assistance, to fix it on its own, or not to fix it at all - in which case the bug report is published immediately.

Each bug report contains almost everything Project Zero can collect on the vulnerability, from how it was first found to proof-of-concept code that exploits it to demonstrate the issue.

As of July 30, Project Zero has published the bug reports of 1,585 fixed vulnerabilities and 66 unfixed ones. Of the 1,585, 1,411 were published within 90 days, and a further 174 were published within the additional 14-day grace period Project Zero grants when it believes a vendor is close to completing a fix. Only two cases exceeded even that: Spectre & Meltdown, and task_t, both of which, when exploited, gave ordinary programs access to the operating system's most closely guarded secrets.
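To put those counts in proportion, here is a quick back-of-the-envelope sketch. The figures are the ones cited above; the variable names are my own:

```python
# Figures cited from Project Zero's published statistics (as of July 30).
fixed_published = 1585   # fixed vulnerabilities with published reports
within_deadline = 1411   # published within the 90-day window
within_grace = 174       # published within the extra 14-day grace period

deadline_rate = within_deadline / fixed_published
grace_rate = within_grace / fixed_published

print(f"{deadline_rate:.1%} of fixed bugs met the 90-day deadline")
print(f"{grace_rate:.1%} needed the 14-day grace period")
```

In other words, roughly 89% of fixed bugs made the hard deadline and about 11% needed the grace period, which is why the two deadline-busting cases stand out.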

Project Zero acknowledges that releasing the bug report prior to a fix is somewhat harmful, but that's the point: it scares companies into actually fixing it, which they say they wouldn't do if they expected the bug report to remain hidden.

"If you assume that only the vendor and the reporter have knowledge of the vulnerability, then the issue can be fixed without urgency. However, we increasingly have evidence that attackers are finding (or acquiring) many of the same vulnerabilities that defensive security researchers are reporting. We can't know for sure when a security bug we have reported has previously been found by an attacker, but we know that it happens regularly enough to factor into our disclosure policy.

Essentially, disclosure deadlines are a way for security researchers to set expectations and provide a clear incentive for vendors and open source projects to improve their vulnerability remediation efforts. We tried to calibrate our disclosure timeframes to be ambitious, fair, and realistically achievable."

Project Zero has a clear line of evidence for this. One study analyzed more than 4,300 vulnerabilities and found that 15% to 20% of them are discovered independently at least twice within a year. For Android, 14% of vulnerabilities are rediscovered within 60 days and 20% within 90; for Chrome, 13% are rediscovered within 60 days. This suggests that although a security researcher might be ahead of the curve, there's a reasonable chance that whatever they discover will be found by attackers soon after.
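To see why those rediscovery rates worry researchers, consider a toy calculation that assumes, purely for illustration, that each reported bug is independently rediscovered at the cited 20% 90-day rate (independence is my simplifying assumption, not a claim from the studies):

```python
# Toy model: each reported bug is independently rediscovered by someone
# else within 90 days with the cited Android rate. Independence is an
# assumption of this sketch, not a finding of the studies.
P_REDISCOVERY_90D = 0.20

def chance_any_rediscovered(num_bugs: int, p: float = P_REDISCOVERY_90D) -> float:
    """Probability that at least one of num_bugs is also found by someone else."""
    return 1 - (1 - p) ** num_bugs

for n in (1, 5, 10):
    print(f"{n:2d} bugs -> {chance_any_rediscovered(n):.0%} chance of overlap")
```

Under these assumptions, with just ten reported bugs there is roughly an 89% chance that at least one is already in someone else's hands, which is the intuition behind treating "only we know about it" as an unsafe default.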

But isn't it dangerous publishing a bug report before a patch?

"The answer is counterintuitive at first: disclosing a small number of unfixed vulnerabilities doesn't meaningfully increase or decrease attacker capability. Our 'deadline-based' disclosures have a neutral short-term effect on attacker capability.

We certainly know that there are groups and individuals that are waiting to use public attacks to harm users (like exploit kit authors), but we also know that the cost of turning a typical Project Zero vulnerability report into a practical real-world attack is non-trivial."

Project Zero doesn't publish a step-by-step hacking guide; they publish what they describe as "only one part of an exploit chain." In theory, an attacker would need significant resources and skill to turn these vulnerabilities into a reliable exploit, and Project Zero argues that an attacker capable of that could have done so even without the published bug report. Perhaps attackers are simply reluctant to start from scratch: as a 2017 study found, the median time from vulnerability to "fully functioning exploit" is 22 days.

That's just one criticism, albeit a big one, and most companies manage to squeeze in under the 90 days anyway. The second criticism many researchers raise is Project Zero's policy of publishing the full bug report after a patch is issued, mainly because patches tend to be imperfect, and because the same vulnerability is liable to crop up elsewhere. Project Zero believes this is advantageous for defenders, who can use it to better understand vulnerabilities, and of little consequence to attackers, who could reverse-engineer the exploit from the patch anyway.

"Attackers have a clear incentive to spend time analyzing security patches in order to learn about vulnerabilities (both through source code review and binary reverse engineering), and they'll quickly establish the full details even if the vendor and researcher attempt to withhold technical data.

Since the utility of information about vulnerabilities is very different for defenders vs attackers, we don't expect that defenders can typically afford to do the same depth of analysis as attackers.

The information that we release can commonly be used by defenders to immediately improve defenses, testing the accuracy of bug fixes, and can always be used to make informed decisions about patch adoption or short-term mitigations."

Sometimes, in war, risks must be taken to achieve overall success. And make no mistake, the battle between security researchers and hackers is real, with serious, real-life implications. So far, Project Zero has operated successfully, with no significant fallout from its aggressive policy, and it will no doubt continue in similar fashion unless something goes drastically wrong. Let's hope that doesn't happen.