Vulnerability disclosure: CERT versus full disclosure

The vulnerability-disclosure conversation has been quietly maturing throughout 2000. The two main models — CERT's coordinated disclosure and Bugtraq's full disclosure — are converging in interesting ways. The convergence is not finished and produces real disagreements; the trajectory is informative.

Worth writing down where things now stand.

The two original positions

Full disclosure, as championed historically by Bugtraq and the broader security research community: when a researcher finds a vulnerability, they publish it openly with full technical detail, possibly including proof-of-concept exploit code. The argument: vendors will not patch unless forced to; operators cannot defend unless they know what is happening; secrecy preferentially benefits attackers, who have other channels for the information.

Coordinated disclosure, as championed by CERT/CC and various vendor-friendly organisations: when a researcher finds a vulnerability, they notify the vendor privately, give them time to develop a patch, and publish only after the patch is available. The argument: public disclosure of an unpatched vulnerability creates a window where attackers can exploit it; coordination minimises the window; patches deployed before exploitation is widespread reduce harm.

Both positions have a coherent rationale. Both have their honest advocates. The disagreement has been one of the defining cultural conflicts of the security community over the past decade.

What has shifted in 2000

Four shifts I have observed.

Researchers are more often choosing coordinated disclosure as a default. Five years ago, a serious researcher who found a serious bug would typically publish to Bugtraq. Today, the same researcher will more often notify the vendor first and follow CERT's coordinated process. The shift is not universal; full-disclosure researchers still exist and publish actively. The baseline has moved.

Vendors are responding faster. Microsoft's response time to a serious advisory is now days, not weeks. Cisco's is similar. Smaller vendors vary widely. The general trajectory is toward shorter vendor-response windows, which makes coordinated disclosure more practical.

Public-disclosure deadlines are emerging. A common pattern: researcher notifies vendor; vendor commits to a patch by date X; researcher publishes on date X regardless of whether the patch is ready. The deadline puts pressure on vendors to ship and gives researchers a clear path to disclosure if the vendor stalls. Several specific researchers (notably some of the L0pht alumni now at @stake) are operating this way.

Bug bounties are emerging. Some vendors are paying researchers for responsible disclosure. The economics shift the incentives — researchers can be compensated for the time investment, which makes coordinated disclosure more sustainable. Microsoft is not yet doing this seriously; some smaller vendors are.

What still divides the community

A few specific issues where consensus has not formed.

Disclosure timeline length. How much time should the vendor have between notification and public disclosure? The numbers I see proposed range from 30 days (aggressive) to 6 months (vendor-friendly). 90 days is becoming a rough industry default. There is no real consensus.

Exploit code in disclosure. Should the public disclosure include working exploit code? The argument for: defenders need it to test their patches and develop signatures. The argument against: it is exactly what attackers need to deploy exploitation at scale. Different researchers come down differently on this; the community is genuinely divided.

Forced disclosure when the vendor refuses. What if a vendor refuses to patch, denies the bug, or stalls indefinitely? The full-disclosure community says: disclose anyway. The coordinated-disclosure community says: keep coordinating, even if it takes years. The middle ground is some kind of escalation — a published deadline with explicit warning that disclosure will happen — but its application is uneven.

Information sharing under non-disclosure. Some vendors offer researchers detailed access under NDA. The information helps the researcher understand the issue better but constrains them from public discussion. Whether this is acceptable, and what conditions justify it, varies by community and by case.

The CERT model, examined

CERT/CC operates as a coordinator: researchers report bugs, CERT manages the disclosure process with the vendor, CERT publishes a public advisory once a patch is ready. In principle, this is a useful service. In practice, the model has issues:

CERT's queue is long. Reports submitted to CERT can take months to result in a published advisory. Researchers whose work waits in the queue have an incentive to publish independently.

CERT's resourcing is limited. As a non-profit with modest funding, CERT cannot scale to the volume of reports the modern security community produces. The bottleneck is real and produces frustration.

CERT's industry-friendly position is sometimes at odds with researchers. When a vendor stalls, CERT's institutional incentive is to maintain the relationship rather than to force disclosure. This produces tension with researchers who feel their reports are being mishandled.

None of this is a criticism of CERT specifically. It is a description of the structural challenges in any coordinated-disclosure model. CERT is doing the work; the work is hard.

My own current position

I am not, in this notebook, generally a publisher of new vulnerability research. The few small bugs I have found in software over the years, I have reported through CERT-style coordinated processes; the response has typically been adequate. My position on the broader question is:

Coordinated disclosure as default. When I find a bug, I notify the vendor first. The notification includes a reasonable timeline (90 days unless the vendor needs longer for a specific reason). Public disclosure happens after the patch is shipped or after the timeline expires.

Full disclosure when the vendor is unresponsive. If the vendor stalls past the deadline without a credible reason, public disclosure is the right path. The threat of disclosure is what makes coordinated disclosure work.

Caution about exploit code. I tend toward not publishing working exploit code in initial disclosures. The technical detail to understand the bug should be public; the specific automation is more delicate. This is a judgement call and reasonable people disagree.

Acknowledgement of the trade-off. Coordinated disclosure favours operators who patch quickly and disadvantages operators who patch slowly. The unfairness is real; the alternative (full disclosure) has its own unfairness. Neither is perfect.
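The timeline policy above — notify the vendor, default to a 90-day window, disclose on the earlier of patch shipment or window expiry — can be sketched as a small date calculation. This is an illustrative sketch only; the function name and parameters are my own, not any real tool's API.

```python
# Sketch of the disclosure-timeline policy described above.
# All names here are hypothetical, for illustration only.
from datetime import date, timedelta

DEFAULT_WINDOW_DAYS = 90  # the rough industry default discussed earlier

def disclosure_date(notified, patched=None, window_days=DEFAULT_WINDOW_DAYS):
    """Return the date on which public disclosure is appropriate.

    notified:  date the vendor was privately notified
    patched:   date the patch shipped, or None if no patch yet
    """
    deadline = notified + timedelta(days=window_days)
    if patched is not None and patched < deadline:
        # Patch shipped before the deadline: disclose once it is out.
        return patched
    # Otherwise disclose when the window expires, patch or no patch.
    return deadline

# Example: notified 2000-09-01 with no patch -> disclose 2000-11-30.
print(disclosure_date(date(2000, 9, 1)))
```

The "vendor needs longer for a specific reason" case from above would simply be a larger `window_days`, agreed explicitly at notification time rather than extended silently.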

Where I think this goes

A few predictions, in roughly increasing order of confidence.

The 90-day default solidifies. Most disclosure events in 2001 will follow some variant of "90 days from notification to public disclosure". Specific cases will vary, but the central tendency will be there.

Bug bounties become standard at large vendors. Microsoft will start paying for responsible disclosure within 2-3 years. Other large vendors will follow. The economic incentive is too aligned for it not to happen.

Some specific incident will produce regulatory attention. A high-profile case of a vendor refusing to patch or a researcher publishing exploit code that produces visible damage will trigger legislative interest. The regulatory response will be ham-fisted; it will, however, formalise some kind of disclosure norm at the law level.

The community will continue to be divided. No specific consensus answer will emerge. The current rough conventions will harden into broad practice but with continued disagreement at the edges. Both the coordinated and full-disclosure positions will continue to have honest advocates.

A small reflection

The disclosure conversation has, on the available evidence, made real progress over the past several years. The cultural conflict that was sharp in 1995 is now less sharp. The conventions are more uniform. The relationships between researchers and vendors are, on the whole, better.

The progress is slow and uneven. There are still bad actors on both sides — vendors who stall, researchers who publish irresponsibly, organisations that exploit the ambiguity. The conventions do not eliminate these; they make them visible against a background of more responsible behaviour.

For my own writing: I expect to continue treating disclosure decisions as worth thinking about explicitly. The choice of how to handle a specific bug is a judgement call; the considerations are real; the writing-down is a useful discipline.

This is the kind of post that does not produce immediate operational guidance. The point is the conceptual clarification — knowing where the disagreements are, what positions exist, how the conversation has shifted. Worth thinking about even when there is nothing immediate to do.

More as the year wraps up.
