Threat modelling that survives contact with delivery

I have written, reviewed, or quietly disowned several hundred threat models over the years. The thing they have in common is that the quality of the document is almost completely uncorrelated with the quality of the security outcome. Beautiful threat models routinely produce no improvement; ugly ones occasionally produce step-changes. The reason is not in the document; it is in what happens around the document.

What a threat model is for

It is for changing the design before the design is fixed. That is the only thing it is for. If the design is already fixed, you are doing a security review; if you are documenting after deployment, you are doing a risk assessment. Both are valid activities, but neither of them is threat modelling, and conflating them is one of the reasons threat modelling has the reputation it does.

The window of useful threat modelling is narrow: it is the part of the project where the architecture is sketched, the trust boundaries are emerging, and the team is open to "what if" questions because answering them is still cheap. Outside that window, the activity becomes performative, because the answers cannot easily change anything.

Pick the smallest framework that works

I have used STRIDE, attack trees, kill chains, PASTA, and several bespoke frameworks borrowed from specific clients. They mostly produce similar lists of plausible threats when applied to the same system by the same humans. The framework is a scaffold; the thinking is what produces the value.

If I am working with a team that has never threat-modelled before, I default to STRIDE, applied to a one-page data-flow diagram, with a strict 90-minute time-box. The output is rough, the categories overlap, the diagram is wrong in places — and yet, almost without exception, the discussion surfaces three to five real risks that nobody had explicitly named before. Those three to five are the deliverable. The diagram is throwaway, the document is throwaway; the named risks are not.
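To make the time-boxed session concrete, here is a minimal sketch of what "STRIDE applied to a one-page data-flow diagram" amounts to mechanically: one discussion prompt per diagram element per category. The element names and the prompt wording are illustrative, not canonical STRIDE definitions.

```python
# A minimal sketch of a STRIDE pass over a one-page data-flow diagram.
# The element names and the prompt wording are illustrative, not canonical.

STRIDE = {
    "Spoofing": "Can anyone pretend to be this element or its caller?",
    "Tampering": "Can the data crossing this element be modified in flight?",
    "Repudiation": "Could an action here be denied later, for lack of evidence?",
    "Information disclosure": "What leaks if the wrong party reads this element?",
    "Denial of service": "What happens when this element is flooded or down?",
    "Elevation of privilege": "Can a caller gain rights it should not have here?",
}

# Elements from a hypothetical one-page DFD, written as "source -> sink" flows.
elements = [
    "browser -> API gateway",
    "API gateway -> auth service",
    "auth service -> user DB",
]

def stride_prompts(elements):
    """Yield one discussion prompt per (element, category) pair."""
    for element in elements:
        for category, question in STRIDE.items():
            yield f"[{category}] {element}: {question}"

for prompt in stride_prompts(elements):
    print(prompt)
```

Six categories times a handful of flows is a few dozen prompts, which is why the 90-minute time-box works: most prompts are dismissed in seconds, and the three to five that are not are the deliverable.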

Trust boundaries are the thing

If I had to pick one heuristic to keep and throw the rest away, it would be: every interesting threat lives at a trust boundary. A trust boundary is a line on the diagram across which the level of trust changes — between the internet and your perimeter, between an authenticated user and an unauthenticated one, between two services with different access levels, between a developer machine and a production system. Threats happen because something crosses one of those lines without being properly checked, transformed, or constrained.

If you can identify the trust boundaries with confidence, the threats almost name themselves. If the team cannot agree on where the boundaries are, the threat model is impossible — you have a more fundamental architecture conversation to have first, and you should have it before continuing.
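The heuristic above can be made almost mechanical: label each element of the diagram with a trust zone, and the flows whose endpoints sit in different zones are the ones worth discussing first. The zone names and flows below are hypothetical.

```python
# A sketch of the "every interesting threat lives at a trust boundary" heuristic.
# Zone names and flows are hypothetical; the point is that flows whose endpoints
# sit in different trust zones are the ones worth discussing first.

TRUST_ZONES = {
    "browser": "internet",
    "api_gateway": "perimeter",
    "order_service": "internal",
    "orders_db": "internal",
    "dev_laptop": "workstation",
    "prod_cluster": "production",
}

flows = [
    ("browser", "api_gateway"),
    ("api_gateway", "order_service"),
    ("order_service", "orders_db"),
    ("dev_laptop", "prod_cluster"),
]

def boundary_crossings(flows, zones):
    """Return the flows that cross a trust boundary, i.e. change trust zone."""
    return [(src, dst) for src, dst in flows if zones[src] != zones[dst]]

for src, dst in boundary_crossings(flows, TRUST_ZONES):
    print(f"boundary crossing: {src} ({TRUST_ZONES[src]}) "
          f"-> {dst} ({TRUST_ZONES[dst]})")
```

Notice what the disagreement case looks like in this framing: if the team cannot agree on the zone labels, the function has no meaningful input, which is exactly the "more fundamental architecture conversation" the text describes.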

Writing the model down

The format I have settled on is dull and unfashionable: a short Markdown document, in the same repo as the system being modelled, with five sections. Assets (what we are protecting and why it matters), actors (who might want to break it, and what they would gain), trust boundaries (where they are and what flows across them), threats (named and explicit, ideally tagged to a framework like STRIDE), and mitigations (one per threat, each with an owner and a verification step).

The entire document fits on a tablet screen. It is reviewable in a single PR. It is updated when the architecture changes, in the same commit as the architecture change. It does not live on a wiki, because wikis are where threat models go to die.
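A skeleton of that five-section document might look like the following. The system, asset, and team names are placeholders, not a recommendation of content.

```markdown
# Threat model: <system name>

## Assets
- Customer order history: drives fraud and privacy risk if disclosed.

## Actors
- External fraudster: wants payment data; motivated, low effort tolerance.

## Trust boundaries
- Internet -> API gateway: all external traffic crosses here.

## Threats
- T1 (STRIDE: Tampering): order totals modified between gateway and service.

## Mitigations
- T1: mutual TLS between gateway and order service.
  Owner: team-payments. Verified by: certificate check in the CI pipeline.
```

Because it is plain Markdown in the same repo, a change to the architecture and the matching change to this file can land in one reviewable commit.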

When the threat model says no

The hardest threat models I have run have been the ones where the conclusion was that the proposed design was not safely buildable in the time available, because the trust boundaries were arranged in a way that no reasonable amount of mitigation would secure. Those conclusions are not popular. They are also the most valuable single output a threat-modelling practice can produce, because the cost of saying "not yet" before the build is one round of redesign; the cost of saying it after the build is a recall.

Building the organisational muscle to hear "not yet" gracefully is, in some sense, the deeper deliverable of a threat-modelling programme. The framework is incidental; the willingness to have the conversation is what makes the difference.

A final practical note

The single most effective intervention I have ever made on a threat-modelling practice was to insist that the mitigations section name a real owner and a concrete verification step. "We will validate inputs" is not a mitigation; it is a wish. "Engineering team Foo will deploy schema validation library Bar at the API gateway, verified by the existing fuzz harness in pipeline Baz, by Sprint 14" is a mitigation. The discipline of forcing the second style is what stops the document being decorative.
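That insistence is easy to automate. Here is a sketch of a check that rejects wishful mitigations by requiring every entry to name an owner and a verification step; the field names and example entries are illustrative, not a schema the text prescribes.

```python
# A sketch of a check that rejects wishful mitigations: every mitigation entry
# must name an owner and a verification step. Field names are illustrative.

def lint_mitigations(mitigations):
    """Return a list of problems; an empty list means every entry is actionable."""
    problems = []
    for m in mitigations:
        for field in ("threat", "action", "owner", "verified_by"):
            if not m.get(field, "").strip():
                problems.append(f"{m.get('threat', '<unnamed>')}: missing '{field}'")
    return problems

# A wish: no owner, no verification step.
wish = {"threat": "T1", "action": "We will validate inputs",
        "owner": "", "verified_by": ""}

# A mitigation: named owner, named verification step.
mitigation = {
    "threat": "T2",
    "action": "Deploy schema validation at the API gateway",
    "owner": "team-foo",
    "verified_by": "fuzz harness in the release pipeline",
}

print(lint_mitigations([wish, mitigation]))
```

Run as a pre-merge check on the threat-model file, this turns the house rule into a gate rather than a plea.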

Threat models are not magic. They are a small, sharp practice that pays off, in proportion to the discipline you bring to it, over many iterations. Like most security work.

Related reading

If this piece was useful, the skills page groups all ten companion articles by area of practice, and the experience page covers the engagements that shaped the practice.