Threat hunting without the marketing

If you ask ten vendors what threat hunting is, you will get ten dashboards. None of those dashboards will tell you anything useful about what to do on a Wednesday afternoon when somebody has decided you should be hunting. This piece is for the Wednesday afternoon.

Hunting is hypothesis-led

The single biggest lie about hunting is that it is exploratory. Hunting is not poking around. Hunting is the deliberate act of testing a specific hypothesis about adversary behaviour against your own telemetry. The hypothesis matters more than the tooling. Without a hypothesis, you are not hunting; you are doing log archaeology, and although log archaeology is sometimes valuable, it is not what you should be billing as a hunt.

A workable hypothesis has three properties. It is specific enough to be falsifiable on real data. It is small enough that you can answer it in a working session, not a quarter. And it is grounded in the threat profile of the organisation you are actually defending — not in a generic ATT&CK technique, but in something a plausible adversary against your estate would actually do.

The shape of a hunt

The shape I follow is roughly: premise → expected signals → query → result → outcome. The premise is the one-line statement of what you are looking for. The expected signals are the events you would expect to see if the premise were true on your estate. The query is the actual data work — SIEM, EDR, NDR, logs, whatever your environment offers. The result is what came back. The outcome is one of three things: no evidence found, hypothesis retired; no evidence found, but the data was not there; or evidence found, escalating to incident response or to detection engineering.
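The five-part shape above can be captured as a small record. A minimal sketch in Python; every name here is illustrative, not a prescribed schema, and the query is a placeholder rather than a real SIEM query:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Outcome(Enum):
    """The three valid ways a hunt can end."""
    RETIRED = "no evidence found, hypothesis retired"
    TELEMETRY_GAP = "no evidence found, but the data was not there"
    ESCALATED = "evidence found, escalated to IR or detection engineering"

@dataclass
class Hunt:
    premise: str                       # one-line statement of what we are looking for
    expected_signals: list[str]        # events we would expect if the premise were true
    query: str                         # the actual data work (SIEM, EDR, NDR, logs)
    result: str = ""                   # what came back
    outcome: Optional[Outcome] = None  # filled in when the hunt closes

# Illustrative instance; all details hypothetical.
h = Hunt(
    premise="An adversary is using scheduled tasks for persistence on our servers",
    expected_signals=["task-creation events", "new binaries referenced by task actions"],
    query="<your SIEM query here>",
)
h.result = "No matching events; task-creation logging is not enabled on servers"
h.outcome = Outcome.TELEMETRY_GAP
```

The point of the record is not the code; it is that every field must be filled in before the hunt counts as done, including the outcome.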

All three outcomes are valuable. The middle one is, in my experience, the most valuable for a defensive programme over time, because it gives you a hard, evidence-backed claim to take to your data engineering team: "we tried to hunt for X, here is the query, here is the gap, please fix the underlying telemetry." Far more useful than a generic "we need better logging" request.

Where hypotheses come from

Good hypotheses come from three places, roughly in this order of value. First, the post-incident reviews of your own previous incidents. The behaviour you have already seen is the behaviour you are most likely to see again, and the behaviour your tooling demonstrably failed to catch the first time. Second, the threat reports that genuinely apply to your sector and estate, read carefully and translated into your environment. Third, MITRE ATT&CK, used as a checklist against which to ask "would we currently see this?" Note that the third source is the lowest-leverage one, even though it tends to dominate the literature. Your own incidents are gold; your sector reports are silver; the ATT&CK matrix is a useful index, not a strategy.

The discipline of writing it down

A hunt that lives only in someone's head is a hunt that did not happen. Every hunt I have ever found durable value in had a small write-up: premise, queries, what we found, what we did not, decision. Two pages, maximum. Stored somewhere a future analyst can find it. The collection of those write-ups becomes, over years, the most concentrated piece of institutional knowledge a defensive function has — better than any ATT&CK heatmap, because it is genuinely your story rather than a generic one.
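As a starting point, the write-up can be as plain as the following skeleton. The headings are illustrative, assuming nothing about your tooling; the only rule is that none of them may be left blank:

```
Hunt write-up — <one-line premise>
Date / analyst:
Premise:   what we believed an adversary might be doing here, and why.
Queries:   the exact queries run, with data sources and time ranges.
Found:     what came back, including benign oddities worth recording.
Not found: what we expected but did not see, and whether the data existed at all.
Decision:  retired / telemetry gap raised / escalated to IR or detection engineering.
```

Two pages is the ceiling, not the target; most of these fit on one.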

Without that discipline, hunts have a tendency to be re-run by different people, finding the same dead ends, never quite making it into either the detection library or the "known and accepted" file. The write-up is what compounds.

When a hunt finds something

When a hunt does find something, the first move is to slow down. Hunts find odd things on a regular basis, and most of those odd things are benign — quirky software, scheduled jobs, a developer doing something unusual. The job is to determine which bucket you are in before you act. If it is benign, document it (so the next analyst does not waste a day re-discovering it). If it is suspicious, hand it cleanly to incident response. If it is reproducible, hand it cleanly to detection engineering. The hunt itself stops at we found something worth handing over; it does not turn into either of the downstream activities by accident.
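The three buckets above can be made mechanical, which helps when the adrenaline is arguing otherwise. A sketch, with hypothetical field names; the checks run in this order deliberately, so that a benign explanation short-circuits everything else:

```python
def route_finding(finding: dict) -> str:
    """Decide where a hunt finding goes next. Keys are illustrative."""
    if finding["benign"]:
        # Document it so the next analyst does not waste a day re-discovering it.
        return "document as known-and-accepted"
    if finding["suspicious"]:
        # Hand off cleanly; the hunt does not turn into the incident.
        return "escalate to incident response"
    if finding["reproducible"]:
        # A reliable signal is detection engineering's raw material.
        return "hand to detection engineering"
    return "needs more triage"
```

Whatever form this takes in practice, the hunt's job ends at the return statement: handing over, not following through.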

The shape of a hunting programme

A modest programme runs perhaps two scheduled hunts a week, with quarterly reviews of which hypotheses are still live, which have been converted to detections, and which have been retired. The output over a year is a hundred-odd write-ups, twenty or so new detections, a clear inventory of telemetry gaps, and a much better intuition across the team for what the estate actually does. None of that is photogenic. All of it is durable.

Hunting, done properly, is one of the cheapest defensive activities to run, because it requires almost no new tooling. It just requires discipline, time, and a willingness to write things down.

Related reading

If this piece was useful, the skills page groups all ten companion articles by area of practice, and the experience page covers the engagements that the practice was shaped by.