What I expect to be reading about in 1999

Back at the keyboard after a quiet week. The new year always tempts me to write a predictions post, which is a foolish thing to do publicly. I am going to do it anyway, because the discipline of putting expectations on paper is the only honest way to find out, twelve months from now, where my mental model was wrong.

Five things I think 1999 will be about, in the corner of computing I am paying attention to.

1. The community ruleset for Snort matures

Snort has been in the wild for under three months. There is already a small but active community sharing rules. By spring I expect this to coalesce into a recognisable corpus — probably an organised distribution somebody is maintaining, possibly under the project itself.

The interesting question for me is rule quality. The first generation of rules tends to be over-broad: catch a string that looks bad, alert. This generates a lot of false positives. The maturation curve, in any pattern-matching system, is the same — rules become more specific, more contextual, with negative conditions to exclude known-good traffic. I expect to spend a lot of evenings rewriting my own rules along this curve.
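To make the curve concrete, here is a sketch of the shape I mean, written in roughly the rule syntax the current Snort release accepts. The addresses and message strings are placeholders of my own invention, not rules anyone is actually distributing.

```
# First-generation rule: alert whenever "phf" appears in web traffic.
# Over-broad -- it fires on any page that merely mentions the string,
# including mailing-list archives discussing the vulnerability.
alert tcp any any -> any 80 (msg: "phf in web traffic"; content: "phf";)

# Further along the curve: match the actual request path, inbound only,
# and only against my own web servers (10.1.1.0/24 is a placeholder).
# The same probe still alerts; a page about phf no longer does.
alert tcp any any -> 10.1.1.0/24 80 (msg: "phf probe"; content: "GET /cgi-bin/phf";)
```

The second rule is strictly narrower than the first, which is the whole maturation curve in two lines: every refinement trades a little recall for a lot of precision.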

There is also the architectural question of what happens when the rule count gets large. My machine — a Pentium 75 with 32 MB of RAM — already feels strained at a few hundred rules running against my own modest traffic. Either the engine gets faster, or rules get smarter, or we move to dedicated hardware for serious deployments. I think all three.
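A back-of-envelope way to see the strain. This is a deliberately naive sketch of my own, not a claim about how Snort's engine actually works internally: if every rule's content string is searched for in every payload, per-packet cost grows linearly with the rule count.

```c
#include <string.h>

/* Naive detection loop: search every packet payload for every rule's
 * content pattern in turn. Cost is roughly (rule count) x (payload
 * bytes) per packet, which is why a few hundred rules is already real
 * work for a Pentium 75. Smarter engines share work across rules
 * instead of scanning the payload once per rule. */
int count_matches(const char *payload, const char *patterns[], int n)
{
    int i, hits = 0;
    for (i = 0; i < n; i++)
        if (strstr(payload, patterns[i]) != NULL)
            hits++;
    return hits;
}
```

Doubling the rule count doubles the work in this model, so "rules get smarter" and "the engine gets faster" are really two attacks on the same multiplication.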

2. Honeypots become a category

Fred Cohen's Deception Toolkit was the proof that the idea was operationally tractable. I do not think DTK is going to be the only such tool by year-end. There are murmurings on the firewall-wizards list about more sophisticated emulation, and Lance Spitzner's writing about deception at Sun has the feel of someone who is about to do something organisationally serious.

The real shift, though, is going to be conceptual. Honeypots are still classified as a research curiosity. The argument that they are a production tool — that any operator who has a perimeter should also have something on it that is meant to be probed — is going to get louder. I think the first organisations will adopt them seriously this year.

3. Distributed denial-of-service becomes a thing people have heard of

Until now, denial of service has been mostly single-source. SYN floods from one machine. Ping floods from one machine. Mostly identifiable, mostly blockable.

The pieces are visibly assembling for something different. Compromised hosts coordinated to attack a single target, controlled remotely. The community has been talking about this in theory for at least a year. The tools to do it in practice are starting to appear in private. By year-end I would not be surprised to see a public DDoS tool with hundreds of compromised hosts behind it, hitting a single target hard enough to be news.

The defensive question this raises is unsolved. Source filtering at the perimeter — which is how you handle single-source DoS — does not work when the sources are thousands of legitimate-looking IPs. Capacity is the only answer for the receiver, which means the asymmetric economics get worse.

4. Y2K teaches us something other than what people are saying

The Y2K conversation in the press is about whether systems will survive midnight on the 31st of December. Most will. Some will not, in interesting but mostly survivable ways.

The interesting Y2K story is going to be the security regressions introduced by the remediation work. Every legacy system being patched right now is being modified under time pressure, by teams who have not touched it in years, with reduced testing. That is the textbook recipe for accidentally introducing new vulnerabilities. I expect 2000 and 2001 to see a steady drip of advisories where the proximate cause is a Y2K fix that did one thing and broke another.
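The mechanism is easy to illustrate. The sketch below is invented for the purpose, not drawn from any real advisory: a windowing fix is applied in one place, while a raw two-digit comparison survives somewhere else in the same system.

```c
/* A common remediation: leave stored years as two digits and apply a
 * pivot on expansion. Two-digit values below 50 are taken to mean
 * 20xx, everything else 19xx. This part of the fix is correct. */
int expand_year(int yy)
{
    return (yy < 50) ? 2000 + yy : 1900 + yy;
}

/* Elsewhere, an age check the remediation team never found still
 * compares raw two-digit years. Once the clock reads 00, nothing set
 * in 99 ever looks old: a password set in 1999 is never flagged as
 * due for rotation, because 0 > 99 is false. The Y2K fix above is
 * fine; the security property quietly died anyway. */
int needs_rotation(int set_yy, int now_yy)
{
    return now_yy > set_yy;   /* breaks when now_yy wraps to 00 */
}
```

Nothing here crashes at midnight, which is exactly why it will not show up in the Y2K testing that is being done, and exactly why it will show up in advisories afterwards.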

The wider lesson — that pressure-driven change to old systems is a security event in its own right — is a point I do not see made often enough.

5. The conversation about disclosure intensifies

The argument over full disclosure on Bugtraq is not going away. Vendors hate it. Researchers find it the only mechanism that gets bugs fixed. Users mostly do not have an opinion until something happens to them.

What I expect this year is for the conversation to formalise. Some kind of "responsible disclosure" position will emerge — perhaps a set of community norms about how long to wait between notification and publication, with named exceptions for emergencies. The norm will not satisfy the absolutists on either side, which is how you can tell it is the right norm.

What I am going to do about all of this

Mostly: keep reading. Some: write rules and contribute them upstream. A bit: build my own small honeypot, a step beyond DTK, as a project I can write about month by month.

More on each of these as the year goes on. End-of-year scoring on these predictions is a hostage to fortune I am willing to give.
