Last week I read Dittrich's analysis of Trinoo — also called Trin00 — a new attack tool that has been seen in the wild this month. It is the first widely-discussed example of a distributed denial of service tool: one that uses many compromised hosts in a coordinated attack against a single target.
This is a category change in network attacks. I want to explain what Trinoo does, why distributed attacks are harder to defend against than single-source ones, and what the operational response is going to have to be.
What Trinoo is, mechanically
Trinoo is a three-tier tool.
The attacker controls one or more masters. The masters are themselves usually on compromised hosts — they let the real attacker hide behind them.
The masters control hundreds or thousands of daemons, each running on a separately compromised host. The daemon-to-master communication uses a custom UDP protocol on specific ports.
The daemons are the attack source. When commanded by the master, each daemon launches a UDP flood at a specified target.
The attack is, in protocol terms, a UDP flood: packets sent to the target's address at random destination ports. Because no service is listening on those ports, the target's network stack must generate an ICMP Port Unreachable reply for each packet, and the combined work of receiving the flood and sending the replies overwhelms it. Throughput collapses. Legitimate traffic cannot get through.
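To make the victim-side mechanics concrete, here is an illustrative sketch (my own, not Trinoo's code, with invented port numbers) of the work the target's stack does per packet. The key point is that a flood aimed at random ports almost never hits a listening service, so nearly every packet costs the target an extra ICMP reply:

```python
import random

def handle_udp_packets(packets, listening_ports):
    """For each (src, dst_port) UDP packet, model the stack's response.

    A packet to a closed port costs the target an ICMP Port Unreachable
    reply on top of the work of receiving the packet in the first place.
    """
    unreachables_sent = 0
    for src, dst_port in packets:
        if dst_port not in listening_ports:
            unreachables_sent += 1  # kernel generates ICMP Port Unreachable
    return unreachables_sent

# Random high ports never coincide with the two listening services here,
# so every flood packet triggers a reply.
flood = [("10.0.0.1", random.randint(1024, 65535)) for _ in range(10_000)]
replies = handle_udp_packets(flood, listening_ports={53, 80})
```

The per-packet reply is exactly why the flood is effective: the victim does strictly more work than the sender for every packet received.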
The distinguishing property of Trinoo is scale. A single daemon's flood would be unremarkable. A thousand daemons, all flooding at once, aggregate to enormous bandwidth at the target: a modest stream from each individual source adds up to hundreds of megabits per second arriving on one link. And because the sources are scattered across the internet, the receiving end sees them as legitimately distinct hosts.
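The arithmetic is worth doing explicitly. The numbers below are illustrative assumptions, but they show how an individually unremarkable per-daemon rate swamps a typical victim uplink of today:

```python
# Back-of-envelope aggregation arithmetic; all figures are assumptions.
DAEMONS = 1000            # compromised hosts firing at once
PER_DAEMON_KBPS = 200     # a modest flood, easily lost in a campus uplink
T1_KBPS = 1544            # a common victim uplink

aggregate_kbps = DAEMONS * PER_DAEMON_KBPS    # 200,000 kbps = 200 Mbps
overload_factor = aggregate_kbps / T1_KBPS    # well over 100x the link
```

No single source looks abnormal; the aggregate is more than a hundred times what the victim's link can carry.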
Why this is a different problem from single-source DoS
The traditional defence against denial of service is source filtering. If a single host is flooding you, identify the source IP, install a filter at your perimeter (or, ideally, at your upstream's perimeter), and the flood stops.
This works because the attack has a small number of sources. The filter is a small set of IP-level rules. The bandwidth needed to deliver the filter is much smaller than the bandwidth of the flood being filtered.
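A sketch of why this works, in pseudocode-style Python (the address is invented): the entire defence is one membership test against a tiny, static set, and distributing that rule costs almost nothing compared with the bandwidth it saves.

```python
# Single-source filtering as a tiny rule set; address is illustrative.
FLOOD_SOURCES = {"203.0.113.7"}   # the one identified attacker

def allow(packet_src):
    """Perimeter filter: drop packets from known flood sources."""
    return packet_src not in FLOOD_SOURCES
```

One rule, checked per packet, stops the whole flood. Everything that follows is about why this stops being true.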
Distributed DoS breaks all of this.
There is no small set of sources. A Trinoo-style attack uses hundreds or thousands of distinct, legitimate-looking IPs. The filter is no longer a small list; it is a constantly changing one, because the attacker can rotate which daemons are firing.
The sources are real victims, not the attacker. The hosts running Trinoo daemons are themselves compromised machines belonging to other parties. Filtering them harms innocent hosts; not filtering them does not stop the attack.
The bandwidth asymmetry is the wrong way around. The defender's filter has to process every flood packet to drop it. The attacker's daemons just send packets. If the flood saturates the defender's link, the link is congested even though the filter itself works correctly.
This last point is critical. A filter at the defender's edge does not help if the attack saturates the upstream link feeding that edge. The packets are dropped at the filter, but they were already on the cable, displacing legitimate traffic.
The only filter that works is one applied upstream of the bottleneck — at the ISP or carrier level, where bandwidth is plentiful and the flood traffic can be discarded before it congests the defender's link. This requires the ISP's cooperation, on a timescale shorter than the attack.
What the defensive response needs to be
For an individual operator, the attack is not really defendable. There are mitigations:
Have plenty of bandwidth headroom. If your link is mostly empty, an attack has to be larger to fill it. This is a financial defence, not a technical one — over-provisioning costs money — and the attacker can scale up faster than you can.
Have good relationships with your upstream. When the attack hits, you need your ISP to apply filters at their edge, fast. This requires a known contact, an out-of-band communication channel (because your in-band channel is the one being attacked), and sometimes prior agreement on what filters they will apply.
Have rate limiting at every layer where you control it. If your kernel or firewall can drop packets from sources sending more than some threshold of packets per second, the load on the systems behind the link is at least reduced, even if the link itself is still saturated.
Have an incident response procedure that does not depend on the attacked link. If your in-band management channel is down, you need a different way to talk to the people who can help.
Have logs of normal traffic patterns so you know what "abnormal" looks like during the attack.
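The rate-limiting mitigation above can be sketched as a token bucket per source. The threshold and interface here are my own illustration, not any specific firewall's API:

```python
# Minimal per-source token-bucket rate limiter (illustrative sketch).
class SourceRateLimiter:
    def __init__(self, max_pps):
        self.max_pps = max_pps    # allowed packets per second per source
        self.buckets = {}         # src -> (tokens remaining, last timestamp)

    def accept(self, src, now):
        """Return True if a packet from src at time `now` should be delivered."""
        tokens, last = self.buckets.get(src, (self.max_pps, now))
        # Refill in proportion to elapsed time, capped at the burst size.
        tokens = min(self.max_pps, tokens + (now - last) * self.max_pps)
        if tokens >= 1:
            self.buckets[src] = (tokens - 1, now)
            return True           # within rate: deliver
        self.buckets[src] = (tokens, now)
        return False              # over rate: drop
```

Note the limitation, consistent with the point about saturation: this protects the machines behind the filter, not the link in front of it.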
For the broader internet, the response is harder. The compromised hosts running the daemons are themselves the result of poor security at thousands of operators. Source-address validation — what the IETF calls BCP 38 — would help: if every ISP filtered outgoing traffic to ensure source addresses really do belong to that ISP's customers, spoofed-source attacks become impossible. BCP 38 has been a known recommendation for years; adoption is patchy.
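What BCP 38 asks of an ISP edge is simple to state in code. A sketch, using Python's standard ipaddress module and an invented customer prefix: forward an outbound packet only if its source address actually belongs to one of your customers.

```python
# Sketch of BCP 38 source-address validation at an ISP edge.
# The customer prefix is invented for illustration.
import ipaddress

CUSTOMER_PREFIXES = [ipaddress.ip_network("192.0.2.0/24")]

def valid_source(src_ip):
    """Forward outbound packets only if the source belongs to a customer."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in CUSTOMER_PREFIXES)
```

Spoofed-source packets die at the first hop; legitimate traffic passes untouched. The check is cheap. The problem is not technical but that every ISP has to do it for the benefit to be felt elsewhere.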
More aggressively, distributed defence would help: a way for ISPs to communicate, in real time, about ongoing attacks, and to apply coordinated filters before the attack reaches the bandwidth bottleneck. This does not exist as a standardised mechanism. It is starting to exist informally between the largest carriers.
What I think happens next
My strong guess is that Trinoo is the first of a series. The next few attack tools will refine the protocol, vary the flood type (away from UDP toward SYN, ICMP, and various amplification schemes), and grow in scale.
I wrote in January that distributed denial of service was going to become a thing people had heard of. That prediction is now visibly playing out, faster than I expected. By year-end I would not be surprised to see a major site taken offline for hours or days by a DDoS attack with national-news consequences.
The medium-term defensive response is going to be capacity-based. The largest sites will buy enormous bandwidth headroom and use specialised hardware to absorb floods. The smaller sites will not be able to afford this, and will rely on their upstream's mitigation services — which will become a real product category over the next two or three years.
For the rest of us — for the small operators with a single uplink and modest budget — the answer is, frankly, that there is no good answer. We will be vulnerable to the next decade of distributed attacks because the economics of defence do not yet exist at our scale.
This is going to define the next phase of network security. I will be writing about it for years.