Dyn, the managed-DNS provider, was the target of a sustained DDoS attack throughout Friday that took several large customer services offline for periods ranging from an hour to several hours. Twitter, Netflix, Spotify, Reddit, GitHub, Airbnb, and dozens of other major internet services depend on Dyn for authoritative DNS, and were reachable during the outage only by users whose recursive resolvers happened to have cached recent records. The attack ran in three waves through Friday morning and afternoon US Eastern Time (per Dyn's post-incident statement).
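
To make the caching point concrete: the TTL on a record is the upper bound on how long a recursive resolver that already holds the answer can keep serving it once the authoritative servers stop responding. A minimal sketch of checking that window, assuming the third-party dnspython package and using example.com purely as a placeholder name:

```python
# A minimal sketch, assuming the third-party dnspython package (pip install dnspython).
# It asks the system's configured recursive resolver for a name's A record and
# reports the TTL: roughly the longest a resolver that cached this answer could
# keep serving it after the authoritative servers stopped responding.
import dns.resolver  # third-party: dnspython

def report_cache_window(name: str) -> None:
    answer = dns.resolver.Resolver().resolve(name, "A")
    addresses = ", ".join(rr.address for rr in answer)
    ttl = answer.rrset.ttl
    print(f"{name} -> {addresses} (TTL {ttl}s)")
    print(f"A resolver holding this answer can serve it for up to {ttl}s more without the authoritative servers.")

if __name__ == "__main__":
    report_cache_window("example.com")  # placeholder name
```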

The attack vector was a DDoS against Dyn's authoritative DNS infrastructure, sourced predominantly from Mirai-controlled IoT devices. The peak traffic volume has been estimated at 1.2 terabits per second by some sources and somewhat lower by others; Dyn has not published a definitive figure, and public reports do not cleanly separate the volume aimed directly at Dyn from the volume reflected through Dyn against other targets. What is clear is that the attack volume materially exceeded that of the Krebs attack a month earlier, and that the IoT-device population producing these attacks has grown.

The Mirai source code was published on Hackforums on 1 October (anna-senpai post archive at malwaretech), which has had the predictable effect of producing multiple independent Mirai-derived botnets operated by different actors with different targeting preferences. The "anna-senpai" release post described the publication as a deliberate withdrawal from the operation, but the more consequential effect has been the proliferation of derivatives. The Dyn attack, on the early analysis published this weekend by Flashpoint and Akamai, appears to have been driven by at least one Mirai-derived botnet. Whether the actor is the original Mirai operators, one of the derivative operators, or an independent group using the published source remains unclear.

The infrastructure question is the part that needs writing about. DNS is, structurally, a thin layer that sits between users and most internet services, and the failure of any major DNS provider cascades across that provider's customer base. Dyn's customer base includes a substantial fraction of the internet's most-used services. The architectural decision by those services to rely on a single DNS provider, rather than splitting authoritative DNS across multiple providers (Cloudflare and Amazon Route 53 being obvious candidates for a second), has for years been the convenient default. Friday's outage demonstrates the cost of that default. The remediation, where the customer organisation can implement it, is multi-provider DNS: delegating to authoritative nameservers from two or more independent providers, so that an outage at one provider degrades but does not eliminate name resolution. Several of the major Dyn customers were, by Friday afternoon, activating exactly this kind of failover. Several others did not have it configured.
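
A quick way to see which side of that default a given zone sits on is to look at its delegation. The sketch below is a rough check, again assuming dnspython; the grouping heuristic (last two labels of each nameserver hostname) and the example.com zone are illustrative assumptions, not a precise provider-identification method.

```python
# A rough sketch, assuming the third-party dnspython package (pip install dnspython).
# It looks up a zone's NS records and groups the nameserver hostnames by their
# parent domain, as a crude proxy for how many independent DNS providers serve
# the zone. One group is the single-provider dependence described above.
from collections import defaultdict
import dns.resolver  # third-party: dnspython

def provider_groups(zone: str) -> dict:
    groups = defaultdict(list)
    for rr in dns.resolver.resolve(zone, "NS"):
        host = rr.target.to_text().rstrip(".")
        # Crude heuristic: treat the last two labels as the provider's domain.
        provider = ".".join(host.split(".")[-2:])
        groups[provider].append(host)
    return dict(groups)

if __name__ == "__main__":
    groups = provider_groups("example.com")  # placeholder zone
    for provider, hosts in groups.items():
        print(f"{provider}: {', '.join(hosts)}")
    if len(groups) < 2:
        print("Delegation sits with a single provider: an outage there removes name resolution entirely.")
```

Run across a customer estate, the same check gives a first-pass list of zones that still depend on a single provider.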

For our customer organisations, the operational question is how each customer's external DNS architecture is structured. Browne Jacobson uses a major UK-headquartered DNS provider whose global anycast network is separate from Dyn's; that kept it clear of Friday's specific attack but offers no guarantee against a future attack on that provider. Towry runs a hybrid arrangement, primary at one provider and secondary at a second, which is appropriate for the trading platform's availability requirements and resilient to single-provider outages. Northcott's external services are more limited and use a smaller specialist provider; the smaller-provider question is interesting in itself, because while smaller providers are less likely to attract the largest-scale attacks, they are also less able to absorb the attacks they do attract. The manufacturing client's external posture has not had the attention that the financial-services and legal estates have had, and is an action item for next week.

The wider DDoS-resilience conversation is shifting. The volumetric scale of attacks that the Mirai-class botnets can produce has, this autumn, exceeded the capacity of all but the largest infrastructure providers to absorb. The implication is that the bar for "adequate" DDoS protection has moved up substantially. Self-hosted mitigation, even with substantial bandwidth provisioning, is no longer adequate. CDN-level protection is, for most internet services, the baseline. Organisations that would attract the largest attacks need specialist DDoS-protection services with global anycast capacity. The cost question this raises for organisations whose budgets were not planned around the new threshold will be live in board conversations for the rest of the year.

The political question, separately, is whether the IoT-device population that produces these attacks will be addressed structurally. The Friday attack will probably accelerate the regulatory conversations I noted in the Mirai-Krebs post. The European Commission's Cybersecurity Package consultation, the FTC's IoT-security workshop output, the various congressional discussions in the US: the pace of those conversations needs to match the pace of the threat, and at the moment it does not. The IoT-device population is growing substantially faster than any regulatory framework is catching up.

The next several days will produce more analysis. I will write more as it lands.

