Last Tuesday my home internet connection went off the air for about six hours. The cause turned out to be a small DDoS aimed at the IP range that includes my honeypot. My first personal experience of being on the receiving end. Worth writing about.
What happened
At about 14:00 BST I noticed my mail had stopped delivering. A ping to my own external IP failed. Nothing seemed to be working from outside.
From inside (the local network), things were normal — DNS resolved, my LAN was up, my hosts could talk to each other. The problem was with my external connectivity.
I called my ISP. After about thirty minutes in the hold queue, I reached someone who could see my line. They confirmed the line was healthy at their end, but that traffic to and from my range was being swamped: they were observing what they described as "a fairly large UDP flood" aimed at my IP range.
They applied an emergency filter at their edge — silently dropping the flood traffic before it reached my line — and within about ten minutes my connectivity was usable again. The filter remained in place for several hours; the flood continued at varying intensity for about six hours total before stopping.
What it looked like to me
From my perspective at home, what I could see:
- During the attack: nothing. My link was saturated by the inbound flood; my outbound traffic could not get out; I could not even confirm what was happening, because I could not reach any external diagnostic services.
- After the ISP filter was in place: I could see my own structured logs catching up. The honeypot's tcpdump capture had recorded the flood for as long as the disk could keep up with the rate: about 90 minutes of capture before disk pressure forced me to cycle the recording. The attack was a UDP flood from approximately 200 distinct sources, aimed at randomised destination ports on the honeypot's IP and a few other IPs in my range (a sketch of the sort of counting this took follows this list).
- After the attack ended: a fuller picture. In the honeypot's logs I found that someone had attempted to compromise it via several known vulnerabilities the day before; each attempt had been blocked. The connection between the failed compromise and the subsequent flood is, on the timing, suggestive: someone was unhappy that they could not get into the honeypot and decided to flood the IP range in retaliation.
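The analysis itself was nothing clever: count sources, count ports. A rough sketch of the sort of thing I mean, assuming the capture has been dumped to text with tcpdump -nn (the file name here is a placeholder, and the exact output format varies between tcpdump versions):

```python
# Rough post-hoc counting over a tcpdump text dump (file name is a placeholder).
# Output format varies between tcpdump versions; this expects lines like:
#   14:02:11.532 IP 203.0.113.45.1337 > 192.0.2.10.40512: UDP, length 1024
import re
from collections import Counter

line_re = re.compile(r"IP (\d+\.\d+\.\d+\.\d+)\.\d+ > \d+\.\d+\.\d+\.\d+\.(\d+): UDP")

sources = Counter()
dest_ports = Counter()

with open("flood.txt") as capture:          # e.g. tcpdump -nn -r capture.pcap > flood.txt
    for line in capture:
        match = line_re.search(line)
        if match:
            src_ip, dst_port = match.groups()
            sources[src_ip] += 1
            dest_ports[dst_port] += 1

print(len(sources), "distinct source addresses")
print("busiest sources:", sources.most_common(10))
print(len(dest_ports), "distinct destination ports")   # a wide, flat spread => randomised ports
```

A couple of hundred recurring source addresses against a flat spread of destination ports is not a pattern that legitimate traffic produces.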
This is, on the available evidence, a small grudge attack. Not commercial, not opportunistic, just someone annoyed and able to deploy a few hundred Stacheldraht-style daemons against me. Modest by Mafiaboy standards but more than enough to take my home line offline.
What worked in the response
A few things, written down for my own future use.
Having an out-of-band channel to my ISP. I had the NOC's direct number written down, the account number memorised, and a basic understanding of what they could and could not do. Without these I would have spent the entire six hours on hold.
Being able to articulate the technical situation. When I reached the operator, I could describe the symptoms in network terms: "UDP flood, multiple sources, my IP range". This helped them go straight to the right diagnosis.
Pre-existing relationship. I have been with this ISP for several years, have a small business account, and they recognise me as a technically competent customer. The filter they applied was something they would not have applied for a residential customer; the request from a known business customer with a clear technical problem was easier to act on.
Off-host logs. Most of my structured logs are forwarded to a friend's machine in another city. When my link was down, the logs were still being recorded: both the catch-up of buffered events from before the attack and the (much-reduced) trickle of events that made it through during the attack itself. Without off-host logging, I would have lost most of the diagnostic data.
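For anything that already speaks syslog, remote forwarding in syslog.conf does the job; for the rest, the shape of the thing is no more than a tail-and-push loop. A minimal sketch, with the remote host, port and log path invented for illustration rather than taken from my actual setup:

```python
# Minimal off-host log forwarder (a sketch of the idea, not my actual setup).
# The remote host, port and log path are invented for illustration.
import socket
import time

REMOTE = ("logs.friend.example.org", 5140)
LOGFILE = "/var/log/messages"

def connect():
    # Keep retrying: during an attack the link may be down for hours.
    while True:
        try:
            return socket.create_connection(REMOTE, timeout=10)
        except OSError:
            time.sleep(30)

def follow(path):
    # Yield new lines appended to the log, starting from the current end.
    with open(path) as logfile:
        logfile.seek(0, 2)
        while True:
            line = logfile.readline()
            if line:
                yield line
            else:
                time.sleep(1)

conn = connect()
for entry in follow(LOGFILE):
    try:
        conn.sendall(entry.encode())
    except OSError:
        conn = connect()            # remote unreachable: reconnect, then resend
        conn.sendall(entry.encode())
```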
What did not work
Three things that I now want to fix.
My response checklist was incomplete. I had a list of "in case of incident" actions that I had written but never run as a drill. When the actual incident hit, several items on the list turned out to be wrong: the contact number had changed (I had not updated it), the account number was for an account I had since closed (I had not noticed), and one of the diagnostic tools relied on outbound connectivity (which I did not have during the attack).
I had no way to communicate during the outage. My phone is on a different network, but my email was on my own infrastructure. Friends trying to reach me to ask if I was OK could not. I had no public status page that would have stayed up. The communication failure was the most uncomfortable part of the event.
My honeypot monitoring had a small gap. When I examined the honeypot's logs after the attack, I found that the flood had also hit the honeypot's IP, not just my main IP. The honeypot handled it correctly (the firewall dropped everything), but my visibility into it is reduced when the network is congested, and some events that I would normally have seen were lost. I can mitigate this by routing the honeypot's logs over a different network path, which I will set up.
What I have changed
A short list:
- Updated my response checklist with current contact information and account numbers, and removed the steps that depend on outbound connectivity.
- Set up a status page on infrastructure outside my own network: a small static page on a friend's web server that I can update via SSH whenever I have any connectivity at all. The page says what is up and what is down, and is where I will direct people during outages (a sketch of the update script follows this list).
- Added an alternative communication route: a mobile-phone-based out-of-band SSH connection to a friend's machine, so I can update the status page even when my main link is fully down.
- Routed honeypot logging via a different ISP, through an SSH tunnel to a friend's network, so honeypot data continues to be captured even when my main link is congested.
- Talked to my ISP about emergency-filter procedures: what kinds of attack they will filter, on what timescale, and what authentication procedure I need to follow to request a filter. The conversation produced a documented procedure I now have a copy of.
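The status page is the only one of these that needed anything resembling code, and even then only a handful of lines. A sketch of the sort of update script I mean, with the remote host, path and page contents invented for illustration:

```python
# Push a status update to the off-network status page (sketch only).
# The remote host, path and template are placeholders, not my real setup.
import subprocess
import sys
import time

REMOTE = "friend.example.org:/var/www/status/index.html"

TEMPLATE = """<html><body>
<h1>Status</h1>
<p>{message}</p>
<p>Last updated {when}, manually, over whatever link was available.</p>
</body></html>
"""

def push(message):
    page = TEMPLATE.format(message=message, when=time.strftime("%Y-%m-%d %H:%M"))
    with open("status.html", "w") as local:
        local.write(page)
    # scp rides on whatever SSH route is up, including the phone-based one.
    subprocess.run(["scp", "status.html", REMOTE], check=True)

if __name__ == "__main__":
    push(" ".join(sys.argv[1:]) or "All services up.")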
None of these are dramatic. All of them are small, cheap, useful improvements. The cumulative effect is to make the next incident less disruptive than this one was.
A small reflection on what this proves
The attack was small. The flood was probably in the 50-100 Mbit/s range, modest by 2000 standards, but still enough to take my residential-business connection offline.
This is the structural problem with capacity-based defence: my link is not big enough to absorb even a small distributed attack, so the capacity has to sit upstream of my link, at the operator level. My ability to respond therefore depends entirely on the ISP's capability and willingness.
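To put rough numbers on that (the link capacity below is an assumed round figure, not my actual line speed; the exact value does not change the conclusion):

```python
# Back-of-envelope: flood size against link capacity.
# LINK_MBPS is an assumed round number, not my actual line speed.
LINK_MBPS = 2.0

for flood_mbps in (50, 100):
    print(f"{flood_mbps} Mbit/s flood vs {LINK_MBPS} Mbit/s link: "
          f"oversubscribed roughly {flood_mbps / LINK_MBPS:.0f}x")
```

At any plausible figure for a home line, the flood wins by more than an order of magnitude; the only place the arithmetic works is upstream, where the operator's capacity exceeds the flood's.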
For my scale of operation, this is acceptable. My ISP applied the filter; the disruption was bounded; the attack stopped before becoming serious. For a larger operation that required uptime, the same shape of attack would have been a different kind of incident.
The whole event was, in retrospect, a useful exercise. I have been writing about DDoS for over a year; now I have a small personal experience to add, and the writing from here on will be slightly better grounded than what came before.
More as the year develops. Back to regular technical content next.