About a fortnight ago I came across a small mailing list called Wargames, started by Lance Spitzner at Sun, for people interested in deploying honeypots in operational settings. The list is not a public Bugtraq-style firehose; it is quiet and invitation-only, with about forty subscribers when I joined, including some names from the deception-and-counterintelligence side of the discipline that I had not seen contribute publicly before.
This is the kind of group that is going to matter. I want to write about why, and about what the conversations on it are doing to my own thinking.
What the list is
Lance has been quietly running honeypots at Sun for a year, mostly to gather intelligence about who is probing the network and how. The Wargames list emerged out of his realisation that several other practitioners — people working in academia, at vendors, at large enterprises — were doing similar work in isolation, with no shared experience to draw on.
The list is, in practice, the first serious operational community for honeypots. The early discussions have been about:
- How to make a honeypot interesting enough to attract real attackers, not just port scanners.
- How to log what an attacker does after they have gained shell access, without the attacker noticing the logging mechanism.
- The legal questions: am I allowed to log what an attacker does on my own machine? What if their actions originate from somewhere with stricter privacy laws than mine?
- The ethical questions, more pointed than the legal ones: am I obliged to clean up after the attacker has left? Do I let them continue to use my honeypot to launch attacks against third parties? If not, how do I stop them without making the honeypot less useful?
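On the second of those questions, the crude baseline is to wrap the attacker's session in a pseudo-terminal and tee everything it produces to a log. The sketch below is mine, not something from the list; the function name and log path are invented. It captures only the output side, and an attacker who glances at the process list would spot a wrapper like this immediately, which is precisely why the list's discussion centres on stealthier mechanisms.

```python
import os
import pty
import subprocess

def logged_session(argv, logfile="session.log"):
    """Run argv on a pseudo-terminal, appending every byte it writes
    to logfile.  Output side only: a real wrapper would also relay and
    record keystrokes, and would hide itself far better than this."""
    master, slave = pty.openpty()
    proc = subprocess.Popen(argv, stdin=slave, stdout=slave,
                            stderr=slave, close_fds=True)
    os.close(slave)  # only the child holds the slave end now
    with open(logfile, "ab") as log:
        while True:
            try:
                data = os.read(master, 1024)
            except OSError:   # raised when the child closes its end
                break
            if not data:
                break
            log.write(data)
    os.close(master)
    return proc.wait()
```

The pty is what makes the capture transparent to programs that check whether they are on a terminal; logging a plain pipe would change the session's behaviour and give the game away even sooner.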
These are not, on the whole, the questions a DTK installation raises. DTK is a low-interaction honeypot — it emulates a service well enough to be probed but does not let attackers in. The Wargames list is mostly people running high-interaction honeypots: real systems, real shells, with the attacker actually compromising them, and the operator watching everything they do.
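To make the low-interaction idea concrete, here is a minimal sketch in the DTK style, though it is not DTK itself: a listener that presents a plausible banner, records whatever the probe sends, and never executes anything. The banner, port, and log path are all invented.

```python
import datetime
import socket

# An invented sendmail-style banner; DTK ships its own emulations.
BANNER = b"220 mail.example.com ESMTP Sendmail 8.9.3/8.9.3; ready\r\n"

def serve_once(host="127.0.0.1", port=2525, log="probes.log"):
    """Accept a single connection, answer with the fake banner, log
    what the client sends, and hang up.  Nothing is ever executed."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, addr = srv.accept()
        with conn:
            conn.sendall(BANNER)
            data = conn.recv(1024)   # the probe's opening move
    with open(log, "a") as f:
        f.write("%s %s:%d %r\n" % (
            datetime.datetime.now().isoformat(), addr[0], addr[1], data))
    return data
```

The whole transaction is banner, probe, log, disconnect. That is the breadth-not-depth bargain in miniature: cheap and safe, but the conversation ends exactly where the interesting part of a real compromise begins.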
What that changes
Low-interaction honeypots are mostly an automated phenomenon. Attackers who hit them are running scripts. The script does its probe, the honeypot answers, the script moves on. The data you get is breadth — many sources, similar shape.
High-interaction honeypots are a different beast. The attacker who compromises a high-interaction honeypot is, by definition, not running a script. They have a shell. They look around. They pull tools onto the system. They try to escalate or persist or pivot. Every one of those actions is enormously informative — it shows you what an attacker who has succeeded actually does next.
This is data nobody else has. Outside a honeypot, an operator's experience of post-compromise activity is limited to the rare occasions when their own production systems are compromised. Honeynets, properly run, give you the same data on tap.
The trade-off is operational complexity. A high-interaction honeypot is a real machine doing real things, including possibly real harm if you let the attacker pivot through it. The setup needed to safely contain that — to log everything but allow nothing harmful out — is non-trivial.
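One containment policy that keeps the honeypot useful is to permit a handful of outbound connections per hour (enough that the network does not feel dead to the attacker) and silently drop the rest. Stripped of the packet-filter plumbing, the policy is just a windowed counter; the class name, limit, and window below are my own invention, not a recommendation.

```python
import time

class OutboundGate:
    """Allow at most `limit` outbound connections per `window` seconds
    and refuse the rest.  The thresholds here are illustrative only:
    tight enough to stop a flood, loose enough to look alive."""
    def __init__(self, limit=5, window=3600, clock=time.monotonic):
        self.limit = limit
        self.window = window
        self.clock = clock
        self.starts = []  # timestamps of connections already allowed

    def allow(self):
        """Return True if a new outbound connection may proceed."""
        now = self.clock()
        # Forget connections that have aged out of the window.
        self.starts = [t for t in self.starts if now - t < self.window]
        if len(self.starts) < self.limit:
            self.starts.append(now)
            return True
        return False
```

The injectable clock is just there to make the behaviour testable; in a deployment this logic would sit in the packet filter between the honeypot and the world, not in the honeypot itself.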
A specific thread that has stuck with me
One of the most interesting threads on the list this week was about not alerting the attacker to the fact they are in a honeypot.
A real production system has a particular texture. There are users logged in, doing things. There are cron jobs running. There are mail spool files that have actual mail in them. There is ~/.bash_history for each user, with old commands. There is a regular pattern of CPU utilisation, network activity, and process churn.
A freshly-installed honeypot has none of this texture. The first thing a competent attacker does after gaining a shell is run w, last, and ps -ef. If they see a single root login, no other users, no historical activity, and a process list consisting of the honeypot daemons and nothing else, they conclude, correctly, that they are not on a real system.
Making a honeypot look used is, the thread argues, more important than making the services behave correctly. There were several proposals for how to do this: real but anonymised user history files, scripted background activity that looks like email and web browsing, fake-but-plausible cron jobs.
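As a toy version of the first proposal, here is a sketch that seeds variable-length shell histories for a set of fake users. Every user name and command below is invented for illustration; the thread's actual suggestion was to use real, anonymised history from a genuine system, which this deliberately does not do.

```python
import os
import random

# Invented users and command pool -- illustrative only.
USERS = ["alice", "bwong", "mfry"]
COMMANDS = [
    "ls -la", "cd src", "make", "make install", "vi main.c",
    "man socket", "ps -ef | grep httpd", "tail -f /var/log/messages",
    "finger", "pine", "rlogin devbox",
]

def seed_history(root, rng=random.Random(0)):
    """Write a variable-length .bash_history for each fake user under
    root/<user>/ so the filesystem looks lived-in at a glance."""
    for user in USERS:
        home = os.path.join(root, user)
        os.makedirs(home, exist_ok=True)
        lines = [rng.choice(COMMANDS) for _ in range(rng.randint(40, 120))]
        with open(os.path.join(home, ".bash_history"), "w") as f:
            f.write("\n".join(lines) + "\n")
```

Even this toy exposes the craft problem: a history sampled uniformly at random has no sessions, no typos, no repeated retries of a failing command. Texture that survives a careful reading is much harder than texture that survives a glance.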
This is a much more nuanced craft than I had appreciated. The honeypot, in the high-interaction sense, is not a piece of software you install. It is a small, carefully constructed fiction that the operator is asking the attacker to believe.
Where this is going
My strong hunch — based on the energy of the list and the calibre of the people on it — is that the Wargames group is the seed of something formal. By next year I would expect a properly named project, a public site, perhaps a research paper documenting what they have learned. There is the right combination of practitioner experience and academic curiosity in the room.
For my own work, the immediate effect is that I have been looking at my own small honeypot with a far more critical eye. It is unmistakably low-interaction. It is also unmistakably empty: no user history, no service realism, nothing that would survive five seconds of ls -la from a real attacker.
The next iteration of my honeypot is going to be a real installed system, in a contained environment, with deliberate texture. I am going to build it slowly, one realistic detail at a time, and write about each detail. The process will, I think, teach me more than any single piece of finished software ever could.
If the Wargames thinking turns into a public project, I will be among the first to subscribe to whatever public discussion replaces the private list. For now, I am grateful to be on it.