Continuing last year's Halloween tradition: five log entries from 2000 that, on first reading, each cost me an hour of stomach-clenched investigation. As before, the writeup is in rough chronological order, with the lesson each one taught.
February: the wrong-source attack
Feb 17 09:14:23 mailserver postfix/smtpd[3421]: connect from unknown[10.0.5.42]
Feb 17 09:14:23 mailserver postfix/smtpd[3421]: AUTH success: user=admin
Feb 17 09:14:23 mailserver postfix/smtpd[3421]: relay grant
The pattern: someone had successfully authenticated to my mail server with an admin account from 10.0.5.42, a private address I do not own.
The initial worry: this looked impossible. The mail server only listens on the public IP; how was anything at 10.0.5.42 connecting to it at all?
The investigation: I traced the connection. As far as the routing table was concerned, 10.0.5.42 was an internal address of the mail server itself. A misconfigured procmail rule had been forwarding outgoing mail back through the SMTP daemon (essentially a loopback), and that loop was authenticating with a stored credential.
The mail-server-talking-to-itself pattern is mostly benign but it had been silently producing log entries that looked like an external admin login from a non-routable source. I had not noticed the pattern before because I had not been looking for it.
The lesson: not all suspicious log entries are attacks. Some are misconfigurations on your own side that produce attack-looking patterns. The investigation discipline includes ruling out your own contribution before assuming an attacker.
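I have since scripted a crude version of that rule-out step. A minimal sketch in Python, assuming the simplified log format quoted above; the address list and the helper names are my own, not anything postfix provides:

import re
import sys

# Every address this host answers on, internal aliases included.  In the
# February case, 10.0.5.42 belonged in this set, which is why the "external"
# admin login was nothing of the sort.
LOCAL_ADDRESSES = {"127.0.0.1", "192.0.2.10", "10.0.5.42"}

CONNECT = re.compile(r"postfix/smtpd\[\d+\]: connect from \S+?\[([\d.]+)\]")

def worth_a_look(lines):
    # Yield smtpd connections whose source is not one of our own addresses.
    for line in lines:
        m = CONNECT.search(line)
        if m and m.group(1) not in LOCAL_ADDRESSES:
            yield m.group(1), line.rstrip()

if __name__ == "__main__":
    for ip, line in worth_a_look(sys.stdin):
        print("investigate: %s" % line)

Anything the filter drops is, by construction, my own contribution; anything it keeps still needs the hour of stomach-clenching.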
April: the sequential-port-knock
Apr 8 02:47:01 fw kernel: DROP TCP 203.0.113.X:55001 -> 192.0.2.10:21
Apr 8 02:47:31 fw kernel: DROP TCP 203.0.113.X:55001 -> 192.0.2.10:23
Apr 8 02:48:01 fw kernel: DROP TCP 203.0.113.X:55001 -> 192.0.2.10:80
Apr 8 02:48:31 fw kernel: DROP TCP 203.0.113.X:55001 -> 192.0.2.10:443
Apr 8 02:49:01 fw kernel: DROP TCP 203.0.113.X:55001 -> 192.0.2.10:8080
Five separate firewall drops, from a single source IP, at 30-second intervals, walking through a sequence of common service ports: FTP, telnet, HTTP, HTTPS, and the 8080 HTTP alternate.
The initial worry: the timing was suspiciously regular. Most port scans are either fast (hundreds of probes per second) or slow but random (random port order, random timing). This was slow and deterministic: exactly 30 seconds apart, in exactly this order. It looked like an automated probe with very specific parameters, possibly a port-knock attempt trying to trigger something specific on my firewall.
The investigation: I cross-referenced the source IP against the threat-actor literature and found it had been associated with a particular research project: academics at a US university running a controlled-experiment scan to study how various firewalls respond. The 30-second interval was deliberate; they were trying not to overload the targets.
The lesson: the texture of activity is informative. Specific timing patterns are sometimes the signature of legitimate research, not malice. Distinguishing between research scans and adversarial scans is hard from the data alone; the answer often requires correlation with broader context (the source IP's reputation, the timing of the broader scan campaign, sometimes a polite email to the source operator).
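For what it is worth, the regularity itself is easy to measure. A rough sketch, assuming timestamps in the syslog style quoted above and all drops falling on the same day; the tolerance is arbitrary:

import re

TIMESTAMP = re.compile(r"\b(\d{2}):(\d{2}):(\d{2})\b")

def seconds_into_day(line):
    # Pull HH:MM:SS out of a syslog-style line; None if there is no timestamp.
    m = TIMESTAMP.search(line)
    if not m:
        return None
    h, mins, s = (int(g) for g in m.groups())
    return h * 3600 + mins * 60 + s

def gaps(lines):
    # Seconds between successive timestamped lines (same-day assumption).
    times = [t for t in (seconds_into_day(line) for line in lines)
             if t is not None]
    return [later - earlier for earlier, later in zip(times, times[1:])]

def machine_regular(gap_list, tolerance=1):
    # True when every gap matches the first one to within tolerance seconds.
    return bool(gap_list) and all(
        abs(g - gap_list[0]) <= tolerance for g in gap_list)

The five April drops give gaps of 30, 30, 30, 30; machine_regular() says True, which is exactly the too-tidy-to-be-casual texture described above. It does not say who is responsible, only that a machine is.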
June: the apparent-rootkit
Jun 14 23:03:11 honeypot kernel: ICV check failed at sector 412847
A single log entry, on the honeypot, late at night. I happened to be reading logs.
The initial worry: "ICV check failed" is the integrity-check failure message from one of the Tripwire-style monitoring tools I had installed. The specific tool was a kernel-level integrity-check that I had quietly running, comparing the in-memory copies of certain kernel structures against known-good baselines. The message indicated something had changed in the kernel structures.
This is exactly the signature of a kernel rootkit deployment. On-host detection is the only way to catch it; off-host detection cannot see kernel-internal changes. If the integrity-check tool is firing, something has modified the kernel in a way an attacker would normally hide.
The investigation, urgent: I powered the honeypot down (rather than rebooting) and pulled the disk for offline analysis. From a clean boot environment, I compared every binary in the honeypot's filesystem against known-good hashes. No modifications.
Further investigation revealed that the integrity-check tool itself had a bug — under specific kernel-load conditions, it occasionally produced false positives on a particular structure that legitimate kernel activity sometimes touches. The structure was being modified by legitimate kernel code, not by an attacker.
The lesson: integrity-check tools have their own failure modes, and a single alert from one is not yet evidence of compromise. The investigation discipline is to verify with independent methods before reaching the worrying conclusion. Pulling the disk for offline analysis was the right response; the alert turned out to be a false positive, but the effort was not wasted.
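The offline comparison itself is nothing exotic. A sketch of the hash-and-compare step, assuming a baseline file of "digest  path" pairs built while the honeypot was known-clean; the function names are mine, not the tool's:

import hashlib
import os

def file_digest(path, chunk_size=65536):
    # Hash a file in chunks so large binaries do not exhaust memory.
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk_size)
            if not block:
                break
            digest.update(block)
    return digest.hexdigest()

def load_baseline(baseline_path):
    # Assumed format: one "<hexdigest>  <absolute path>" per line.
    baseline = {}
    with open(baseline_path) as f:
        for line in f:
            digest, path = line.split(None, 1)
            baseline[path.strip()] = digest
    return baseline

def compare(mount_root, baseline):
    # Yield (path, verdict) for anything missing or modified on the pulled disk.
    for path, recorded in sorted(baseline.items()):
        on_disk = os.path.join(mount_root, path.lstrip("/"))
        if not os.path.isfile(on_disk):
            yield path, "missing"
        elif file_digest(on_disk) != recorded:
            yield path, "MODIFIED"

In the June case this loop reported nothing at all, which is what pushed suspicion back onto the integrity tool itself.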
August: the legitimate-looking exfiltration
Aug 22 14:31:07 fw kernel: ACCEPT TCP 192.0.2.10:54123 -> 203.0.113.X:443
[connection lasts 47 seconds, transfers 12MB outbound]
A legitimate-looking outbound HTTPS connection from one of my hosts to an external IP, transferring 12MB outbound in 47 seconds.
The initial worry: 12MB of outbound traffic from one of my hosts is unusual. None of my services should be sending 12MB to external destinations. The HTTPS encryption meant I could not see what was actually being sent.
The investigation: I traced the connection to its source. It was from the laptop I had been using at the conference earlier in the month; the conference had given me a temporary credential to a cloud-storage service, and a backup script I had set up was uploading my recent honeypot data to the cloud storage as a remote backup.
This was legitimate. I had genuinely set this up. I had forgotten about it.
The lesson: my own configuration changes can produce alarming-looking traffic patterns. The investigation discipline includes asking "what have I done recently?" before assuming an attacker. A change-log discipline (which I had been following only intermittently) helps with this: if every configuration change is logged, the question "did I do this?" has a definitive answer.
I have started keeping a more rigorous change log since.
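The change log itself is nothing clever: an append-only file, one dated line per change, never edited after the fact. A minimal sketch of the helper I use; the path and the field layout are my own convention, not a standard:

import time

CHANGELOG = "/var/local/changelog"   # my own convention, not a standard path

def note(change, who="me"):
    # One dated line per configuration change; append-only, never rewritten.
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    with open(CHANGELOG, "a") as log:
        log.write("%s  %s  %s\n" % (stamp, who, change))

# The entry I wish I had written in August:
# note("laptop backup script now uploads honeypot data to conference storage")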
September: the cleared bash history
Sep 11 18:42:31 webserver auth: pam_unix(sshd:session): session opened for user me
Sep 11 18:42:33 webserver audit: user me ran: ls
Sep 11 18:42:34 webserver audit: user me ran: history
Sep 11 18:42:35 webserver audit: user me ran: history -c
Sep 11 18:42:35 webserver audit: user me ran: exit
A short SSH session as my own user. The session ran history, then cleared the bash history, then exited.
The initial worry: clearing bash history is the standard cover-tracks command of an attacker. If I had not done this, someone with my credentials had.
The investigation: I logged in immediately (from a different terminal), checked the audit log on the off-host monitor for matching activity, and traced the source IP to a specific PPP session at my ISP. The PPP session was indeed mine: I had logged in earlier in the day from my laptop, run the commands myself, and not remembered.
The specific reason I had cleared the history: I had been giving a friend an over-the-shoulder demonstration of how I do certain operations and did not want my own commands cluttering the history. The clearing was deliberate. I had simply forgotten about it.
The lesson: my own deliberate actions can look exactly like an attacker covering their tracks. The investigation discipline cannot rely solely on the literal log content; the question "was this me?" needs an answer, and "I do not remember" is not a sufficient answer when the alternative is an active compromise.
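The only reason there was anything to investigate is that the off-host audit log keeps the commands even after the local history is gone. A sketch of the scan I run over it, assuming the simplified audit format quoted above; the command list is illustrative, not exhaustive:

import re

# Commands that tend to show up when someone is tidying up after themselves.
COVER_TRACKS = ("history -c", "unset HISTFILE", "rm .bash_history",
                "ln -sf /dev/null .bash_history")

AUDIT = re.compile(r"audit: user (\S+) ran: (.+)$")

def cover_track_events(lines):
    # Yield (user, command) for audit entries matching a cover-tracks pattern.
    for line in lines:
        m = AUDIT.search(line)
        if m and any(pattern in m.group(2) for pattern in COVER_TRACKS):
            yield m.group(1), m.group(2)

The scan flags me as readily as it would flag an intruder, which is the point: it produces the question, not the answer.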
A pattern is emerging across these examples. Several of them turned out to be benign for reasons that depend on knowing my own configuration and recent activity. The investigation cost was real even when the result was "this was me". The cost is unavoidable: paying it is what makes the conclusion of an investigation worth trusting.
A small reflection on the year's investigations
Looking back at all of these, the proportion of false alarms in my own investigation work is high. Most alarming-looking log entries this year have turned out to be benign. This is not a failure of my detection; it is a property of detection at all. A detection system tuned so that it only ever fired on real attacks would necessarily miss many real attacks, because the threshold that eliminates every benign anomaly also eliminates the ambiguous signals that real attacks produce. A detection system that fires more broadly catches those attacks at the cost of also catching benign anomalies.
The right mental model is that my detection is producing a stream of "things worth investigating", not a stream of attacks. The investigation work is what distinguishes attack from non-attack. The false-alarm fraction is the price of having a sensitive detection system, and reducing it by reducing detection sensitivity would, on the available evidence, cost more in missed attacks than it saves in investigation time.
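The arithmetic behind this is worth making explicit. A toy calculation; every number in it is made up for illustration, none is a measurement from my own logs:

# Toy base-rate arithmetic: even a detector that misses little and rarely
# cries wolf produces mostly-benign alerts when real attacks are rare.
events_per_day = 10000      # log events worth any attention at all
attack_fraction = 0.001     # 1 in 1000 of those is genuinely hostile
detection_rate = 0.95       # fraction of hostile events that fire an alert
false_alarm_rate = 0.02     # fraction of benign events that fire one anyway

attacks = events_per_day * attack_fraction            # 10
benign = events_per_day - attacks                     # 9990
alerts_from_attacks = attacks * detection_rate        # 9.5
alerts_from_benign = benign * false_alarm_rate        # 199.8

total_alerts = alerts_from_attacks + alerts_from_benign
print("alerts per day: %.0f" % total_alerts)                        # ~209
print("fraction that are real: %.2f"
      % (alerts_from_attacks / total_alerts))                       # ~0.05

Turning the sensitivity down to silence the two hundred benign alerts also eats into the nine or ten real ones, which is exactly the trade described above.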
This is, I think, the right operational mindset. Worth writing down.
Have a safe night. Watch your logs.