The Coviello letter from Monday — the second since the March breach — is, finally, the admission. RSA acknowledges that "certain characteristics of the attack on RSA indicated that the perpetrator's most likely motive was to obtain an element of security information that could be used to target defence secrets and related IP", and that "the recently-carried-out attacks on Lockheed Martin's network ... are an extension of the original attack on RSA". The seed records, or whatever was actually taken, have now been used. The free SecurID-replacement programme that Coviello has now opened to defence-sector customers, and the wider risk-assessment programme offered to everyone else, are the operational acknowledgement of what I and other CISOs were arguing in March: that the proper reading of the original breach was to treat the SecurID seeds as a shared secret known to a sophisticated attacker.
The Lockheed Martin attack itself, on the night of the twenty-first of May, looks like the textbook case of how that compromised information was used. Lockheed have published less detail than I would like. What is in the public record is that valid SecurID-derived passcodes, in combination with credential information that the attacker had presumably obtained through earlier reconnaissance, were used to attempt remote access to Lockheed systems; that the attempt was detected through anomaly-detection on the authentication patterns; that Lockheed's response was to shut down VPN access entirely and require all approximately ninety thousand SecurID tokens to be re-seeded. That is a substantial operational lift — seven figures of token replacement plus reissue — and it is the kind of cost that makes the original RSA letter look more reasonable in retrospect than it felt at the time. RSA was, to a meaningful degree, telling the truth about what they did not yet know.
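It is worth being precise about why a stolen seed translates into valid passcodes. RSA has never published the SecurID algorithm, so the sketch below is not RSA's scheme; it is a minimal RFC 6238-style time-based OTP that illustrates the same trust model. The token and the authentication server share a seed, and anyone else who holds that seed can compute the current code. The seed value and function name here are mine, purely for illustration:

```python
import hashlib
import hmac
import struct
import time

def totp(seed: bytes, t: float, step: int = 60, digits: int = 6) -> str:
    """RFC 6238-style time-based OTP. Illustrative only: SecurID's
    actual algorithm is proprietary, but the trust model is the same -
    possession of the seed is possession of every future passcode."""
    counter = int(t // step)                      # which time window we are in
    mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

seed = b"hypothetical-stolen-seed-record"         # stand-in for an exfiltrated seed
now = time.time()
# The legitimate token and anyone holding the stolen seed agree:
assert totp(seed, now) == totp(seed, now)
```

The point of the sketch is the asymmetry it makes visible: the passcode rotates every window, but the seed never does, which is why the only real remediation after March was re-seeding the entire token estate.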
The broader picture of the past three weeks is that Lockheed is not alone. Reuters reported L-3 Communications acknowledging an attack in April that used cloned SecurID information; Northrop Grumman quietly disabled remote VPN access in late May, for reasons the company has not publicly connected to RSA but which were widely understood to fit the same pattern. The cluster of US defence contractors hit through April and May is the March compromise expressed as concrete operational cost. RSA may absorb the financial cost of the token-replacement programme, but the strategic cost — months of incident response, partial loss of remote-access infrastructure, and an enduring question about what the attacker actually exfiltrated from each contractor's network during the windows of compromise — is paid by the customers.
The pattern this validates is the one the Aurora analysis was beginning to point to and which Stuxnet sharpened: defensive vendors are attractive targets precisely because their infrastructure is leverage against the customer base. Compromise SecurID; use the seeds against ninety thousand Lockheed tokens. Compromise Comodo via a reseller; issue certificates against Google, Yahoo, Mozilla. Compromise the certificate-authority business model itself; do whatever you want for as long as the man-in-the-middle position holds. The supply chain in each case is the attack surface.
I have been writing this up for client engagements over the past week. The wording I have settled on, for the boards I am sitting in front of, is: "the assumption that your information-security infrastructure is delivered to you in working order should be treated as an active risk, not a baseline". The follow-on is: "what compensating controls are in place for the case where one of your security vendors is compromised, and how would you know". Most boards I have put that to have not had an answer. The conversation that follows is more productive than the one we were having three months ago. There is something to be said for incidents at this scale forcing the conversation; the cost is paid by the people who happen to be holding the affected technology when the attacker chooses to use it.
Two technical things I want to come back to in subsequent posts. The first is the anomaly detection that caught the Lockheed intrusion. Lockheed have not described it in detail, but there are reasonable-sounding accounts of "unusual authentication patterns" being the trigger. If those accounts are right, detection time was a matter of hours rather than days, and the response was therefore deployable before substantial exfiltration. That is a meaningfully better outcome than the equivalent incidents at retail and consumer-platform operators, where detection has taken weeks or months. There is something to learn there about what makes defence-contractor incident response work in a way it does not in most commercial organisations. The second is the SecurID token-replacement programme itself: ninety thousand tokens is a substantial logistics exercise, and the operational shape of it (how RSA and Lockheed coordinated, what the rotation schedule looked like, what compensating controls were in place during the rotation) has lessons for any organisation with a single-point-of-failure security vendor. I will write about both when more material is in the public record.
The next post is likely the SQL-injection methodology piece I have been promising the engagements team since January. The CA story has not produced a second incident yet but it is a question of when, not whether, and I do not want to be in the middle of a writeup when it does.