Two weeks after Aaron Swartz's suicide, the security-research community is — finally — having the conversation about the Computer Fraud and Abuse Act that has been deferred for as long as I have been doing this work. The proximate facts have been comprehensively reported. Swartz, twenty-six years old, a programmer of substantial repute (RSS, Markdown, Reddit) and a digital-civil-liberties activist, downloaded approximately four point eight million academic articles from the JSTOR archive between September 2010 and January 2011 using a laptop placed in a network closet at MIT. JSTOR declined to pursue charges. The federal Department of Justice did not decline, charging Swartz under the CFAA with thirteen felony counts that carried, in aggregate, a maximum sentence of thirty-five years and substantial fines. The plea offer reportedly involved months in prison in exchange for pleading guilty to the felony counts, with the prosecution indicating it would seek a sentence on the order of seven years if the case went to trial; Swartz refused it. He was found dead in his Brooklyn apartment on the eleventh of January.
I am going to write less about Swartz himself and more about what the case tells us about the legal infrastructure under which security research now operates, because that is the part where my own work intersects with the case. Swartz was not a security researcher in the standard sense; he was an activist who used the tools of the security-research community to make a political point about access to publicly-funded scholarship. But the charges against him rested on the interpretation that "exceeded authorised access" covers violations of a website's terms of service rather than actual circumvention of access controls — the same interpretation that has been used against actual security researchers for years, and one that has the same chilling effect on the engagement work I do that it has had on academic security research.
The structural problem with the CFAA is that "without authorisation" and "exceeding authorised access" have, since the statute was passed in 1986, been interpreted so broadly that ordinary internet activity that is commercially or politically inconvenient can be recast as a federal crime at the discretion of a sufficiently motivated prosecutor. The implications for the penetration-testing work I do through Hedgehog are, I have been advised by the engagement team's lawyer at multiple points, already operationally meaningful. The standard pen-test contract has to be drafted carefully: written authorisation, scoped permissions, indemnification language, named systems and time windows, and an explicit list of individuals authorised to consent. Any engagement that proceeds without comprehensive documentation creates the same kind of legal exposure that Swartz had at MIT — operationally legal in the conventional sense, defensible if it goes to court, but capable of being charged in ways that produce, as the prosecutor's leverage, the kind of multi-year prison-sentence threat that Swartz faced.
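The "named systems and time windows" discipline is the one part of that checklist that can be enforced in tooling rather than only in the contract PDF. A minimal sketch of the idea, with an entirely hypothetical `EngagementScope` structure of my own invention (the field names are illustrative, not any standard):

```python
# Sketch: machine-check a proposed test action against the documented
# engagement scope before any probe is launched. The EngagementScope
# structure and its fields are illustrative assumptions, not a standard.
from dataclasses import dataclass
from datetime import datetime
from ipaddress import ip_address, ip_network


@dataclass
class EngagementScope:
    authorised_networks: list[str]  # CIDR blocks named in the contract
    window_start: datetime          # agreed testing window (start)
    window_end: datetime            # agreed testing window (end)
    authorising_party: str          # named individual who signed off

    def permits(self, target: str, when: datetime) -> bool:
        """True only if the target is in a named network AND inside the window."""
        in_window = self.window_start <= when <= self.window_end
        in_scope = any(
            ip_address(target) in ip_network(net)
            for net in self.authorised_networks
        )
        return in_window and in_scope


scope = EngagementScope(
    authorised_networks=["192.0.2.0/24"],  # RFC 5737 documentation range
    window_start=datetime(2013, 2, 4, 9, 0),
    window_end=datetime(2013, 2, 8, 17, 0),
    authorising_party="(named in the signed authorisation letter)",
)

assert scope.permits("192.0.2.17", datetime(2013, 2, 5, 14, 0))        # in scope
assert not scope.permits("198.51.100.9", datetime(2013, 2, 5, 14, 0))  # wrong network
assert not scope.permits("192.0.2.17", datetime(2013, 2, 9, 10, 0))    # outside window
```

The point is not that a dataclass protects anyone from a CFAA charge; it is that a scope check wired into the tooling produces exactly the kind of contemporaneous documentation the lawyer keeps asking for.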
Larry Lessig's "Prosecutor as Bully" post, published the day after Swartz's death, is the right reference point on the prosecutorial-discretion question; his subsequent piece in The Atlantic extends the argument to the wider question of how the legal-and-financial infrastructure around US digital-civil-liberties cases distorts prosecutorial decision-making. The piece I have been discussing with peer practitioners over the past fortnight, Tim Wu's New Yorker column "Fixing the Worst Law in Technology", articulates the broader observation: under current CFAA interpretations, nearly every internet user who has used a website in ways the operator did not anticipate has technically committed a federal crime, which means the law operates through prosecutorial discretion rather than through legal definition.
Zoe Lofgren's "Aaron's Law" proposal — to amend the CFAA so that "exceeded authorised access" cannot be charged based on terms-of-service violations — is the structural fix, and it is exactly the right structural fix. Whether it actually passes is a separate question; the entertainment-industry coalition and the law-enforcement coalition that have, separately, used the broad CFAA interpretation for their own purposes are unlikely to support narrowing it. The political shape of the SOPA-PIPA fight from a year ago is going to repeat in some form, and it is not clear that the legislative coalition Lofgren has assembled is broad enough to win it.
For my own work, the case has tightened the documentation discipline for engagements. I have always been careful about scope and authorisation; I am now more careful, and the standard contract has been revised in three places to make the authorisation chain more explicit. The conversation with engagement clients about what is and is not in scope has changed in tone — clients are more willing than they were six months ago to accept narrower scopes precisely because the legal exposure of broader scopes has become more visible. The security-research community's response to the Swartz case has, if anything, made the engagement work I described in September easier in this narrow respect, even as the broader implications are uncomfortable.
The wider point about the legal climate is one I have been thinking about for some years and have not previously written up, because I do not have a clean answer. The infrastructure of internet research depends on a culture of permissive technical exploration that is, increasingly, in tension with the legal regime under which it operates. The security-research community has — through DEF CON, through the Phrack archive that has run since the mid-1980s, through countless personal projects and academic studies — produced most of what we now understand about defensive-engineering best practice. None of that work would have been possible under the CFAA as it is now being interpreted. The cost of the present interpretation is not visible in individual cases; it is visible in the work that is not done because the legal exposure is too high. Swartz is the data point that is making it possible to talk about this honestly for the first time.
The next post is probably the Mandiant APT1 piece, which several of my correspondents suggest will land in the next two to three weeks and will be operationally substantial. Or whatever surfaces from the Microsoft and Adobe security advisories that are due in the second week of February.