Uber's CEO Dara Khosrowshahi disclosed yesterday afternoon (uber.com newsroom statement) that the company suffered a breach in October 2016 affecting 57 million riders and drivers globally. The previous management — including former CEO Travis Kalanick and former chief security officer Joe Sullivan — chose not to disclose the breach at the time, instead paying the attackers $100,000 in exchange for a promise to delete the stolen data and recording the payment as a bug bounty. The company is disclosing now because Khosrowshahi, on taking over as CEO in late August, had the incident reviewed and reached a different conclusion about the appropriate handling. Sullivan and a member of his team have been dismissed.
The factual content of the breach itself is, on the publicly-disclosed information, comparatively straightforward. Two attackers obtained credentials to a private GitHub repository used by Uber engineers, found AWS credentials in the repository, used those credentials to access an Uber S3 bucket containing user records, and downloaded approximately 57 million records: names, email addresses, and phone numbers globally, plus driver's licence numbers for around 600,000 US drivers. The breach mechanism is a familiar one (credentials hard-coded in source code, source code in inadequately-protected repositories, AWS credentials scoped far more broadly than they needed to be) and the operational lesson is the same as it has been for several years.
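That operational lesson, that credentials committed to a repository must be assumed compromised, is routinely enforced by scanning commits for secret-shaped strings before they land. A minimal sketch in Python: the AWS access key ID prefix pattern (AKIA/ASIA plus 16 uppercase alphanumerics) is documented by AWS, while the secret-key heuristic is illustrative and will produce false positives. This is not Uber's tooling, just the general technique.

```python
import re
from pathlib import Path

# Patterns for common credential formats. The access-key-ID format is
# documented by AWS; the secret-key pattern is a loose heuristic.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\b(?:AKIA|ASIA)[A-Z0-9]{16}\b"),
    "possible_aws_secret": re.compile(r"(?i)aws.{0,20}['\"][0-9a-zA-Z/+]{40}['\"]"),
}


def scan_file(path):
    """Return (pattern_name, line_number, stripped_line) for each suspected secret."""
    hits = []
    try:
        text = Path(path).read_text(errors="ignore")
    except OSError:
        # Unreadable or missing file: report nothing rather than crash.
        return hits
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((name, lineno, line.strip()))
    return hits
```

In practice this sits in a pre-commit hook or CI job over the changed files, so a credential is rejected before it ever reaches a shared repository; anything that slips through is treated as leaked and rotated.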
What is unusual — and what makes this disclosure consequential beyond the breach itself — is the choice to pay the attackers and conceal the incident. The operational characterisation as a "bug bounty" is, in any meaningful sense, false. Bug-bounty programmes are pre-arranged structures in which researchers report findings to the operator under defined rules and receive defined rewards for legitimate findings. The Uber case involved attackers who had already taken data, demanded payment as ransom, and received that payment with a non-disclosure agreement. That is not a bug bounty; it is an extortion payment. Calling it a bug bounty was, on the public reporting, a deliberate framing chosen to fit the payment within an existing operational structure that did not require board notification, regulatory disclosure, or law-enforcement involvement.
The legal implications are substantial and will take a year or more to resolve. Forty-eight US state attorneys general have, as of this morning, opened investigations or announced that they will. The Federal Trade Commission has reopened a previous Uber data-protection settlement. The SEC has the option to investigate the disclosure question (Uber was, at the time, a private company, which limits the SEC's reach, but the company was in active fundraising during the concealment period). Three class-action lawsuits have been filed in the past 24 hours. The UK Information Commissioner has issued a statement and is considering action under the Data Protection Act 1998 (the GDPR-era position would be substantially worse for Uber). The EU's Article 29 Working Party has noted the case as an example of why the GDPR notification regime is needed.
The disclosure-ethics question is the one I want to write about properly, because the Uber case is now a worked example of what the post-Equifax, pre-GDPR landscape looks like when an organisation chooses concealment over disclosure. The decisions that produced the concealment were not, on the publicly-reported facts, accidents — they were considered choices made by senior executives with awareness of the legal exposure and the public-trust implications. The choice was apparently judged to be commercially preferable to disclosure. That judgment was, in the post-Khosrowshahi review, reversed; the cost of the reversal — to Uber's reputation, to the affected user populations who could have been notified a year earlier and protected themselves accordingly, to the wider trust in the disclosure ecosystem — is now being paid.
For the vCISO portfolio, the case will be a useful reference point in the GDPR-readiness conversations through the autumn and winter. The 72-hour notification requirement under GDPR Article 33 is, in part, a response to incidents like the Uber one — the requirement creates an external constraint that takes the disclosure decision out of the hands of executives who might, in the moment, prefer concealment. The customer-organisation conversations about who has authority to declare a breach, what the notification timeline is, and how the legal-PR-technical coordination works are being shaped this autumn by the Uber disclosure, even for customers with no operational similarity to Uber. The case is, structurally, exactly the failure mode that GDPR Article 33 is designed to prevent.
The personal-ethics question is harder and is for a longer piece. Joe Sullivan is, by any measure, an experienced and well-regarded chief security officer — his prior work at Facebook and elsewhere is well-known, and the security community has worked with him for years. The choices he made on the Uber case are consequential, and the response from the security profession is, in the past 24 hours, already showing a sharper line on the personal-ethics question than I would have predicted in advance. Whether the long-term effect is a more honest disclosure culture across the profession or a more cautious-and-defensive culture is the question the next few years will answer.
There is more to write. I will return to this after the immediate news cycle has moved on.