The fortnight just past has been an unusually active one across the British and European privacy landscape, with movement on age verification, regulator enforcement, and continuing tremors from the European Commission's Digital Omnibus proposals. The shape of the picture is one of regulators tightening their grip on platforms whilst legislators in both jurisdictions attempt — somewhat awkwardly — to reconcile that posture with simplification rhetoric and child-safety politics. If you sit on a board, or advise one, the throughline matters more than any single headline: 2026 is not the year of new privacy law. It is the year regulators discover the powers they already had.

The Domestic Story: An Age-Verification Estate by Stealth

The most prominent domestic story is the Government's commitment, made on 27 April, to expand statutory restrictions for users under sixteen. The detail is to be settled through a consultation that opened in late May and closes on 26 May 2026. The political framing is firmly about child safety, but in operational terms it raises the prospect of social media and other online platforms being required to verify UK users' ages using government-issued identification, facial scans, or other biometric data.

The Open Rights Group and Big Brother Watch are already mobilising on the basis that this draws the entire adult population into a biometric verification estate originally sold as a measure aimed at minors. They are not wrong to be alarmed. The privacy implications of normalising mass identity binding to ordinary browsing behaviour are not trivial, and the architecture being assembled — third-party age-assurance providers, document-binding flows, retained biometric templates — has all the hallmarks of a system whose data minimisation guarantees will deteriorate the moment it is operationally inconvenient to maintain them. Boards whose business models depend on direct-to-consumer digital channels should be asking, today, what their UX and KYC stacks look like if age assurance becomes the default for any service with user-generated content or community features.

Ofcom Has Grown Teeth

Sitting underneath this is the broader trajectory of Ofcom's Online Safety Act enforcement, which has now matured well beyond the symbolic stage. As of February 2026, Ofcom had opened investigations into more than ninety online services and issued six fines for non-compliance, including an £800,000 penalty against Kick Online Entertainment for failing to put age checks in place to prevent children accessing pornographic content. Most recently, on 19 March 2026, Ofcom fined 4chan £450,000 for failing to implement effective age checks, with additional penalties of £50,000 for not properly assessing the risk of illegal material and £20,000 for failing to set out in its terms of service how it protects users from criminal content.

What is notable in the more recent enforcement is the extension into generative AI services. The formal investigation opened earlier this year into X's Grok chatbot and an AI service called Joi.com signals that Ofcom is not confining its attention to traditional adult-content sites. Any organisation deploying a customer-facing conversational interface — whether marketed as an AI assistant, a support bot, or a "companion" — now sits inside a regulatory perimeter that did not, in any meaningful sense, exist eighteen months ago. The risk assessment obligations are not hypothetical. They are enforceable, and they are being enforced.

The Fundraising Regulator's Quiet Tuesday

On 6 May, the Fundraising Regulator quietly issued updated data-privacy guidance for charities, principally to align practice with the Data (Use and Access) Act 2025. The guidance addresses the new "soft opt-in" route for charitable marketing communications and — more importantly for any virtual CISO programme — reminds organisations of the 19 June 2026 deadline by which all data controllers must have a formal complaints-handling procedure in place. The ICO has been pointed about this. It is a discrete compliance task, it is easily overlooked, and any vCISO engagement with charity-adjacent clients should be putting a checkpoint in the diary for early June.

The reason this matters beyond charities is that the complaints-handling requirement applies to every data controller in scope of the UK GDPR. If you sit on the audit and risk committee of any organisation processing personal data — which is to say, every organisation — you should have already received written assurance that the process exists, that it is documented, that it is owned, and that it has been tested against at least one walk-through scenario. If you have not, that is the next paper you commission. I cover the framework in more detail under privacy advisory engagements, but the substance is straightforward and the deadline is fixed.

Brussels: Meta, the DSA, and a Continental Age-Assurance Layer

In parallel, the European Commission has issued preliminary findings against Meta under the Digital Services Act for inadequate enforcement of its own minimum age rules. The Commission concluded that roughly ten to twelve per cent of children under thirteen were using Instagram and Facebook, that under-thirteens could create accounts using falsified birth dates, and that Meta's tools for reporting underage users were overly complex and ineffective.

The interesting feature is not the finding itself — which surprises no-one — but what it points toward. The Commission is openly pushing for an EU-wide age-verification application by the end of 2026, which would in turn be plugged into the eIDAS 2.0 digital identity wallet framework. The trajectory is unmistakable: a continental age-assurance layer that sits adjacent to, but is technically distinct from, the British model. For any group operating across the Channel, that is two parallel verification estates to engineer for, two sets of provider relationships, and two regulatory channels through which a failure will be enforced. The risk is no longer "do we comply with GDPR" but "can our identity architecture survive operating in jurisdictions whose age-assurance philosophies are diverging in real time".

noyb, PimEyes, and the Extraterritoriality Test

Also at European level, noyb has filed a complaint against the Hamburg data protection authority over its inaction regarding PimEyes — the facial recognition search engine that scrapes the open web for biometric matches. noyb argues that regulators must enforce the GDPR's extraterritorial reach rather than halt investigations because the operator is now domiciled in Dubai. The substantive point — that biometric processing of European data subjects cannot be insulated from the GDPR simply by the operator changing jurisdiction — is one to watch closely. The outcome will shape how regulators approach a great many AI-driven scraping services over the coming years, and any organisation whose business model depends on web-scraped training data or biometric inference at scale should be reading the eventual decision the day it lands.

The Digital Omnibus: Simplification, or Quiet Retreat?

The Digital Omnibus continues its passage through the ordinary EU legislative procedure, and the commentary in the last fortnight has, if anything, sharpened. The European Data Protection Board and European Data Protection Supervisor, in their Joint Opinion 2/2026, expressed "significant concerns regarding certain proposed changes to the definition of personal data and the possible use of implementing acts to define the effects of pseudonymisation". They support more modest changes — including a tighter definition of scientific research and a new exemption for biometric special-category data in that narrow context — but the proposed shift from an objective to a subjective definition of personal data is being fought hard.

This is the element to watch. Under the current proposals, the GDPR's applicability would, in part, depend on the controller's own assertions about identifiability. If that lands anything like its current shape, it will measurably reduce the operational scope of GDPR obligations for organisations holding pseudonymised data — with knock-on consequences for breach reporting thresholds and timeframes that boards and audit committees should already be modelling. The instinct in some quarters will be to view this as a welcome relaxation. The reality, in my view, is that any framework which makes the definition of "personal data" controller-determined is one that will be tested in court, repeatedly, and the cost of being on the wrong side of that test will exceed by orders of magnitude the operational savings of the more permissive reading.

Coordinated Enforcement and the Article 12–14 Sweep

Two further developments sit just behind the headlines but matter operationally.

The first is that the EDPB has launched its 2026 coordinated enforcement action focused specifically on GDPR transparency obligations — twenty-five European authorities reviewing whether controllers actually inform individuals clearly about processing under Articles 12 to 14. This is not a UK action, but any group entity operating in an EU member state, or providing services to EU data subjects, falls within its envelope. A quiet review of privacy notices, recruitment notices, and cookie journeys before regulators come knocking would be a sensible exercise. I would frame this as a single morning of work for an in-house legal team, and a single afternoon for a competent external reviewer. The disproportion between the cost of doing it and the cost of being caught not having done it remains striking.

The second is that the Investigatory Powers Tribunal hearing concerning the Home Office's Technical Capability Notice against Apple — the order that, in effect, sought to compromise end-to-end encryption for global users — is now scheduled for seven days of open proceedings on assumed facts, with Privacy International, Liberty, WhatsApp, and the two named individual claimants in attendance. The hearing represents the first occasion on which the substantive lawfulness of the order will be examined in the open, even if the Government persists in its formal "neither confirm nor deny" stance. For anyone advising boards on encryption posture, key management strategy, or the resilience of consumer-grade end-to-end protections, this is the case to be reading. I have written previously on why encryption is not a feature but an architectural choice, and the IPT hearing will, one way or the other, sharpen that view.

What This Means in the Boardroom

The thread running through all of this is that 2026 is the year of phased enforcement. The ICO's new investigatory powers under the DUAA — the compulsory interview notice, the power to require independent technical reports, and the alignment of PECR fines with UK GDPR ceilings — are now live and will increasingly be exercised in cases where governance, security architecture, or incident response is found to be systematically weak rather than merely unlucky. The Capita pattern, where the original £45 million figure was reduced to £14 million through early settlement, suggests the ICO is formalising a settlement-and-cooperation model that closely mirrors the approach of the financial conduct regulators. Cooperate early and you get credit. Cooperate late and you do not.

For non-executive directors, that translates into three practical questions for the next board meeting:

  • What is our age-assurance exposure? Not "do we have age gating", but "if statutory age verification is required across our consumer touchpoints within twelve months, what does our data architecture look like, and who carries the liability for biometric processing in our supply chain?"
  • Have we completed the 19 June complaints-handling work? A simple, documented, owned, tested process. Five paragraphs of policy and one named individual.
  • Where does our cooperation posture sit? If the ICO walks in tomorrow with a compulsory interview notice and a request for independent technical reports, what is the conversation? Is there a panel of pre-engaged independent assessors? Is there a documented incident-response playbook that legal has signed off on? If the answer is "we'd have to commission that on the day", the answer is wrong.

The conversation with boards is no longer about whether a regulator will take action. It is about how much credit an organisation will receive for having acted before one did. That is a meaningfully different conversation, and it is one I am having more often, in virtual CISO engagements and on non-executive director seats, every quarter.

Phased enforcement is the dull, slow, structural change in our regulatory environment. It will not produce many headlines. It will, over the next eighteen months, produce a great many fines.


Privacy Tuesdays is a fortnightly note from Peter Bassill on the state of privacy, encryption and AI governance for directors and boards. Subscribe by adding the feed, or follow on the about page.
