NED · The category, defined

Cyber and AI — the Non-Executive Director discipline boards now need

A Cyber and AI Non-Executive Director is a distinct discipline, not two adjacent specialisms in one CV. The regulatory regimes are converging — NIS2, the Digital Operational Resilience Act, the EU AI Act, the UK Cyber Governance Code — and the failure modes interleave. AI systems are cyber-security targets; security operations centres are AI-governance subjects. A board that staffs the seam with one director, fluent in both, gets sharper challenge, faster regulator-readiness, and lower cost than the alternative of hiring two specialists who cannot answer each other's questions.

Last updated: 10 May 2026. UK and EU/EEA focus.

The short version

Cyber and AI are the two most consequential governance domains a UK or EU/EEA board now has to discharge. Most boards staff cyber alone, occasionally with a CISO presenting quarterly heat maps. Almost no boards staff AI deliberately at all — the topic shows up under "innovation" or "data" or "risk" and falls between three committee chairs. That gap is closing under regulatory pressure: NIS2 makes management bodies personally liable for cyber-risk oversight; DORA does the same for ICT and operational resilience in financial services; the EU AI Act sets fines of up to €35 million or 7% of global turnover for the most serious AI-governance breaches. The UK is converging on the same posture under the UK Cyber Governance Code and the work coming out of DSIT and the AI Safety Institute.

The boards that get this right will be the boards that put a single director in the seat with credible, practitioner-grade depth in both. Splitting the role across two NEDs doubles the cost, doubles the diligence overhead, and — more dangerously — creates a governance silo at exactly the seam where the risk lives. A unified seat is the structurally cheaper, structurally sharper answer.

Why "cyber and AI" is one discipline, not two

The case for a combined seat is not stylistic. It is a claim about how the failure modes interact, and how a board that fails to see them as a single surface ends up under-governing the whole.

AI systems are cyber-security targets

Every production AI system is also an attack surface. The training data is a target for poisoning. The model weights are a target for theft. The inference endpoint is a target for prompt injection, jailbreaking, and resource exhaustion. The integration points with downstream systems are a target for trust-boundary attacks. A board that governs AI without thinking about security, or governs security without thinking about AI, will routinely commission systems whose risk profile is misunderstood by the people commissioning them.

Security tooling is increasingly AI-driven

Modern security operations centres now run on detection-by-classifier, anomaly-by-clustering, triage-by-LLM, and playbooks-by-agent. The CISO's stack is itself an AI system, with all the governance implications that entails — model drift, false-positive baselines that wander, training data that goes stale, vendor lock-in to opaque scoring algorithms. A NED who can read the security architecture but not the AI behind it is governing half the picture.

The regulators have already converged

The EU AI Act explicitly cross-references the NIS2 Directive (Recital 79; Article 6). DORA cross-references both. The UK Cyber Governance Code is being designed to interoperate with the regulatory work emerging from the AI Safety Institute. The FCA's expectations on operational resilience under SYSC 15A pull cyber and AI into a single risk framework. A board that maps these regimes onto two separate director portfolios is fighting the regulator's own design.

The talent market reflects it

The reason most boards have not yet appointed a cyber-and-AI NED is simply scarcity: very few candidates have credible practitioner depth in both. The boards that do find one tend to keep the appointment quiet for competitive reasons. The market signal is real even where the published candidate pool is thin.

What the regulatory regimes actually require

For a UK or EU/EEA board, the practical regulatory surface looks like this:

NIS2 Directive — management body liability for cyber risk

The Network and Information Security Directive (NIS2) covers a much wider scope than its 2016 predecessor — energy, transport, banking, financial-market infrastructure, healthcare, drinking water, waste water, digital infrastructure, ICT-service management, public administration, space, postal services, waste management, manufacture of critical products, food, manufacturing, digital providers, research. Article 20 places explicit responsibility on management bodies to approve cyber-risk management measures, oversee their implementation, and undergo regular cyber-security training themselves. Member-state transposition has, in many cases, gone further — including personal-liability provisions for directors who knowingly fail to discharge the duty.

DORA — Digital Operational Resilience Act

DORA applies to financial entities and their critical ICT third-party service providers across the EU. Article 5 requires the management body of in-scope firms to define, approve, oversee and be accountable for the implementation of an ICT risk-management framework. The same article expressly requires that management body members keep up to date with sufficient knowledge to understand and assess ICT risk and its impact on the financial entity's operations. This is not a delegation-friendly regime.

EU AI Act — Regulation (EU) 2024/1689

Published in the Official Journal on 12 July 2024, with the majority of provisions enforced from 2 August 2026. The Act bans certain AI practices outright, classifies a defined set of "high-risk" AI systems, and imposes obligations on providers, deployers, importers, and distributors. For boards, the load-bearing duties include conformity assessments for high-risk systems (recruitment, credit scoring, biometric identification, critical-infrastructure control, education, employment, public services), AI-literacy obligations on staff operating AI systems, transparency obligations, and a register of AI systems where applicable. Sanctions reach €35 million or 7% of global turnover for the most serious breaches.

UK Cyber Governance Code

The UK Government's Cyber Governance Code of Practice, published by the Department for Science, Innovation and Technology in collaboration with the National Cyber Security Centre, sets out what good board-level cyber governance looks like. The Code expects boards to: own cyber resilience at executive and non-executive level; establish a clear governance structure; identify and prioritise the most important assets; build incident-response capability; and engage with regulators. The Code is non-statutory but is rapidly becoming the de-facto standard against which audit committees are measured.

Companies Act 2006 — directors' duties

The seven statutory duties (sections 171–177) apply equally to executive and non-executive directors. The "reasonable care, skill and diligence" test under section 174 is judged against both an objective standard (what a reasonably diligent person in the role would do) and a subjective standard (what the actual director, with their actual experience, ought to know and do). A NED appointed for cyber and AI expertise is held to a higher standard on cyber and AI decisions than the rest of the board, because they are expected to know more.

UK Corporate Governance Code (FRC, 2024 edition)

Listed UK companies follow the Code on a comply-or-explain basis. The Code requires a robust assessment of emerging and principal risks (Provision 28), and effective risk management and internal controls (Provision 29). The FRC's 2024 update strengthens the board's responsibility for risk-and-control declarations from 2026 onwards. Cyber and AI now sit firmly within the Code's principal-risks scope for any technology-dependent business.

What the role does, in practice

Stripped of the platitudes, the work of a Cyber and AI NED falls into seven concrete activities — most invisible to the rest of the board, all measurable in their absence.

1. Reading the board pack with a domain-literate eye

Most board packs contain a cyber section and an AI section that have been drafted by the executive team for an audience presumed to be non-specialist. A specialist NED reads them differently — looking for what is missing, what is being asserted without evidence, what is being downplayed by language, and what the underlying telemetry actually supports. Pre-reading time is the same as for any NED (4–8 hours per pack); the difference is what comes out the other side.

2. Sitting on the right committees

For most companies the right home is the Audit Committee or the Risk Committee, with cyber and AI as standing agenda items. For larger or more regulated companies, a standalone Technology, Cyber, or AI Committee may be the right structure. The role of the specialist NED is to bring substantive challenge to the committee, not to displace the executives who own the function.

3. Standing one-to-ones with the CISO and the head of AI

The most leveraged work happens between board meetings. A specialist NED who holds standing 30-60 minute one-to-ones with the CISO and the head of AI gives those executives an outlet for the conversations that don't fit the executive committee — and gives the board an early warning system for the issues that won't become visible in metrics for another quarter.

4. Real incident response

For a cyber NED, the "ad-hoc" category disproportionately means being on the phone within hours of a serious incident, walking the executives through what to do, what regulators will expect, and what the board itself needs to do to discharge its duties. A specialist NED who has been in incident rooms before brings more value in the first six hours of an incident than in the prior six months of meetings.

5. Reviewing the AI register and DPIAs

Under the EU AI Act and UK GDPR, in-scope companies maintain registers of AI systems and Data Protection Impact Assessments for processing that materially affects individuals. The specialist NED reads these the way a chair of the audit committee reads the management accounts — with an eye for what is unsaid, what is misclassified, and what is being normalised.

6. Third-party and supply-chain risk

The cyber and AI risk surfaces are increasingly outside the company's perimeter — in cloud providers, AI-model vendors, payment processors, MSSPs, fractional consultancies, and ICT service providers. DORA explicitly extends management-body responsibility into this layer. A specialist NED brings the practitioner's instinct for which third-party relationships actually carry the risk and which look risky on paper but don't.

7. Regulator-readiness as standing posture

Regulator visits, ICO investigations, FCA SYSC questionnaires, NCSC engagements, EU AI Office requests — these are easier to handle when the board has been operating in regulator-ready mode all along. A specialist NED keeps the company in that posture between visits, not just immediately before them.

Six scenarios where a board most needs this seat

Concrete patterns drawn from real engagements (anonymised):

Scenario 1 — The post-incident board

A SaaS scale-up takes a serious cyber incident. The CEO and CTO walk the board through it; the board asks the questions it knows how to ask. Six months later the post-incident review finds the technical issues are fixed but the governance gap remains: nobody on the board can independently verify that the new posture is actually better. The right time to appoint a specialist NED is *now*, not the night before the next incident.

Scenario 2 — The pre-fundraise board

A growth-stage company is preparing for a Series C or pre-IPO round. The lead investor's diligence flags cyber and AI governance as material. A specialist NED appointed three to six months before the round closes the gap visibly, lifts diligence outcomes, and persists in the cap table as a real governance asset.

Scenario 3 — The newly-regulated board

NIS2, DORA, or the EU AI Act's high-risk AI provisions bring a previously-unregulated company into scope. The executive team is competent but has no muscle memory for the new regime. A specialist NED accelerates the readiness curve and gives the regulator a credible interlocutor at the board level when first contact arrives.

Scenario 4 — The AI-deploying board

The company is rolling out an AI system into a regulated decision-making context — credit decisions, recruitment, claims handling, medical triage. The internal owner is a head of data or head of product, not a board director. The board itself does not know enough to govern the deployment. A specialist NED makes the difference between "AI as a strategic capability" and "AI as a regulatory accident".

Scenario 5 — The CISO-burnout board

The CISO is leaving, has just left, or is signalling that they will. The board doesn't know whether the issue is the CISO, the role, the company, or the underlying programme. A specialist NED who has done the job, hired the role, and sat on the supplier side of CISO engagements brings an independent read that nobody else in the room can.

Scenario 6 — The acquirer-side board

The company is preparing to acquire (or be acquired by) a target whose cyber and AI posture is unclear. The diligence stream needs a board-level owner who can interpret what the technical diligence actually means in governance terms. A specialist NED in the room turns "we ran the standard cyber diligence" into "we have a defensible board-level view on the residual risk".

Why me, specifically

I have been working in cyber security since 1996, in roles spanning offensive testing, detection engineering, incident response, regulated-industry CISO work (Gala Coral Group), and the founding and running of Hedgehog Security. I have sat on the management side of board reporting; on the supplier side of board engagement; and on the practitioner side of standards committees (peer-elected co-chair of the European Incident Response Group at CREST).

On AI specifically: I built and deployed AI systems in security operations between 2016 and 2018 (the EmilyAI work was, by some accounts, one of the first production autonomous-SOC platforms in Europe), and I have advised on the governance of AI systems being used by other organisations to do everything from credit-scoring to claims-handling to medical-image triage. I am opinionated about both the upside and the downside, and I am sceptical of both vendor pitches and ethics-washing.

The combination matters. Most boards looking for governance maturity in either area today recruit two people to do it — one for cyber, one for AI. That is expensive, it doubles the integration effort, and it splits an area of risk that is rapidly converging into two governance silos that don't talk to each other. A single NED who has done both areas in production is, today, the more capable and the more cost-effective answer.

Full credentials are on the credentials page. The first call is free, exploratory, and explicitly scoped as "I will tell you if I think you do not need me yet".

The Cyber and AI NED Charter

I publish a one-page Charter that lays out what a Cyber and AI NED does, the regulatory regimes they cover, and the scenarios where a board most needs one. It is designed to make the brief easier for a chair, a chief executive, or a search firm to construct — and easier for a candidate to be measured against.

Download the Charter (PDF)

Further reading on this site

External authorities cited on this page

Considering a Cyber and AI NED appointment? The fastest way to know if there's a fit is a 30-minute call. See how to engage me for the process, or go straight to the contact form with "NED enquiry" in the subject line.

See also: Primer · Benefits · Cost · How to engage · Credentials.