A Cyber and AI Non-Executive Director's work in a board meeting is mostly invisible to anyone watching it. The visible part is asking three questions and listening carefully to the answers. The invisible part — the eight hours of pre-reading, the standing one-to-ones with the CISO and the head of AI between meetings, the half-page of private notes after — is where the role earns its fee. Boards that mistake the visible part for the whole are paying for theatre. This is what the rest of it actually looks like.
The phone call comes in on a Tuesday afternoon. The board meeting is on Friday. The board pack lands in my inbox at 4 p.m., 230 pages, encrypted attachment, decryption key sent separately by the company secretary because — bless her — she has read the same phishing-awareness training I have.
What follows is a faithful account of what a specialist cyber-and-AI NED actually does between Tuesday afternoon and the end of the Friday meeting. Names, sectors, and identifying details are changed. The pattern is real.
Tuesday — the read
I block four hours and read the whole pack before I touch the cyber and AI sections. The pack always tells you more than the cyber paper does. The CFO's commentary on revenue concentration tells me whether a single customer cancellation could push the company into solvency stress next quarter. The HR update tells me whether the head of engineering is about to leave. The legal report tells me whether there is litigation in the pipeline that will make a public cyber incident worse. The cyber paper, read in isolation, is always too optimistic; read in context it tells you what the executive team is choosing not to say.
By the time I get to the cyber section I am looking for three things specifically. First, what has changed since the last meeting — has a metric moved, has a vendor changed, has a control been re-classified, has a finding been quietly closed that wasn't quietly closed last quarter? Second, what is being asserted without evidence — "we are aligned to ISO 27001" without an audit cycle named, "we are SOC 2 ready" without a Type II report referenced, "we have addressed the findings" without a closed-finding register attached. Third, what is being downplayed by language — "an isolated event" usually wasn't, "a minor incident" usually had regulatory implications somebody is hoping nobody will ask about, "a low-priority finding" usually has a story.
The AI section, where there is one, gets read with a different lens. AI governance papers in 2026 are still mostly maturity self-assessments. I read them looking for the AI register: how many production AI systems, what kind of decisions they make, how recently the conformity assessments were refreshed, whether the data-protection impact assessments have been versioned along with the model, and — the thing that almost nobody puts in the paper — what the failure mode looks like for each system. An AI register without failure-mode entries is not yet an AI register; it is a list.
Wednesday and Thursday — the side-channels
Most of what I learn about a company between board meetings does not come from board papers. It comes from one-to-one calls. I have standing thirty-minute slots in my calendar with the CISO, the head of AI (where there is one separate from the CISO), and — once a quarter — the head of internal audit. The conversations are not always interesting. Sometimes they are perfunctory; the executive is busy and there is nothing burning. Sometimes they are not.
This week the CISO mentions, in passing and without being asked, that a tool I have not heard of has been added to the SOC stack since the last board. I ask what it does. The answer is, "it scores alerts using an internal language model we tuned on our own incident history." I ask whether the model's outputs are influencing prioritisation. The answer is yes. I ask whether the addition has been disclosed in the AI register. The answer is no, "because it's just internal tooling". I write this down.
The head of AI mentions, also in passing, that the credit-decisioning model went into production three weeks ago and that the conformity assessment was completed in early April. I ask whether the conformity assessment was audited externally before deployment. The answer is "we used the standard internal review process". I write this down. Both of these will become questions on Friday.
The Thursday-evening read is the second pass. I go back through the pack with my notes from the calls in mind. The cyber paper has not mentioned the SOC's new LLM-scoring tool. The AI paper has not mentioned that the credit-decisioning model went live. Either the executive does not yet know, or the executive does know and chose not to write it down. Either possibility is interesting.
Friday — the meeting
The meeting is structured the way most UK boards are structured. The CEO opens; the CFO presents the management accounts; the operational reports come round; the cyber paper has its slot; the AI paper has its slot; AOB; close. I aim to ask between three and five questions across the meeting. More than five and I am performing; fewer than three and I am rubber-stamping.
The first question — directed at the CISO when the cyber paper comes round — is: "the SOC has added an internal LLM-based prioritisation tool since the last meeting. Where is that disclosed in the cyber paper, and where does it sit in the AI register?" The CISO is a competent practitioner and answers honestly: it isn't in the paper because they didn't think internal tooling was material; it isn't in the AI register because the head of AI hasn't seen it yet. I do not press further in the room. The chair makes a note. The action item — "AI register to include all internally developed models scoring or prioritising any production decision, including security operations" — gets minuted by the company secretary. That is the work done for the quarter, on that single item.
The second question — directed at the head of AI when the AI paper comes round — is: "the credit-decisioning model went into production on the 12th of April. Was the Article 43 conformity assessment under the EU AI Act audited externally before deployment, and if so by whom?" The head of AI says no, the assessment was internal, and points out that the Act permits internal-control conformity assessment for a system of this class. The CFO, sitting opposite, frowns visibly — she has not been involved in the decision and the credit-decisioning model affects the impairment-provision model, which is her territory. The chair, who has been doing this for thirty years and recognises the frown for what it is, allocates the topic to the audit committee for review at the next meeting. Action item minuted; conversation moves on. Twelve seconds of board time; an audit-committee deep-dive consequent on it. This is how the role works in practice.
The third question — directed at no-one in particular, addressed to the room — is asked at AOB: "looking at the operational risk register I do not see anything that explicitly addresses the supply-chain risk from our AI-tooling vendors. Is that an oversight in the register or is it an oversight in the assessment?" The CTO answers honestly that it is the latter; the register has not been updated since the AI tooling stack was rebuilt last summer. The chair allocates a piece of work; the CTO accepts. Risk register update committed.
By the end of the meeting I have asked three questions and watched the board minute three pieces of work that would not have been minuted otherwise. The cost of those three pieces of work, in NED time, is roughly two hours of pre-reading and three minutes of speaking. That is the real economy of the role.
Friday afternoon — the writeup
Within two hours of the meeting I write a one-page private note to the chair. The note is not minuted, not circulated, and not read by anyone else. It contains my honest read of where the executive team is strong, where the executive team is being optimistic, and what I think the chair should be watching for in the next thirty days. The chair has the same kind of note from each of the other NEDs. The note is read once, then deleted from the chair's mailbox; my own copy is destroyed at the end of the term. None of it appears in the board minutes.
This is the thing nobody tells you about the role. The minutes are the public record; the questions are what the company secretary remembers; the actions are what the executive will be asked about next quarter. But the one-page private note is what gives the chair the second opinion they cannot otherwise get. A good NED writes a useful one. A great NED writes one that the chair quietly comes to depend on.
What is missing from this account
I have left out the bits a public blog post should leave out. The actual companies. The actual conversations. The actual incidents that became public in the months that followed and the ones that didn't. The names of the executives who were and weren't ready for the questions they got asked. None of those things are mine to share, and they are the part of the work that matters most — the bit that earns the next renewal.
What I can share is the structure. A specialist NED is paid to read carefully, ask sparingly, and write privately. Boards that mistake the visible part for the whole role are paying for the wrong thing.
If you are reading this as the chair of a UK or EU/EEA board considering whether you need a Cyber and AI NED, the category-defining essay goes deeper on what the discipline is. If you are weighing this against a fractional CISO, the comparison is on a separate page. The engagement process is straightforward, the fees are public, and the first call is free.