
AI in the NHS: Governance, Not Hype, Will Determine Its Future

— By Dean Mawson, Founder and Clinical Director, DPM.

Artificial intelligence is no longer theoretical in the NHS. It is already embedded in everyday workflows — used to summarise clinic letters, assist triage models, support diagnostics, and streamline administration.

But when I ask frontline staff what they think about AI, they rarely bring up technical performance, instead focusing on their primary concern: responsibility.

There is still uncertainty around who is accountable for AI-generated outputs, what happens when outputs are wrong, what processes are in place to catch errors, and where liability sits when clinical decisions are influenced by machine-generated advice.

It is easy to write these concerns off as resistance to change; we often read about technical “laggards” in a negative light. But in reality, they are indicators of professional maturity, reflecting deep and proper attention to clinical safety, process, and responsibility.

Healthcare does not adopt technology on the basis of novelty. It adopts it on the basis of safety, accountability, and control.

And that is exactly where the challenge lies.

One thing I hear over and over again from clinicians is how AI is “used more than we think.” 

Ambient note takers, diagnostic support, risk stratification within the EPR – it’s all becoming widespread and accepted as part of the everyday. And we know that “shadow AI” – AI tools used without proper organisational oversight – is widespread, too. Clinicians can – and do – download AI tools, sign up for a free trial, and start using them. Of course, this circumvents proper clinical and information governance protocols, and has the potential to cause harm – sometimes without the clinician even realising.

Sanctioned or not, the point is that as AI becomes more embedded within workflows and systems, it is fading into the background, hidden away as infrastructure, and going increasingly unnoticed. It is, in effect, becoming invisible.

For many NHS professionals, this invisibility is unsettling — and rightly so. Unlike traditional digital systems, AI systems may learn, adapt, or behave probabilistically. Outputs may appear authoritative while still being fallible. Incorrect phrasing, misclassification, or bias may not be immediately obvious.

The concern is not so much that AI makes errors as whether organisations are equipped to anticipate, detect, and manage those errors systematically.

Human Behaviour Is a Risk Multiplier

Two well-recognised dynamics are already shaping clinical interaction with AI:

Automation bias — the tendency to over-trust automated outputs, particularly when presented with confidence.

Cognitive offloading — reliance on systems to perform analytical tasks, which may improve efficiency but can erode independent expertise over time.

These are not speculative risks; they are well documented and measurable. Only a few months ago, for example, a study published in The Lancet found that after just three months of using an AI-supported adenoma detection tool, clinicians’ independent detection accuracy declined significantly: when the system was unavailable, they missed more adenomas than they had before adopting it.

Technology changes behaviour. Governance must therefore address human interaction with AI — not just technical validation.

The NHS Has an AI-Governance Capability Problem

The NHS has robust mechanisms for managing digital clinical safety. Safety cases, hazard logs, clinical risk management plans, and post-deployment monitoring are standard practice. 

But adoption is inconsistent. In fact, 70% of digital health technologies in secondary care have been deployed without robust evidence of safety assurance. AI magnifies these existing inconsistencies in digital safety governance. 

It introduces:

  • Data drift and performance degradation
  • Bias across populations
  • Reduced explainability in complex models
  • Systems that evolve over time
  • Expanded data protection and cybersecurity exposure

These are not problems that can be solved once; they demand ongoing organisational capability. Structured governance aligned to emerging standards such as ISO 42001, BS 30440, and the NHS AI and data assurance frameworks will help prevent AI deployment from becoming fragmented and inconsistent across organisations.

The limiting factor is whether organisations are equipped to manage it safely at scale — not whether the technology works.

Governance Must Move From Concept to Operational Practice

AI will not replace clinicians, nor will it magically transform healthcare.

It will, however, reshape workflows, decision support, and operational efficiency. To do this safely, governance must be built, practised, and sustained.

All healthcare organisations understand that AI needs oversight, but far fewer have the structures, skills, and leadership confidence required to implement that oversight consistently.

Effective AI governance is an organisational function, and making it a reality requires:

  • Clearly defined intended use 
  • Clear accountability and decision-making authority
  • Embedded risk assessment within clinical workflows, including explicit identification of AI-specific hazards
  • Ongoing bias and performance monitoring mechanisms
  • Staff who understand both technical and clinical implications
  • Leadership capable of balancing innovation with safety

These are extensions of established patient safety doctrine. Clinical risk management needs to evolve to incorporate AI characteristics, but its foundational principles remain unchanged: foresee hazards, mitigate risk, monitor performance, and maintain accountability.

To Succeed, Organisations Must Build Capability Early

AI will continue to expand across healthcare. That is not in question.

The real dividing line will be between organisations that merely deploy AI and those that control it: organisations willing and able to treat AI as a clinical system, subject to the same rigour as any other intervention.

Those that invest early in governance capability — training, structured risk management, operational frameworks, and clinical safety leadership — will be able to adopt AI confidently, safely, and at pace. They will also feel the benefits sooner.

Those that do not will face slower adoption, higher risk exposure, and increasing regulatory scrutiny.

But the future of AI tools in the NHS will be determined not only by whether organisations develop the practical capability to govern these tools, but also by the strength of national governance frameworks and their flexibility in a changing landscape.

The work to develop both must start now.


Many healthcare organisations are deploying AI faster than they can safely govern it. And when something goes wrong, responsibility still sits with named individuals – like Clinical Safety Officers (CSOs).

To help address the gap, DPM has launched two one-day training courses designed specifically for professionals responsible for safety-critical systems. Each course provides practical frameworks and tools to apply safety thinking to AI with confidence.

  1. AI Governance
    Focuses on organisational control and accountability – equipping participants to oversee AI safely and responsibly in real healthcare environments.
  2. AI Risk Management
    Covers technical safety assessment and hazard control – building the specialist skills needed to assess AI.

Both courses are led by DPM’s Founder and Clinical Director, Dean Mawson. He has 30+ years of experience in frontline care, healthtech project delivery, and clinical safety leadership – making him ideally placed to support organisations across the NHS and digital health sector as they build AI capability through a safety lens.

Register your interest for early access: https://wkf.ms/4ajTEWJ
