NHS in England to trial new approach to AI biases in healthcare

8 February 2022

The NHS in England is to trial a new approach to the ethical adoption of artificial intelligence (AI) in healthcare with the aim of eradicating biases.

Designed by the Ada Lovelace Institute, the Algorithmic Impact Assessment (AIA) will require researchers and developers to assess the possible risks and biases of AI systems to patients and the public before they can access NHS data.

The trial will also encourage researchers and developers to engage patients and healthcare professionals at an early stage of AI development, when there is greater flexibility to make adjustments and respond to concerns.

It is hoped this will lead to improvements in patient experience and the clinical integration of AI.

It is also anticipated that, in future, the AIA could increase the transparency, accountability and legitimacy of the use of AI in healthcare.

Octavia Reeve, interim lead at the Ada Lovelace Institute, said: “Algorithmic impact assessments have the potential to create greater accountability for the design and deployment of AI systems in healthcare, which can in turn build public trust in the use of these systems, mitigate risks of harm to people and groups, and maximise their potential for benefit.

“We hope that this research will generate further considerations for the use of AIAs in other public and private-sector contexts.”

The Algorithmic Impact Assessment complements ongoing work by the ethics team at the NHS AI Lab to ensure that datasets for training and testing AI systems are diverse and inclusive. The lab was first announced in 2019, with the government pledging £25 million to improve diagnostics and screening in the NHS.

Brhmie Balaram, head of AI research and ethics at the NHS AI Lab, added: “Building trust in the use of AI technologies for screening and diagnosis is fundamental if the NHS is to realise the benefits of AI. Through this pilot, we hope to demonstrate the value of supporting developers to meaningfully engage with patients and healthcare professionals much earlier in the process of bringing an AI system to market.

“The algorithmic impact assessment will prompt developers to explore and address the legal, social and ethical implications of their proposed AI systems as a condition of accessing NHS data. We anticipate that this will lead to improvements in AI systems and assure patients that their data is being used responsibly and for the public good.”

