NHS in England to trial new approach to AI biases in healthcare

  • 8 February 2022

The NHS in England is to trial a new approach to the ethical adoption of artificial intelligence (AI) in healthcare with the aim of eradicating biases.

Designed by the Ada Lovelace Institute, the Algorithmic Impact Assessment (AIA) will require researchers and developers to assess the possible risks and biases of AI systems to patients and the public before they can access NHS data.

The trial will also encourage researchers and developers to engage patients and healthcare professionals at an early stage of AI development, when there is greater flexibility to make adjustments and respond to concerns.

It is hoped this will lead to improvements in patient experience and the clinical integration of AI.

It is also anticipated that, in future, AIAs could increase the transparency, accountability and legitimacy of the use of AI in healthcare.

Octavia Reeve, interim lead at the Ada Lovelace Institute, said: “Algorithmic impact assessments have the potential to create greater accountability for the design and deployment of AI systems in healthcare, which can in turn build public trust in the use of these systems, mitigate risks of harm to people and groups, and maximise their potential for benefit.

“We hope that this research will generate further considerations for the use of AIAs in other public and private-sector contexts.”

The Algorithmic Impact Assessment complements ongoing work by the ethics team at the NHS AI Lab to ensure that datasets for training and testing AI systems are diverse and inclusive. The lab was first announced in 2019, with the government pledging £25 million to improve diagnostics and screening in the NHS.

Brhmie Balaram, head of AI research and ethics at the NHS AI Lab, added: “Building trust in the use of AI technologies for screening and diagnosis is fundamental if the NHS is to realise the benefits of AI. Through this pilot, we hope to demonstrate the value of supporting developers to meaningfully engage with patients and healthcare professionals much earlier in the process of bringing an AI system to market.

“The algorithmic impact assessment will prompt developers to explore and address the legal, social and ethical implications of their proposed AI systems as a condition of accessing NHS data. We anticipate that this will lead to improvements in AI systems and assure patients that their data is being used responsibly and for the public good.”

