Four projects have received a share of £1.4 million to use artificial intelligence to address racial and ethnic health inequalities.

The funding, a joint programme between the NHSX AI Lab and the Health Foundation, aims to ensure healthcare solutions don’t “exacerbate existing health inequalities”.

The four projects range from using artificial intelligence (AI) to investigate disparities in maternal health outcomes, to developing standards and guidance to ensure that datasets for training and testing AI systems are inclusive and generalisable.

Dr Indra Joshi, director of the AI Lab at NHSX, said: “As we strive to ensure NHS patients are amongst the first in the world to benefit from leading AI, we also have a responsibility to ensure those technologies don’t exacerbate existing health inequalities.

“These projects will ensure the NHS can deploy safe and ethical artificial intelligence tools that meet the needs of minority communities and help our workforce deliver patient-centred and inclusive care to all.”

Speaking exclusively to The Guardian today (October 20), health secretary Sajid Javid said he was committed to “removing barriers” in the NHS.

“As the first health and social care secretary from an ethnic minority background, I care deeply about tackling the disparities which exist within the healthcare system. As we recover from the pandemic we have an opportunity for change, to level up, and ensure our NHS is meeting the needs of everyone,” he said.

Experts have warned that BAME people often have poorer health outcomes than the wider population. The Covid-19 pandemic highlighted these disparities, taking a disproportionate toll on these communities.

Early in the pandemic the government was heavily criticised for failing to adequately collect BAME data relating to Covid.

Following a rapid review of how gender and ethnicity impact health outcomes for Covid-19, Public Health England (now the UK Health Security Agency and Office for Health Improvement and Disparities) called for collection and recording of ethnicity data to be mandated as part of routine NHS and social care data collection.

The review found people of Bangladeshi ethnicity had around twice the risk of death from Covid compared to white British people.

Other BAME groups were disproportionately affected, with people of Chinese, Indian, Pakistani, other Asian, Caribbean and other black ethnicity having a 10-50% higher risk of death from the virus compared to white people.

The new initiative between NHSX and the Health Foundation is also exploring how to address algorithmic risks, in partnership with the Ada Lovelace Institute.

Brhmie Balaram, head of AI research and ethics at NHSX, said: “Artificial intelligence has the potential to revolutionise care for patients, and we are committed to ensuring that this potential is realised for all patients by accounting for the health needs of diverse communities.”

Josh Keith, senior fellow at the Health Foundation, added: “Data-driven technology is having a profound impact on our health and health care system, but we need to focus on making sure the impacts are positive, so that everyone’s health and care benefits.

“We hope the projects being supported through this partnership can make an important contribution to this – helping ensure the advancement of AI-driven technologies improves health outcomes for minority ethnic populations in the UK.”

The projects funded include:

  • University of Westminster – Aims to raise the uptake of screening for STIs/HIV among minority ethnic communities through an automated AI-driven chatbot which provides advice about sexually transmitted infections. The research will inform the development and implementation of chatbots designed for minority ethnic populations in public health more widely and within the NHS.
  • Loughborough University – Aims to use AI to improve the investigation of factors contributing to adverse maternity incidents among mothers from different ethnic groups. The research will provide a way of understanding how a range of causal factors combine, interact and lead to maternal harm, and make it easier to design interventions that are targeted and more effective for these groups.
  • St George’s, University of London and Moorfields Eye Hospital – Aims to ensure that AI technologies that detect diabetic retinopathy work for all, by validating the performance of AI retinal image analysis systems that will be used in the NHS Diabetic Eye Screening Programme (DESP) in different subgroups of the population. Alongside this, the perceptions, acceptability and expectations of health care professionals and people with diabetes will be evaluated in relation to the application of AI systems within the North East London NHS DESP. It will provide evidence of effectiveness and safety prior to potential commissioning and deployment within the NHS.
  • University Hospitals Birmingham NHS Foundation Trust – Will lead STANDING Together, an international consensus process to develop standards for datasets underpinning AI systems, to ensure they are diverse, inclusive and can support development of AI systems which work across all demographic groups. The resulting standards will help inform regulators, commissioners, policy-makers and health data institutions on whether AI systems are underpinned by datasets which represent everyone and don’t risk leaving minority groups behind.