Line ‘needs to be drawn’ to establish accountability in healthcare AI, report warns

20 February 2019

Artificial intelligence is “rapidly developing” and a line “needs to be drawn” to establish accountability between clinicians and technology, a report has warned.

Clinicians may find themselves wrongly placing more trust in decisions made by AI than in their own judgement, the Academy of Medical Royal Colleges report found.

The organisation called for greater recognition of accountability for harm caused by faulty content or by incorrect operation.

“Technology companies are currently focusing on AI that will support clinicians, rather than replace clinical judgement – implying that accountability for mistakes remains with the clinician,” the report said.

“But a line needs to be drawn between accountability for content and for operation. A clinician might be accountable for not using an algorithm or device correctly, but in the event of harm being caused by incorrect content rather than improper use, then the accountability must lie with those who designed and then quality assured it.”

The report, commissioned by NHS Digital, also warned that AI is not going to solve all the problems facing the healthcare sector.

It looked at the clinical, ethical and practical concerns surrounding AI in the UK's health and social care system, for both current and future AI systems – with mention of Babylon Health and Google's DeepMind.

Inadequate input data would lead to “inappropriate” results from AI systems, the report said, but it is not clear whether the NHS or technology companies would take responsibility for any mistakes made.

The rise of AI may also lead to greater workloads for accountable officers and chief information officers as the need to establish accountability increases.

Dr Mobasher Butt, Babylon Health’s chief medical officer, said the AI offered by the health tech company is regularly reviewed by its clinical team to ensure quality and accountability.

“As part of our governance processes, we have a Clinical AI Governance Committee that meets regularly to review relevant cases and to regularly audit for quality assurance while all work carried out by our people is frequently peer reviewed by our clinical team,” he said.

“Additionally, as we design our AI healthcare solutions, we ensure that everything is fully interpretable, allowing us to go back and review any cases where an incident of any sort may have occurred.”

The risk of bias is also raised in the report, with researchers cautioning AI can “learn the wrong values” and could even amplify human prejudices.

“If the training data isn’t representative, or the goals inappropriately chosen, then the resulting AI tool could be deeply inequitable,” the report stated.

To avoid discrepancies like this, Babylon uses so-called generative machine learning models, which can learn from data and be constructed using prior knowledge from experts, Dr Butt said.

But he cautioned that the challenge of bias is not unique to AI.

“Human doctors can unconsciously incorporate bias into their everyday clinical decision-making,” he added.

“The use of generative models, which carefully encode principled causal assumptions (encoded in prior knowledge), ensures that our assumptions are defined openly.

“This provides interpretability to our models, thereby facilitating interrogation of our model where necessary.”
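As an illustration of the kind of approach Dr Butt describes, the sketch below shows a toy generative (Bayesian) model in plain Python. The condition names, prior probabilities and symptom likelihoods are invented purely for illustration and are not Babylon Health's actual model or data; the point is that every assumption driving the output is written down explicitly, so it can be inspected and questioned.

# Illustrative sketch only: a toy generative (Bayesian) diagnostic model.
# All numbers and condition names below are invented for illustration.

# Prior beliefs about each condition, stated explicitly so they can be
# reviewed and challenged ("assumptions are defined openly").
PRIOR = {
    "flu": 0.05,
    "common_cold": 0.20,
    "healthy": 0.75,
}

# Causal assumption: each condition generates symptoms with these probabilities.
P_SYMPTOM_GIVEN_CONDITION = {
    "flu": {"fever": 0.90, "cough": 0.80},
    "common_cold": {"fever": 0.10, "cough": 0.70},
    "healthy": {"fever": 0.01, "cough": 0.05},
}


def posterior(observed_symptoms):
    """Invert the generative model with Bayes' rule to rank conditions."""
    scores = {}
    for condition, prior in PRIOR.items():
        likelihood = 1.0
        for symptom, present in observed_symptoms.items():
            p = P_SYMPTOM_GIVEN_CONDITION[condition].get(symptom, 0.0)
            likelihood *= p if present else (1.0 - p)
        scores[condition] = prior * likelihood
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}


if __name__ == "__main__":
    # A patient reporting fever and cough; every number that drove the ranking
    # is visible above, which is what makes the model interpretable.
    print(posterior({"fever": True, "cough": True}))

Because the priors and likelihood tables are explicit, an incorrect result can be traced back to a specific assumption rather than to an opaque learned weight.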

Google DeepMind declined to comment.
