AI and the potential liability issues arising from use in a clinical setting

  • 6 August 2019

Artificial Intelligence (AI) is intruding more and more into modern life and is seen as a tool which could transform healthcare. But what about the risks that come with it? Andrew Rankin, a legal director at DAC Beachcroft LLP specialising in technology law and related data protection matters, and Christian Carr, a senior associate specialising in healthcare regulation, look at the potential legal issues surrounding AI in healthcare.

Artificial Intelligence, or “AI”, is a branch of computer science which attempts to build machines capable of intelligent behaviour. If AI can be thought of as the science, then machine learning can be thought of as the algorithms that enable machines to undertake certain tasks on an ‘intelligent’ basis. So the enabler for AI is machine learning.

AI has become pervasive in our lives, yet we are often blissfully unaware of how much it powers. From your smartphone to Google searches, online banking and Facebook, you may use systems deploying machine learning many times a day without even knowing it.

Outside of consumer use, people in many areas of industry are excited about the possible applications of AI – arguably nowhere more so than in medicine, which promises to harness its power to make clinical care better, faster and safer. Although AI is already embedded in many forms of medical technology, its use in front line clinical practice remains limited.

Currently, the largest area of application is medical diagnosis, where AI is of particular use in pattern recognition (i.e. detecting meaningful relationships in a data set): for example, in radiology and pathology. There is evidence to suggest that an AI computer programme can interpret radiography images and translate patient data into diagnostic information far faster than a human clinician, and with greater accuracy.
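To make the idea of pattern recognition concrete, the minimal sketch below trains a classifier on scikit-learn's bundled breast-cancer dataset, whose features are derived from digitised pathology images. It is an illustration only, not any supplier's actual product; the library, dataset and parameters are illustrative assumptions.

```python
# Minimal sketch of diagnostic pattern recognition, assuming scikit-learn.
# The bundled breast-cancer dataset contains features computed from digitised
# images of fine-needle aspirates; the model learns relationships between
# those features and the benign/malignant label.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print(f"Held-out accuracy: {accuracy_score(y_test, preds):.3f}")
```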

Ultimate responsibility 

With suppliers saying that AI systems can, in certain contexts, outperform health professionals, it is easy to see why the early promise of AI revolutionising the efficiency of healthcare delivery, in a sector which in the UK faces significant funding challenges, is so attractive.

However, when AI-enabled software systems take over aspects of healthcare involving a level of ‘intelligent’ assessment, it is important that the risks and corresponding potential liabilities are fully understood and managed appropriately.

When the outcome of medical treatment is not what was hoped for, patients may explore the avenues available to them for legal redress.  A question arising early on for patients with a misdiagnosis is, “who should I bring a claim against?”

As a matter of general practical application, ‘traditional’ negligence claims are almost always brought against the provider of care. The standard of care is judged against the Bolam test and will hinge on the question of whether the clinician exercised reasonable skill and care.  In general terms, health professionals will not be found negligent if they can prove they followed accepted medical practice.

The GMC says that doctors can delegate patient care to a colleague if they are satisfied that “the person providing care has the appropriate qualifications, skills and experience to provide safe care for the patient”, or will be appropriately supervised.

Responsibility for the overall care of the patient remains with the delegating doctor. It is unclear whether, over time, the use of AI-driven solutions will be treated as more akin to escalation to a senior, or referral to a specialist, rather than delegation. There is a lack of practical guidance on division of responsibilities.

The complications of AI

In addition, suppliers of AI solutions could be liable if they fail to exercise reasonable skill and care in performing their responsibilities: for example, where defects in the AI's performance arise from faulty implementation, or from flaws in software design or build. Liability could also arise under product liability legislation, where it will hinge on whether the product is “defective”.

From a systems analysis perspective, human error is generally the greatest source of risk in medical treatment, including through the incorrect use of clinical systems or their outputs. Such errors include errors in data analysis, accessing and overwriting the wrong patient record, configuration errors, poor quality and accuracy of data capture, misinterpretation of data by clinicians and user alert fatigue.

Moreover, nearly all AI solutions currently being developed are not intended to be deployed on a fully autonomous basis. So, even if the AI solution fails, resulting in patient harm (whether through a missed diagnosis or unnecessary treatment following a false positive), the clinician, as the person responsible for providing care for that patient, is still likely to be at risk of a negligence claim.

In such circumstances it is reasonable to assume the healthcare provider might consider bringing a separate action against the supplier of the failed AI solution.

As with any clinical computer system, establishing the liability of the supplier of an AI system will almost certainly be complicated. Serious untoward incidents resulting in legal claims often present complex evidential issues.

It is frequently difficult to apportion legal responsibility for the harm caused clearly to any one party. This is usually because a claim arises from a series of connected events occurring during the course of care, rather than from a single, catastrophic failure in the system of care.

Use of AI could play a key role in that chain of care.

Unknown territory and the ‘black box’ problem

As far as we are aware, the issue of whether a supplier of a computer system intended for clinical use should be liable for patient harm resulting from the supplier's negligence remains untested by the English courts. This is not to say, however, that clinical safety incidents or serious untoward incidents have not occurred where a clinical system was used as part of the combination of steps undertaken in a patient's care.

The use of AI further complicates matters: whilst the supplier of an AI product could be liable in the same way as any other supplier of software, new questions of liability will arise as AI products are deployed on a more autonomous basis.

The use of AI decision-making solutions could lead to what some are calling the ‘black box’ problem. Although the input data and output decision are known, the exact steps taken by the computer and software to reach the decision cannot always be fully retraced, making it difficult to see why the model made a particular decision or diagnosis (although there may be evidence of the staggered validation of the software at defined learning cycles).

If the outcome of a particular decision is wrong, it may be impossible to fully reconstruct the reason the software programme reached that outcome. One could argue that human decision making is equally opaque; however, after giving a diagnosis, the clinician can usually be asked to justify his or her decision, whereas this is not possible with an AI tool.
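The contrast can be illustrated with a small, hypothetical sketch: a logistic regression exposes weights that can be read off directly, whereas a neural network offers no such account, and post-hoc tools only approximate which inputs mattered. The code below assumes scikit-learn and its bundled dataset; it is illustrative, not a description of any clinical product.

```python
# Hypothetical illustration of the 'black box' problem using scikit-learn.
# A logistic regression exposes its coefficients directly; a neural network
# does not, and post-hoc tools such as permutation importance only
# approximate which inputs mattered, not the actual decision path.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Interpretable model: each coefficient states how a feature shifts the decision.
interpretable = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000)).fit(X, y)
print("Logistic regression coefficients:", interpretable[-1].coef_[0][:5], "...")

# Opaque model: thousands of weights, with no direct clinical explanation.
black_box = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
).fit(X, y)

# Post-hoc approximation of feature influence, not a reconstruction of the decision.
importance = permutation_importance(black_box, X, y, n_repeats=5, random_state=0)
print("Approximate feature importances:", importance.importances_mean[:5], "...")
```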

What should be the response?

How should law makers and government respond to these new challenges?

Part of the answer might lie in the evolution of the safety standards currently issued by NHS Digital pursuant to the Health and Social Care Act 2012: strict technical standards should reduce risk. There is likely to be closer working between the medical device regulator, the MHRA, and the bodies responsible for defining clinical standards. Standards could be mandated which reflect the principle that the more autonomous the learning, the greater the risk.

New insurance models for AI tools might also help to mitigate the risks of this powerful new technology. Suppliers could take out medical negligence insurance policies to protect themselves against these potential risks and liabilities, and specialist AI insurance products might begin to emerge to cater for this need; use of such cover could be mandated.

Should government step in? A third party regulator could assess the risk of each AI product and then grant the developer the right to commercialise it in return for an appropriate risk fee. Those fees would fund a pool from which claims could be paid.

It is also possible that regulators will decide that increasingly powerful AI solutions should be restricted to use as computer-aided detection tools, with the software operating as a ‘second read’ to identify issues that would otherwise have been missed, keeping human intelligence firmly in control of key medical assessments and decisions.

A more controversial proposal is that autonomous AI tools should be treated as persons, and could therefore be blamed for a wrong decision. Electronic personality would mean that software could be blamed, or held liable, for mistakes.

In October 2017, Saudi Arabia granted citizenship to a robotic AI system named ‘Sophia’, becoming the first country to give citizenship to a robot! Arguably, this was essentially a publicity stunt with few legal ramifications, but it has nevertheless given rise to new debate around the legal status of AI systems.

As far as we are aware, no legal system yet formally recognises legal personality in AI, although we believe this is under serious consideration by several states. In February 2017, the European Parliament adopted a resolution on civil law rules on robotics, asking the European Commission to study the possibility of creating a specific legal status for robots.

We think that in the short term suppliers of systems, increasingly powered by AI, will look at ways of minimising their potential liability by requiring that clinicians continue to have the key decision making role when using their machines.  The difficulties around ensuring that liability falls where it should are going to become increasingly prevalent where such systems are used.

6 Comments

  • William, yes that is somewhere between desirable and essential. If, as seems likely, a supplier insurance approach will be taken to limit the risks around adoption, then the insurers themselves will look much more favourably upon systems that are able to retain this history for investigation and learning purposes.

  • Good idea in principle William, but the better ML models are not easily decipherable: I recently built an ML model to identify potential admission factors which combines the output from a decision forest and a neural network model. Had I used a simple regression model then it would be easy to show the weightings of each variable in the prediction; however, my combination model provided a much better level of accuracy (as measured statistically) and hence was much better at identifying the right factors – so I chose this model on that basis, even though it would be almost impossible to decode how the model arrived at its decision.
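A minimal sketch of the kind of combined model described in the comment above, assuming scikit-learn; the data here is synthetic and the parameters are illustrative, so the figures will not match the commenter's results.

```python
# Hedged sketch of the combined model the comment describes: a random forest
# and a neural network ensembled by soft voting, compared with a plain
# logistic regression whose weights are directly readable.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for admission-factor data (the real data set is not public).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)

simple = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

ensemble = VotingClassifier(
    estimators=[
        ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("nnet", make_pipeline(StandardScaler(),
                               MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0))),
    ],
    voting="soft",  # average predicted probabilities from both models
)

print("Logistic regression AUC:", cross_val_score(simple, X, y, scoring="roc_auc", cv=5).mean())
print("Forest + neural net AUC:", cross_val_score(ensemble, X, y, scoring="roc_auc", cv=5).mean())
```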

  • It’s worth thinking of AI tools as just tools. AI tools would form part of a care pathway. A task delegated to a colleague would likely involve multiple activities (see patient, take history, examine, come up with logical differentials, investigate appropriately). Tasks for AI tools involve fewer activities – “make diagnosis”, “recommend treatment” etc. So liability should be focused around those tasks:
    1. Errors made by the clinician are the clinician's responsibility:
    a. Was the clinician trained to use the tool?
    b. Was it correct to use the tool?
    i. This may be moot if the decision was organisational, i.e. if clinicians are required to use the tool by their employer, liability becomes an organisational issue.
    c. Was the tool used correctly?
    i. Were the inputs correct?
    ii. Were the outputs understood?
    iii. Were the outputs used correctly?

    2. Errors made by the tool – these are the responsibility of the developer:
    a. Is there high quality, transparent performance data?
    b. What is the error rate, and what are the implications of an error?
    c. Does the tool report its uncertainty in its outputs – can the tool tell if it might be wrong? (A sketch of this idea follows the list.)
    d. How are errors monitored?
    e. Can an error be prevented in future – if so, how?
    f. Is good training material and support readily accessible?
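One way a tool might “tell if it might be wrong”, as item 2c above asks, is to report a calibrated probability and defer to a clinician below a threshold. The sketch below assumes scikit-learn; the threshold, dataset and model are purely illustrative.

```python
# Hypothetical sketch of an AI tool reporting uncertainty alongside its output,
# so that low-confidence cases are flagged for human review rather than
# silently returned as a diagnosis. Names and thresholds are illustrative.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Calibrate the classifier so its probabilities are a more honest measure of confidence.
model = CalibratedClassifierCV(RandomForestClassifier(n_estimators=200, random_state=0), cv=5)
model.fit(X_train, y_train)

REVIEW_THRESHOLD = 0.80  # illustrative cut-off below which the tool defers to a clinician

for probs in model.predict_proba(X_test[:5]):
    confidence = probs.max()
    if confidence < REVIEW_THRESHOLD:
        print(f"Deferred to clinician (confidence {confidence:.2f})")
    else:
        print(f"Prediction: class {probs.argmax()} (confidence {confidence:.2f})")
```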

  • As a patient, I would expect to be able to get an explanation when avoidable harm occurs – and at present, however much system issues might contribute, some individual health care professional is likely to be held responsible.
    Taking a hospital situation using an AI decision making app approved by the management (whether or not it has any support from the clinician/clinical team involved), would either the GMC or NMC consider that individuals should be struck off because of faulty decisions by AI – especially if use of the AI was effectively compulsory?

  • Interestingly, there was a recent launch of an eHealth insurance product providing affirmative cover for bodily injury arising from both the advice provided by companies and practitioners and bodily injury arising from technology failures and cyber events. Market-leading cyber and privacy cover is included as standard, and the policy also provides cover for technology Errors & Omissions, breach of contract, and, for wearables and self-monitoring healthcare devices, cover for failure to perform. I am sure that our insurance market will continue to develop flexible and innovative solutions in this fast moving arena.

  • The only answer to the black box problem is for the AI system to record, alongside its final decision/output, the “chain of thought” by which it arrived at its conclusion.

    As well as allowing the responsible humans to apportion possible blame, it would provide good feedback for the developers (or indeed the system itself, if sufficiently “intelligent”) to facilitate improvement, just as humans learn from their mistakes.

    William
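One way to implement the decision record described in the comment above is to persist, alongside each output, the inputs, model version and any available explanation artefacts in an append-only log. The sketch below is a hypothetical illustration; all field names and values are invented, not taken from any real clinical system.

```python
# Hedged sketch of a decision record: alongside each output, the system stores
# the inputs, model version and whatever explanation artefacts are available,
# so that an incident can be investigated later.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    patient_ref: str          # pseudonymised reference, not identifiable data
    model_version: str
    inputs: dict
    output: str
    confidence: float
    explanation: dict         # e.g. feature attributions or a rule trace
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(record: DecisionRecord, path: str = "decision_audit.jsonl") -> None:
    """Append the record to an append-only audit log for later review."""
    with open(path, "a") as audit_log:
        audit_log.write(json.dumps(asdict(record)) + "\n")

# Illustrative usage with invented values.
log_decision(DecisionRecord(
    patient_ref="pseudo-0042",
    model_version="chest-xray-model 1.3.0",
    inputs={"study_id": "CR-2019-0815", "view": "PA"},
    output="suspected left lower lobe consolidation",
    confidence=0.91,
    explanation={"top_regions": ["left lower zone"], "method": "saliency map"},
))
```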
