Digital Health News reporter Laura Stevens explores how the brave new world of artificial intelligence is now being applied to healthcare, the huge potential opportunities and the new ethical and privacy challenges it raises.
The unsettling yet fascinating power of artificial intelligence is a favourite dystopian trope for film-makers. From robots taking over the world to falling in love with an operating system, the future seems disconcertingly jam-packed with this particular technology.
However, stepping back from Hollywood into the world of the NHS, how much do these fantastic scenarios relate to healthcare reality?
Firstly, while it may not be a mature technology, AI is definitely not a tool from the future; it’s in use right now, allowing researchers to analyse vast amounts of data and to replicate clinicians’ professional opinions.
The computational power of AI has been demonstrated in dermatology, cardiology and cancer research, where its analysis has provided unbiased support to clinical opinion.
Secondly, there are huge challenges facing the introduction of this cutting-edge technology into the health service, from creaky IT infrastructure and unverified data to patient confidentiality.
The altruistic power of AI
Nature recently published the results of a Stanford University study that found algorithms matched dermatologists when identifying skin cancer in photographs. The machine-learning algorithm was trained on 129,450 images of 2,032 different diseases, and when tested against 21 clinicians it achieved “performance on par with all tested experts”.
Roberto Novoa, a clinical assistant professor at the university and co-author of the study, said that while further research was needed as it was a “proof of concept study”, there is “significant potential for AI within dermatology”. The possibilities chiefly lie with smartphones being able to “dramatically improve access to life-saving medical care”.
The study said that the technology can “potentially provide low-cost universal access to vital diagnostic care” through mobiles, meaning there is “the potential to profoundly expand access to vital medical care”.
Brett Kuprel, a fellow co-author on the study, described the “automated diagnosis of skin cancer” as having the power to “help people in rural communities and poor countries who may not have access to premium healthcare”.
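The Stanford study compared the algorithm and the dermatologists on how well each caught malignant lesions while correctly clearing benign ones. As a toy illustration of how that kind of comparison is scored (the labels and predictions below are invented for the example, not taken from the study):

```python
# Toy scoring of a skin-lesion classifier against ground-truth labels:
# sensitivity = fraction of malignant cases caught,
# specificity = fraction of benign cases correctly cleared.
def sensitivity_specificity(truth, predicted):
    tp = sum(1 for t, p in zip(truth, predicted) if t == "malignant" and p == "malignant")
    fn = sum(1 for t, p in zip(truth, predicted) if t == "malignant" and p == "benign")
    tn = sum(1 for t, p in zip(truth, predicted) if t == "benign" and p == "benign")
    fp = sum(1 for t, p in zip(truth, predicted) if t == "benign" and p == "malignant")
    return tp / (tp + fn), tn / (tn + fp)

# Invented example data: five biopsied lesions and a classifier's calls.
truth     = ["malignant", "malignant", "benign", "benign", "benign"]
predicted = ["malignant", "benign",    "benign", "benign", "malignant"]
sens, spec = sensitivity_specificity(truth, predicted)
# Here the classifier catches 1 of 2 malignant cases (sensitivity 0.5)
# and clears 2 of 3 benign cases (specificity ~0.67).
```

A clinician or algorithm that trades one measure off against the other traces out a curve, and it was against such a curve that the study judged the algorithm “on par” with the experts.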
Predicting when you die
Another AI trial that made headlines recently was the MRC London Institute of Medical Sciences’ research into computers predicting with 80% accuracy when a patient with a heart disorder will die.
The software used advanced image processing to build up a virtual 3D heart (as shown below), which, when combined with eight years’ worth of patient data, could predict survival rates.
Declan O’Regan from the institute led the research and explained the team studied patients with pulmonary hypertension, which often affects young people and rapidly leads to heart failure. “For the treatment what is important to know is the risk that an individual patient won’t survive 12 months”, he explained.
However, these predictions can be difficult given the number of tests available and the challenge of knowing what weight to give to each, so “that was the motivation for using this AI approach”, as many different tests could be interpreted simultaneously and very rapidly.
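Weighing many test results at once is, at its simplest, a matter of combining them into a single risk score. The sketch below is purely illustrative: the feature names and weights are invented for the example, and the MRC team learned its model from 3D cardiac-motion data rather than from a hand-built formula like this.

```python
import math

# Invented, illustrative weights -- NOT the MRC study's learned model.
WEIGHTS = {
    "right_ventricle_strain": 1.8,
    "six_min_walk_deficit": 0.9,
    "bnp_level": 1.2,
}
BIAS = -2.0  # baseline log-odds when all measurements are zero

def twelve_month_risk(tests):
    """Combine several test results into one 12-month mortality risk.

    A weighted sum of the measurements is passed through a logistic
    function, mapping it to a probability between 0 and 1.
    """
    score = BIAS + sum(WEIGHTS[name] * value for name, value in tests.items())
    return 1 / (1 + math.exp(-score))

risk = twelve_month_risk({
    "right_ventricle_strain": 1.0,
    "six_min_walk_deficit": 0.5,
    "bnp_level": 0.25,
})
```

The point of the machine-learning approach is that the weights (and far richer inputs, such as motion patterns across thousands of points on the heart) are learned from outcome data rather than chosen by hand.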
AI doing research humans could never do
The sheer power of AI to process vast quantities of information is something also noted by Chris Bakal, a team leader at the Institute of Cancer Research. While for decades decision making and interpretation have been done by humans, “now AI allows us to take this information and make decisions using an unbiased way and using quantitated information”, he said.
“I think that information is going to have to be processed by AI because it’s literally so much information, so complicated, that humans can’t do it.”
But while the processing can be done by technology and it is likely to be an aid to decision making shortly, Bakal is clear that “for at least a long time, the clinician is going to have the final decision”.
“Artificial barriers” for AI in healthcare
As AI relies on learning from huge amounts of data, it needs access to that data. For O’Regan, this is where the challenge lies: confidential information has to be linked to the companies that can analyse it.
“We need to break down some of the artificial barriers that might prevent machine learning being used more in clinical work”, O’Regan said. “There are issues around confidentiality which are important to maintain, but it’s finding smart solutions that can enable machine learning to be used in healthcare”.
The DeepMind and Royal Free debacle
Mention AI and the NHS, and you can’t miss out the controversy that’s stalked the Royal Free London NHS Foundation Trust and Google DeepMind’s work on its acute kidney injury app, ‘Streams’, despite the app being billed as not using AI. New Scientist revealed in May last year the partnership had involved giving the company “a huge haul of patient data”.
As a result there was a huge public backlash and an ongoing investigation by the Information Commissioner’s Office. However, the Royal Free stuck to its guns and confirmed a five-year deal with DeepMind in December last year.
DeepMind has also been involved with other NHS trusts. These include Imperial College Healthcare NHS Trust to deploy Streams; University College London Hospitals NHS Foundation Trust in a research partnership for head and neck cancer; and at Moorfields Eye Hospital NHS Foundation Trust to apply machine learning algorithms to automatically detect and segment eye scans.
Dodgy data and shaky infrastructure
Patient confidentiality is not the only issue, however. Owen Johnson, a senior teaching fellow in computing at the University of Leeds, said there is a huge problem with implementing this technology in the NHS.
“The NHS has underinvested in its core infrastructure, and it needs to invest in its core infrastructure, as it cannot keep putting smart technology on top of shaky technology”, he said.
The fragility of IT infrastructure is a common refrain across the NHS. Just this month, St George’s University Hospitals NHS Foundation Trust reported a lack of investment has “resulted in an ‘end of life’ infrastructure that is likely to fail and result in catastrophic implication for the Trust in terms of corporate and clinical systems failures”.
In December, Johnson’s local hospital, Leeds Teaching Hospitals NHS Trust, reported that 30 of its 300 most critical IT systems and archived records “may fail without warning” because they are held on old systems with insufficient data storage and computing power.
Bakal agreed with Johnson’s concerns about infrastructure. For AI, “the computational infrastructure is quite heavy and so there’s no way it’s going to be at most clinics in the NHS”, he said. To counter this, Bakal said the power of cloud computing could be utilised.
Johnson also pointed out that “the data that AI is basing its work on is fallible through ordinary human error and practice”. While the data may be safe for clinical practice, “that doesn’t necessarily mean that reusing that data for an AI engine can be done safely or reliably”, he said.
An extra pair of “belts and braces”
For most of the AI experts I spoke to, the conclusion was that AI will shortly be in use in a clinical setting, but as an aid to decision making. As Johnson described it: “an additional pair of belts and braces”.
But there is an inescapability to the impact of AI on healthcare, says O’Regan: “to be able to fully exploit the increasingly rich information that’s available about patients’ health, I think it’s inevitable really, we’re going to have to use computers more often to make better sense of the data”.