The author of a report exploring how Artificial Intelligence (AI) could be used in healthcare has argued “making sure the ethics are built in” will be vital to the technology’s successful application in the NHS.
Eleonora Harwich, head of digital and technological innovation at think tank Reform, co-authored “Thinking on its own: AI in the NHS”. She told Digital Health News that she believes AI technology is “absolutely incredible” and can have “huge benefits for healthcare”.
“AI has been around in the health system for years and has shown very positive results,” Harwich said. “What is new is things like machine learning, which has the capacity to help make the NHS more efficient.”
But the report argues that “public safety and ethical concerns relating to the use of AI in the NHS” should be a central matter for a number of governing bodies including the National Institute for Health and Care Excellence (NICE).
Patient safety is another issue raised in the report, given that healthcare is a “high risk area”.
“The impact of a mistake could have profound consequences on a person’s life,” the report adds.
“AI systems are not infallible and are not devoid of biases.
“It is important for current regulations to be updated to make sure that the applications of AI in healthcare lead to a better and more efficient NHS, which reduces variations in the quality of care and healthcare outcomes.”
The use of AI technology in the NHS made headlines when the Information Commissioner’s Office (ICO) concluded that an AI trial between the Royal Free NHS Foundation Trust and Google Deepmind did not comply with the Data Protection Act.
The regulator ruled the trust failed to comply when it provided details of 1.6 million patients to test Streams, an alerting system for acute kidney injury.
Harwich said Deepmind and the Royal Free’s response was “quite courageous”.
“They [Deepmind] owned up and said we should never have taken the data, and the Royal Free said they should never have offered it,” she added.
“That type of attitude is very good.”
Harwich told Digital Health she had long wanted to put together a report on the use of AI in the NHS, and began pulling the Reform report together in April 2017.
“I sort of stumbled on it by accident, though exploring the idea of how AI could be used in healthcare was always something I had wanted to do,” she added.
In response to the report and in particular on the topic of health data, NHS Digital issued a statement encouraging partners to actively take part in the evaluation of how it is used.
“We need to make sure that the data provided for use in AI algorithms is designed with the best interests of patients at the forefront of all decision making,” said NHS Digital’s director of data, Professor Daniel Ray.
“In specialist areas AI has great potential for success and there are good examples of this starting to happen in the NHS, but we need to understand and evaluate this to move it forwards.
“We know that health data is personal and sensitive, so there are rightly strict rules in place about how and when it can be used or shared. We need to ensure that any new developments harness the power of data but that they do so responsibly and within the legal frameworks.”
The 16 recommendations in the report were:
- NHS Digital and the 44 Sustainability and Transformation Partnerships (STPs) should consider producing reviews outlining how AI could be appropriately integrated to deliver service transformation and better outcomes for patients at a local level.
- NHS England and the National Institute for Health and Care Excellence should set out a clear framework for the procurement of AI systems.
- The NHS should continue its efforts to fully digitise its data and ensure that, moving forward, all data is generated in a machine-readable format.
- NHS England and the National Institute for Health and Care Excellence should consider including the user-friendliness of IT systems in the procurement process of data collection systems.
- NHS Digital should make submissions to the Data Quality Maturity Index mandatory, to enable better monitoring of data quality across the healthcare system.
- In line with the recommendation of the Wachter review, all healthcare IT suppliers should be required to build interoperability into their systems from the start, allowing healthcare professionals to migrate data from one system to another.
- NHS Digital should commission a review seeking to evaluate how data from technologies and devices outside of the health and care system, such as wearables and sensors, could be integrated and used within the NHS.
- NHS Digital, the National Data Guardian and the ICO, in partnership with industry, should work on developing a digital and interactive solution, such as a chatbot, to help stakeholders navigate the NHS’s data flow and information governance framework.
- NHS Digital should create a list of training datasets, such as clinical imaging datasets, which it should make more easily available to companies that want to train their AI algorithms to deliver better care and improved outcomes.
- The Department of Health and the Centre for Data Ethics and Innovation should build a national framework of conditions upon which commercial value is to be generated from patient data in a way that is beneficial to the NHS.
- The Medicines and Healthcare products Regulatory Agency and NHS Digital should assemble a team dedicated to developing a framework for the ethical and safe application of AI in the NHS.
- Every organisation deploying an AI application within the NHS should clearly explain on its website the purpose of the application, what type of data is being used, how it is being used and how anonymity is protected.
- The Medicines and Healthcare products Regulatory Agency should require, as part of its certification procedure, access to data pre-processing procedures and training data.
- The Medicines and Healthcare products Regulatory Agency, in partnership with NHS Digital, should design a framework for testing for biases in AI systems. It should apply this framework to testing for biases in training data.
- Tech companies operating AI algorithms in the NHS should be held accountable for system failures in the same way that other medical device or drug companies are held accountable under the Medicines and Healthcare products Regulatory Agency framework.
- The Department of Health, in conjunction with the Care Quality Commission and the Medicines and Healthcare products Regulatory Agency, should develop clear guidelines on how medical staff should interact with AI decision-support tools.