The Information Commissioner’s Office has confirmed that it is “aware” of concerns about a trust’s collaboration with Google DeepMind on an acute kidney injury (AKI) alert system.

In March, Google announced that its artificial intelligence offshoot, DeepMind, had launched a new division to work with NHS clinicians on developing technology to improve patient care.

At the same time, it revealed that it was working with the Royal Free London NHS Foundation Trust on an app called Streams.

Its first focus is to help clinical staff detect cases of acute kidney injury, which can affect up to one in six patients and is a significant cause of extended hospital stays and even death.

The move appeared uncontroversial until the New Scientist magazine revealed that the collaboration had involved giving the company “a huge haul of patient data”.

Specifically, it said the project involved information on 1.6 million patients who had passed through Barnet, Chase Farm and the Royal Free hospitals over five years. 

The New Scientist reported concerns from privacy group medConfidential that the information had been obtained without patient consent, and that its scope was excessive because the records covered far more than kidney function.

The concerns were widely picked up in the press and, according to Computer Weekly magazine, have led to a complaint to the ICO, arguing that the data should be pseudonymised and encrypted.

An ICO spokesperson told Digital Health News that it was “aware of it [the complaint]” and that it was “making inquiries”, although these did not amount to an investigation at this stage.

The trust has maintained that the collaboration does not require patient consent, since the information is being used for direct patient care, for which there is ‘implied’ consent.

“Throughout the NHS, patient data is routinely collected and processed by IT companies for the purpose of direct patient care under the principle of implied consent,” a Q&A on its website points out.

“Our agreement with DeepMind is our standard third-party data sharing agreement, with the trust being the data controller and DeepMind being the data processor. Over 1,500 third party organisations have undergone similar NHS information governance processes.”

The Q&A also seeks to reassure the public that the data is encrypted “in transit to, and at rest within the DeepMind cluster.”

The original New Scientist story said the data agreement it had obtained stated that the data could not be used for any other part of Google's business, that it would be stored in the UK by a third party, and that it would be deleted when the agreement expires in 2017.

The trust has also argued that a range of demographic and clinical data is needed to build alert algorithms, and that historical data is needed to spot relevant trends.

However, in updates on its blog, medConfidential has continued to argue that most patients will not benefit from the app, that the exact use to which the data is being put is unclear, and that transparency is needed to make sure that any tool that is developed is safe.

The New Scientist has also published a further article claiming that the project has failed to secure approval from the Confidentiality Advisory Group or the Medicines and Healthcare products Regulatory Agency (MHRA).

However, MHRA registration for apps is something of a grey area.

The Royal College of Physicians has argued that doctors should only use medical apps that carry a CE Mark, but its guidance has been widely criticised, and the point at which an app counts as a medical device has not been tested.

The MHRA's guidance says that decision support software is "likely" to fall within its scope if it applies "automated reasoning" in the form of guidance or calculations.

But it also says it "may not" if these "only provide information" to enable a health professional to make a clinical decision, as they "ultimately rely on their own knowledge."