The need for healthcare professionals to share data is growing, but so is the need for appropriate cyber security. Matt Lock, technical director at Varonis, explores the security issues that must be considered as a priority.
The ability for healthcare professionals to share data, at scale, to advance cutting-edge medical research, has never been more important. The quicker, easier and more cost-effectively this access can be granted, the better. In this vein, the NHS is creating seven new “data hubs” of patient information that can be accessed by researchers around the world in the hope of finding cures for deadly diseases and ailments.
While this data might be a treasure trove for those doing medical research, it is also one for unscrupulous cybercriminals wanting to use the data for their own nefarious purposes. These data hubs also raise privacy concerns around patients’ data, particularly in relation to the GDPR.
The data hubs have been created by Health Data Research (HDR) UK, which is a collaboration between 10 different organisations including the National Institute for Health Research. Each of the seven hubs will contain information about different illnesses and conditions. For example, DATA-CAN will give cancer researchers access to information about those who have been treated for cancer. Up to 120 different research groups, such as Cancer Research UK and several universities, will have access to relevant patient data for the area they are studying.
Given that this is some of the most sensitive information about millions of people in the UK, and that it will be accessed by potentially thousands of different researchers, there are a number of security issues that must be considered as a priority.
HDR has assured patients that all data will be anonymised. Personally identifiable information, such as name, patient number and address, will be removed before the data enters the hub to minimise the risks associated with identifiable data. The thinking is that anonymised data, in the event of a breach, will be of no use to cybercriminals. Anonymising data means the hubs will not fall under the jurisdiction of the GDPR, as anonymised data is exempt.
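In practice, this kind of anonymisation amounts to stripping direct identifiers from each record before it is shared. The sketch below illustrates the idea; the field names and the record are invented for illustration and do not reflect HDR UK's actual pipeline.

```python
# Illustrative sketch only: strip direct identifiers from a patient
# record before it enters a data hub. Field names are hypothetical.

DIRECT_IDENTIFIERS = {"name", "nhs_number", "address"}

def anonymise(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

record = {
    "name": "Jane Doe",
    "nhs_number": "485 777 3456",
    "address": "1 Example Street",
    "diagnosis": "C50",    # ICD-10 code
    "age_band": "40-49",
}
print(anonymise(record))  # only diagnosis and age_band remain
```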
However, resourceful threat actors can potentially de-anonymise the data by cross-referencing different sets of information, as a study by Belgium’s Université Catholique de Louvain (UCLouvain) and Imperial College London discovered. As such, any organisation wishing to share personal data cannot rely upon anonymisation in isolation to protect the identities of those whose medical records are added to the data hubs. Those handling anonymised data should therefore use the same controls as they would if the information was identifiable.
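To see why anonymisation alone is fragile, consider how a linkage attack works: even with names removed, the remaining "quasi-identifiers" (age band, sex, dates) can be matched against a second data source that does carry names. The toy example below uses entirely invented data and is only a sketch of the technique, not the method from the UCLouvain and Imperial study.

```python
# Toy linkage attack: join an "anonymised" dataset to a second source
# on shared quasi-identifiers. All data here is invented.

anonymised = [
    {"age_band": "40-49", "sex": "F", "admission": "2019-03-12", "diagnosis": "C50"},
    {"age_band": "60-69", "sex": "M", "admission": "2019-03-12", "diagnosis": "I21"},
]

# e.g. a public record or leaked list that still carries names
public = [
    {"name": "Jane Doe", "age_band": "40-49", "sex": "F", "admission": "2019-03-12"},
]

QUASI_IDENTIFIERS = ("age_band", "sex", "admission")

def key(row: dict) -> tuple:
    """Build a join key from the quasi-identifier columns."""
    return tuple(row[k] for k in QUASI_IDENTIFIERS)

lookup = {key(p): p["name"] for p in public}
for row in anonymised:
    name = lookup.get(key(row))
    if name:
        print(f"{name} can be linked to diagnosis {row['diagnosis']}")
```

A unique combination of just a few such attributes is often enough to single out one person, which is why anonymised data still warrants full access controls.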
Ensuring healthy access
One of the most important measures any organisation can take to reduce its risk profile is to place controls around data access. If there is unrestricted access to the data, it will be much easier for a threat actor to steal sensitive information without having to escalate their access permissions.
Clearly, checks and measures need to be in place to ensure that only those required to use the data specifically for research have access. A policy of “least privilege” should be introduced, where users can access only the information required for their role. In the case of medical research this should be based upon data sets relating to specific indicators, such as type of cancer, demographics or type of treatment. Such access must be controlled through the use of credentials, such as a username and password, along with another form of authentication, for example a code sent via text message.
It is worth bearing in mind that this data is going to be used by potentially thousands of researchers across many different institutions, each of which will use different systems and processes for data handling. Therefore, those running the data hubs must perform due diligence and make sure anyone accessing the information is trained to minimise the risks of a data breach. This includes basic best practices, from knowing how to spot a phishing scam and practising good password hygiene, to keeping credentials private and locking workstations when not in use.
To this end, passwords should be set centrally to expire after a certain period of time. This encourages users to change their passwords regularly and mitigates the risk of stolen credentials being used in the long term. It also means that former employees will soon be denied access in the event that an administrator neglects to remove their permissions. However, a surprising number of organisations do not do this: research from Varonis covering 758 businesses showed that 38 percent of users had passwords that never expired.
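A central expiry policy boils down to a simple age check against each account's last password change. The sketch below assumes a 90-day window, which is an illustrative policy choice on my part, not a figure from the article.

```python
# Sketch of a central password-expiry check. The 90-day maximum age
# is an assumed policy value for illustration.
from datetime import date, timedelta

MAX_PASSWORD_AGE = timedelta(days=90)

def password_expired(last_changed: date, today: date) -> bool:
    """True if the password is older than the maximum allowed age."""
    return today - last_changed > MAX_PASSWORD_AGE

print(password_expired(date(2019, 1, 1), date(2019, 6, 1)))  # True: older than 90 days
print(password_expired(date(2019, 5, 1), date(2019, 6, 1)))  # False: 31 days old
```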
Creating extra-long passwords should also be common practice: each additional character multiplies the number of possible combinations, so a password becomes exponentially harder to crack as it grows. For instance, a six-character password has around 200 billion possible combinations, while a password of 14 characters offers around 17 septillion, which would take more than a lifetime to crack through brute force.
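The arithmetic behind these figures is simply charset size raised to the power of the password length; the exact totals depend on which character set is assumed. The snippet below computes the counts for two common assumptions, 62 characters (letters and digits) and 95 (printable ASCII), to show the exponential jump from 6 to 14 characters.

```python
# Brute-force search space = (charset size) ** (password length).
# 62 = a-z, A-Z, 0-9; 95 = printable ASCII. The article's figures
# will correspond to whichever charset its source assumed.
for charset in (62, 95):
    for length in (6, 14):
        print(f"{charset} chars, length {length}: {charset ** length:.2e} combinations")
```

With 62 characters, going from 6 to 14 characters takes the search space from roughly 5.7 × 10¹⁰ to over 10²⁵ combinations, a factor of more than 200 trillion.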
A prescription for data security
Controlling access to sensitive medical information, whether anonymised or not, should be the top priority for those administering any patient data sharing initiative. In this way, researchers can protect the data of patients while also finding treatments to protect them from the ravages of disease.