As phishing becomes industrialised with its own business model and role-based ecosystem, Davey Winder looks at how we can protect the NHS from this threat.  

Recent research from cyber security outfit Agari revealed that an astonishing 99% of NHS email domains have inadequate phishing attack protection.

The UK Healthcare: DMARC Adoption Report focussed, as the name suggests, on Domain-based Message Authentication, Reporting and Conformance (DMARC), which is designed to validate emails and help prevent domain spoofing, thereby making phishing a much harder social engineering trick to pull off.
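
For context, a DMARC policy is simply a DNS TXT record published at _dmarc.<domain>, telling receiving mail servers what to do with messages that fail authentication. The minimal sketch below, which assumes the open source dnspython library and uses a purely illustrative domain name, shows roughly the sort of check the researchers ran at scale, in its simplest possible form.

```python
# Minimal sketch: check whether a domain publishes a DMARC policy.
# Assumes the dnspython library is installed (pip install dnspython);
# the domain name used below is purely illustrative.
from typing import Optional

import dns.resolver


def dmarc_record(domain: str) -> Optional[str]:
    """Return the DMARC policy TXT record for a domain, or None if absent.

    DMARC policies live at _dmarc.<domain>, e.g.
    "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.nhs.uk".
    """
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # no record published at all
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.lower().startswith("v=dmarc1"):
            return txt
    return None


if __name__ == "__main__":
    domain = "example.nhs.uk"  # hypothetical domain
    record = dmarc_record(domain)
    if record is None:
        print(f"{domain}: no DMARC policy, so spoofed mail is unlikely to be rejected")
    else:
        print(f"{domain}: {record}")
```

Note that a policy of p=none only monitors; it is p=quarantine or p=reject that actually keeps spoofed mail out of inboxes, which is the gap the Agari figures point to.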

As Digital Health reported, according to Agari, “95% of key UK healthcare organisations have no DMARC policy in place, despite the majority of phishing emails carrying fraudulent healthcare domains.”

Spoofed domains

How much email traffic, both unauthenticated and using spoofed healthcare domains, did Agari see?

Three times as much as the government sector, which sat in second place in this hall of shame, with some 92% of healthcare domains carrying fraudulent email. What’s more, after analysing 5,000 NHS domains, the researchers concluded that more than half of the emails received by patients were actually fraudulent.

All of this despite NHS Digital having approved the latest Secure Email Requirements Specification (SSCI596) back in January 2017, mandating DMARC usage “as soon as possible” across trusts, boards and even private providers of services to the NHS.

There are some clear difficulties here, not least legacy system hurdles to overcome before DMARC can be adopted as NHS Digital has decreed. However, a July 2017 review of progress reiterated that the secure email standard must be met as soon as possible.

As of mid-November last year, research suggested that no more than 10% of NHS trusts and boards had self-certified such compliance. This has got to speed up, and it has got to be taken more seriously, especially when you consider how seriously the bad guys take the whole social engineering business.

Criminal economics

In an analysis of 1,019 phishing kits available on the dark market, security vendor Imperva found that a business model has emerged in the social engineering sector.

The reason is a simple case of criminal economics: security vendors and internet service providers alike have got better at spotting pop-up phishing sites and can take them down far more quickly than ever before. To keep up with this dynamic, the bad guys provide kits that enable quick and easy deployment of such sites in a one-stop-shop package.

The Imperva research revealed that half of these packages belong to large families of kits, with “a third belonging to three large families”, meaning that “phishing kits come from a restricted number of sources.”

Indeed, Imperva suggests that phishing has become industrialised, evolving into a “role-based ecosystem where different people with different skill sets fulfill different roles.”

This is important to understand, as it highlights that threat actors no longer need to worry about building fake sites to collect stolen credentials themselves: this aspect of the attack can be reliably outsourced.

Moreover, don’t think this only applies to organised cybercrime outfits – there are plenty of DIY phishing kits that are aimed at the lower end of the criminal enterprise.

So how does the average phishing attack look from the criminal side of the fence? That depends on whether the payload is a link to be clicked or a malicious document.

The former usually involves renting a compromised server via the dark web, uploading a phishing kit to it, using hosted spam services to send the phishing emails and then processing any victims who take the ‘fake login’ bait at the lure site.

Malicious documents usually install malware to steal credentials or allow further network exploration, through privilege escalation methods for example. The threat actors will still make use of compromised servers, with spoofed email domains for the mailshots, but may be much more targeted (spear-phishing) when it comes to researching likely victims within an organisation.

Spear-phishing

It is spear-phishing attacks that are far more likely to impact NHS organisations, be they aimed at staff or patients.

The latter are often sadly overlooked when we talk about protecting the NHS against the phishing threat, especially as patient data is perhaps the most valuable target for the attackers.

Think about it. Armed with the medical history of a given patient, along with their NHS number and the hospital or GP they attend, an attacker can construct a very believable email, whatever the eventual payload may be.

That believability factor is increased tenfold if the email is sent from what appears to be an nhs.uk domain. Staff, meanwhile, are likely to be targeted in order to allow progression into a network, with an email appearing to come from a colleague (using the same spoofed domains, along with some detailed social network research) either pointing them to a malicious link or carrying a malware-infected attachment.

Staff training

Aside from the implementation of DMARC as already mentioned, there is one thing that trusts can do to reduce the phishing threat: invest in staff awareness training.

Such hands-on educational strategies come in many shapes and flavours, from simple emailed reminders (oh, the irony that, unlike phishing emails, these are likely to be largely ignored) to full-scale phishing simulation exercises.

The best methods sit at the simulation end of the scale, where a web-based tool is used to, in effect, launch a phishing attack against staff.

Recording the responses, including click-throughs, attachment openings and login attempts to a dummy phishing server, helps pinpoint where the biggest risk sits so that the awareness part of the system can be targeted most effectively, and automatically.
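
To give a flavour of that targeting step: assuming the simulation tool can export its event log as a CSV file, a short script can tally the risky responses per department and flag where follow-up training is most needed. The file name, column names and event labels below are hypothetical, not those of any particular product.

```python
# Minimal sketch: tally risky responses from a phishing simulation export.
# The CSV layout (columns "department" and "event") and the event labels
# are assumptions for illustration, not any real tool's format.
import csv
from collections import Counter, defaultdict

# Responses treated as being 'caught out' for awareness purposes.
RISKY_EVENTS = {"clicked_link", "opened_attachment", "submitted_credentials"}


def risk_by_department(csv_path: str) -> dict:
    """Count simulation events per department from a CSV export."""
    counts = defaultdict(Counter)
    with open(csv_path, newline="") as handle:
        for row in csv.DictReader(handle):
            counts[row["department"]][row["event"]] += 1
    return counts


if __name__ == "__main__":
    results = risk_by_department("phishing_simulation_results.csv")
    # Departments with the most risky responses are the obvious place to
    # focus follow-up awareness training first.
    for department, events in sorted(results.items()):
        risky = sum(events[event] for event in RISKY_EVENTS)
        print(f"{department}: {risky} risky responses, "
              f"{events['submitted_credentials']} credential submissions")
```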

One such system is operated by the Information Security and Assurance Service (ISAS) that is part of the West Midlands Ambulance Service NHS Foundation Trust.

This displays awareness information to any user who has been caught out, and can then be used as part of a wider staff training process.

It is important to remember, though, that staff who get fooled by phishing emails (simulated or not) are as much a victim as the organisation itself, so they shouldn’t be treated as the bad guys. Phishing awareness must be seen as an opportunity to educate and improve security within the organisation, helping staff to understand risk better, or it simply won’t work.