One of the questions that Robert Francis QC has asked during his repeated investigations into Mid Staffordshire NHS Foundation Trust is why its high death rates were hard to spot.

As the scandal started to break in 2007, Good Hospital Guide publisher Dr Foster, the regulator the Healthcare Commission, and regional and trust managers became mired in arguments about what the mortality statistic of the time, the Hospital Standardised Mortality Ratio, was telling them.

Both Dr Foster and the Healthcare Commission knew that the trust was showing up as a high mortality outlier on the HSMR, but there was a prolonged debate about whether this was caused by data quality and coding problems or real problems with patient care.

A report commissioned by the trust from another intelligence provider, CHKS, uncovered poor clinical note-keeping, “teething problems” with a new IT system, and an overworked coding department. It also suggested that the trust might have been miscoding what was wrong with its patients and how many complications they had.

Mortality indicators are, at heart, an equation in which observed deaths are divided by expected deaths; the denominator (expected deaths) comes from a model that attempts to capture what patients were treated for and how fit and well they were when they were treated.
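To put that in concrete terms, with purely illustrative figures rather than real trust data:

    mortality ratio = observed deaths ÷ expected deaths

A trust whose patients suffered 1,100 deaths against 1,000 predicted by the model would score 1.10 (or 110 on the HSMR's scale, which multiplies the ratio by 100), while one with 900 deaths against the same expectation would score 0.90. Crucially, anything that makes patients look less sick than they really were shrinks the 'expected' figure in the denominator, and so inflates the ratio.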

So, if Mid Staffs had been under-estimating the severity of its patients' conditions, this could, in principle, have explained its performance on the HSMR. CHKS didn't say this was what had been happening, but it did find that the trust was not an outlier on its own benchmark indicator.

Shimmy, shimmy

While this debate went on, the Healthcare Commission launched an investigation into Mid Staffordshire that took evidence from patients and staff and turned up direct evidence of very poor care in A&E and on some wards.

This eventually triggered the first of the Francis inquiries, which devoted a whole chapter to coding and mortality indicators when it reported in 2010.

Francis argued that an independent working group should be set up to look at the methodologies in use, to get agreement on “how such mortality statistics should be collected, analysed and published” and that “an impeccably independent and transparent source” should do the publishing.

The Department of Health duly set up a working party, the outcome of which was a new indicator – the Summary Hospital-Level Mortality Indicator – whose methodology and results are published by the NHS Information Centre.

The SHMI (pronounced 'shimmy') addressed some of the criticisms commonly aimed at the HSMR, beyond those aired over its interpretation at Mid Staffs. Most obviously, it changed the top half of the mortality equation: what should count as an 'observed death'.

The SHMI no longer counts one patient's death multiple times if they have stayed in more than one hospital. It also moves away from using codes that capture only 80% of patient deaths to data that captures all inpatient deaths – and deaths within 30 days of discharge from hospital.

One of the reasons for the last move was that hospitals had often argued they were disadvantaged on the HSMR if they did not have a hospice close by.
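To see how the new counting rule differs in practice, here is a minimal sketch in Python. It is purely illustrative: the field names and the attribution rule (counting a death once, against the hospital of the patient's final stay, if it happened in hospital or within 30 days of discharge) are simplified assumptions, not the published SHMI specification.

from datetime import date, timedelta

# One record per hospital stay: (patient id, provider, discharge date, date of death or None).
# All names and values here are invented for illustration.
spells = [
    ("p1", "Trust A", date(2013, 3, 1), date(2013, 3, 20)),  # died 19 days after discharge
    ("p1", "Trust B", date(2013, 2, 1), date(2013, 3, 20)),  # earlier stay at another trust
    ("p2", "Trust A", date(2013, 3, 5), None),               # left hospital and survived
]

def shmi_style_death_counts(spells, window_days=30):
    """Count each deceased patient once, against the provider of their final stay,
    if the death happened in hospital or within `window_days` of discharge."""
    final_stay = {}
    for patient, provider, discharged, died in spells:
        current = final_stay.get(patient)
        if current is None or discharged > current[1]:
            final_stay[patient] = (provider, discharged, died)

    counts = {}
    for provider, discharged, died in final_stay.values():
        if died is not None and died <= discharged + timedelta(days=window_days):
            counts[provider] = counts.get(provider, 0) + 1
    return counts

print(shmi_style_death_counts(spells))  # {'Trust A': 1} - p1 counted once, against the last trust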

On the other side of the equation, the SHMI's denominator is based on a risk model made up of five factors: the condition the patient was in hospital for, their other underlying conditions, their age and sex, and how they were admitted.
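In outline, the 'expected deaths' side of the equation is built by scoring every admission for its risk of death and adding those risks up. The sketch below is again purely illustrative: a toy logistic model with invented coefficients standing in for the five kinds of factor listed above, not the NHS Information Centre's published methodology.

import math

def death_risk(age, male, emergency, condition_weight, comorbidity_score):
    """Toy logistic model turning the five kinds of factor the SHMI uses
    (condition treated, other underlying conditions, age, sex, admission method)
    into a probability of death for a single admission.
    The coefficients are invented for illustration only."""
    z = (-6.0
         + 0.05 * age
         + 0.10 * male
         + 0.80 * emergency
         + 1.50 * condition_weight
         + 0.30 * comorbidity_score)
    return 1.0 / (1.0 + math.exp(-z))

# Each tuple is one admission at a hypothetical trust.
admissions = [
    (84, 1, 1, 1.2, 2.0),  # elderly emergency admission with a serious condition
    (45, 0, 0, 0.3, 0.0),  # younger elective patient
    (70, 1, 1, 0.8, 1.0),
]

expected_deaths = sum(death_risk(*a) for a in admissions)
observed_deaths = 1

# A value above 1 would mean more deaths than the model predicts; below 1, fewer.
print(f"SHMI-style ratio: {observed_deaths / expected_deaths:.2f}")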

Five factors is fewer than the HSMR counted, and many experts argue that they fail to capture some issues that might affect whether or not a patient "should" have died.

For example, there is a common concern that the SHMI fails to distinguish between patients admitted for treatment and those admitted for "comfort" care – again, often because their local area has failed to invest in services that would help them die closer to home.

The independent working group argued that 12 'contextual' indicators should be published alongside the SHMI to acknowledge this issue. Six of them, dealing with palliative care, emergency and elective care, and deprivation, are published by the NHS IC.

However, this in itself shows that the SHMI has failed to do what Francis hoped a new indicator would do – namely generate a single, unambiguous number that can tell regulators, managers and the public whether a hospital is doing well or badly (on the very limited measure of whether its patients leave alive or dead).

Getting to a QUORUM

Instead, the SHMI has simply added to the range of mortality indicators now available. NHS Choices seems to have stopped publishing the HSMR.

But Dr Foster publishes the HSMR alongside the SHMI and two other performance measures in its Hospital Guide, while CHKS still has its own risk adjusted mortality indicator, or RAMI.

This might not matter if all the mortality indicators pointed in the same direction, but one expert in the field says it is perfectly possible for a trust to do “well” on the HSMR and “badly” on the SHMI, which shows that “we don’t understand well enough how these measures generate outliers.”

University Hospitals Birmingham NHS Foundation Trust has just published a paper in BMJ Open that explains how it has created yet another mortality indicator, the Quality and Outcomes Research Unit Measure, or QUORUM.

QUORUM attempts to address some of the case mix issues with the SHMI by including many more variables in the risk model used to determine what count as ‘excess’ deaths – including deprivation, how frequently patients have been admitted to hospital, and even seasonality.

It also applies some complex statistical modelling to try and overcome another problem identified with more recent versions of the SHMI.

This is that the SHMI does not, in fact, find much variation in the performance of trusts, and certainly does not identify “outliers” worthy of the kind of “blue light” emergency managerial action that policy makers want to be able to order for “failing” trusts.

Yet, despite the complex equations and immense computing power brought to bear on the QUORUM, the lead authors of the BMJ Open paper, Daniel Ray and Domenico Pagano, conclude that while it is a better measure than the SHMI, it also fails to generate outliers.

Or, to put it another way, it still can’t show that the apparent variation between trusts is caused by anything other than “legitimate diversity caused by the play of chance.”

To make things worse, all mortality indicators remain sensitive to the quality of the data fed into them. One expert notes that while it is often said that the patients being admitted to hospital are being admitted sicker, a lot of this may be down to better coding of co-morbidities.

Improved coding of this kind can have apparently dramatic impacts on mortality rates, both overall and at individual trusts, without anything 'real' changing about the care they deliver.

Time to think differently

Unsurprisingly, most of the people working in this field argue it is time for public inquiries, politicians and policy makers to stop fixating on mortality measures as the only, or even the best, way to work out what is going on in NHS trusts.

Roger Taylor, the director of research and public affairs at Dr Foster, says: “People are used to quality measures that are reasonably precise, and which tell people what to do.

“For example, with food hygiene, if the bugs get to a certain level then you just go in and shut the plant down. But for the NHS it does not work.

“You are looking at a group of people who have been treated by one hospital or person, and after that any variation between them is just that, variation.”

Ray and Pagano, respectively the director of informatics and the director of the Quality and Outcomes Research Unit at Birmingham, appear to agree, effectively concluding in their paper that if QUORUM won’t do the job then nothing will.

“We might reasonably expect to see some variation in outcomes in an organisation such as the UK NHS that are properly attributable to the performance of NHS trusts,” they write. “If this is the case, then our findings question when an approach using this methodology [a single mortality indicator] may be used to assess overall hospital quality of care.”

Spotting the placards

Despite the huge effort that has gone into constructing mortality indicators over the past decade, this might not be particularly surprising. Many experts point out that just 3.5% of hospital admissions end in the patient dying in hospital – and that international evidence suggests that just 5% of those deaths might have been prevented.

If trusts really want to address excess mortality, one suggests, they are going to need to find specific interventions that will make a real difference.

For instance, he says he has come across trusts that are treating the arrival of a crash team on a ward as a "never event" – on the grounds that only 15% of patients recover, and that good monitoring and care should make such extreme treatment unnecessary.

In a conversation with eHealth Insider, Ray also suggests that trusts need to start looking at what is happening to the 96.5% of patients who leave hospital alive.

“For example, if a patient who comes into hospital with a heart attack is not given a beta blocker on the day of admission, they will not die in hospital, but that will have a huge impact on their five to ten year mortality,” he points out.

As a result, Ray says University Hospitals Birmingham is focusing on telling patients more about the treatment they should be getting, on collecting information about what is happening at ward level, and on using its MyHealthRecord portal to find out what happens after discharge.

Meanwhile, many more indicators that deal more directly with patient experience – from patient reported outcome measures to the ‘friends and family’ test – should soon be available.

Taylor argues that these should make a difference – as long as people are prepared to act on them. One of the many, many problems at Mid Staffs was that it received mountains of complaints, but did little about them, not least because the board received them as bland reports.

Taylor says he’d like the Care Quality Commission to assess whether boards are capable of receiving and acting on information and “to replace them if they are not”, while also looking at how to empower staff and, again, patients.

After all, one expert points out, it was not statistical analysis that eventually told regulators that Mid Staffs had a problem – “it was the 120 people standing at the gate with placards.”