Radiologist peer review should be used to improve care rather than measure performance or identify underperforming radiologists, the Radiological Society of North America 2013 conference heard on Monday.

Peer review, in which a radiological study is retrospectively reviewed by a colleague, is increasingly used both in the US and Europe as a method of quality assurance and is sometimes mandated by regulatory bodies.

Dr David Larson, associate professor of radiology at the Stanford University Medical Center, told conference delegates that peer review systems are often counter-productive, leading to a “loss of collaboration, genuine feedback, learning and trust.”

The system can create an adversarial relationship, in which the reviewing radiologist has a sense of “indignation, incredulity and blame”, while the radiologist who missed the case “feels guilt and shame.”

“Knowing the answer in retrospect creates a false sense of certainty,” he explained.

Dr Larson described an alternative approach, developed at Cincinnati Children’s Hospital, in which peer review is used for learning only.

In the Cincinnati system, the peer review leader removes all identifying information from the original study before forwarding it to the reviewer. Interesting cases are reviewed fortnightly by a peer review committee and radiologists are encouraged to ask:

“What might cause me to miss this and how can I prevent that? What is the take home message? What techniques have I learnt that I can use to defend myself against doing this?”

While standard peer review systems use a scoring system to grade the accuracy of studies, Dr Larson said the Cincinnati programme abandoned scoring, along with performance tracking, after two years because it added no value.

The audience also heard from Dr Joseph Steele, professor of diagnostic radiology at the University of Texas MD Anderson Cancer Center. Dr Steele said that peer review can be “incredibly punitive”, and also “incredibly inconsistent, not very transparent and of questionable efficacy.”

Standard peer review systems are not well suited to interventional radiology, said Dr Steele, so his department instead uses ‘complication review’ to examine cases that have resulted in complications.

“Complications are very rare and it is unlikely that the same person will make the same mistake over again – but in a team, people will make the same mistakes,” he said.

His department uses peer review to “identify things that we are collectively doing right or wrong, so we have standard practices and not much deviation.”

Dr Steele suggested that the radiology information system (RIS) and the picture archiving and communication system (PACS) could be used to flag cases that result in complications.

Dr Steele added that structured reporting has real value in capturing data and looking for correlations between that data and complications.

Using data analytics to drive standardised procedures is essential, he concluded.

“This improvement has to be in the fabric of the way we practice.”

Imaging Informatics editor Kim Thomas is reporting from the Radiological Society of North America annual conference in Chicago, USA, this week. You can contact her on kimthomas@e-health-media.com.