Time to rethink how we evaluate digital technologies in healthcare

Traditional methods of evaluation are often too slow, rigid and costly for the fast-moving world of digital health, writes Professor Kathrin Cresswell

The NHS is facing significant challenges, and digital technologies have the potential to help address them. Productivity – broadly defined as how effectively existing resources are used – is frequently used as a core metric to assess the success of technologies.

However, while digital tools have become more widespread in healthcare, there has been little evidence of a corresponding increase in the overall productivity of the health system. This reflects the so-called productivity paradox, identified in the 1980s by economist Robert Solow, who observed: “You can see the computer age everywhere but in the productivity statistics”.

A major reason for this disconnect is that impact evaluations of digital technologies tend to measure the wrong things – focusing on what is easily quantifiable in the short term rather than on longer-term transformative and emergent benefits.

Productivity is typically assessed through time savings, especially in clinical environments, where interventions are evaluated based on how they affect the time clinicians spend on specific tasks.

This may work when wanting to assess short-term, localised outcomes resulting from automation of specific tasks, such as the time taken to generate a discharge summary with the help of an ambient scribe, but time savings don’t capture what people do with time saved. They offer only a narrow, proxy measure – a snapshot.

Assessing productivity through time savings also doesn’t work for digital technologies that transform care delivery, including large infrastructural upgrades such as electronic health records (EHRs).

Transformational benefits of such technologies can be unanticipated (e.g. physical space saved by reducing paper notes), hard to measure (e.g. changes in work practices), and slow to emerge as organisations and users learn to accommodate the new system and exploit its functionality.

They also commonly require organisational transformations, workflow redesign, and behavioural adaptation of a variety of stakeholders, none of which are easily measured. As a result, productivity often declines immediately after implementation.

So how should we evaluate the impact of digital technologies in healthcare?

Track unanticipated outcomes

Firstly, we need to view productivity not as a benefit but as a short-term outcome that may lead to a benefit further down the line. For example, time saved by clinicians (outcome) may be spent on a longer lunch break, thereby reducing burnout (benefit). Logic models specifying inputs, activities, outputs, outcomes and impacts can be helpful tools in this respect.
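
To make this concrete, a logic model can be treated as a simple, explicit data structure that is revisited as evidence accumulates. The Python sketch below is purely illustrative – the LogicModel class and the ambient-scribe entries are hypothetical examples, not drawn from any real evaluation:

    from dataclasses import dataclass

    # A minimal sketch of a logic model as a plain data structure.
    # Field names mirror the chain described above; the ambient-scribe
    # entries are illustrative examples only, not real evaluation data.

    @dataclass
    class LogicModel:
        inputs: list[str]      # resources invested (e.g. licences, training time)
        activities: list[str]  # what is done with those resources
        outputs: list[str]     # direct, countable products
        outcomes: list[str]    # short-term changes, e.g. time saved
        impacts: list[str]     # longer-term benefits the outcomes may enable

    ambient_scribe = LogicModel(
        inputs=["ambient scribe licences", "clinician training sessions"],
        activities=["automated drafting of discharge summaries"],
        outputs=["discharge summaries generated per clinic"],
        outcomes=["minutes of documentation time saved per clinician"],
        impacts=["reduced burnout, if saved time becomes genuine rest"],
    )

    # Revisiting the model over the technology lifecycle means adding newly
    # observed (including unanticipated) outcomes and impacts as they emerge.
    ambient_scribe.impacts.append("secondary use of structured notes for audit")

Keeping the model as an editable artefact, rather than fixing all measures at the outset, makes it easier to record unanticipated outcomes and benefits as they surface – the point developed next.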

Secondly, it’s essential to track unanticipated outcomes and benefits – both positive and negative – as they emerge over time. These effects often only become visible through real-world use and over extended time periods, making them difficult to predict at the outset.

For example, EHRs were initially implemented to support clinical documentation and information sharing, yet their value has since grown to include secondary uses such as data analytics, research, and public health surveillance.

These benefits were not included in the original justification for adoption. This highlights the importance of revisiting and updating logic models throughout the technology lifecycle, allowing understanding of impact to evolve alongside a technology’s use and over time.

View benefits in context

Thirdly, benefits need to be viewed in context, so accounting for organisational or clinical transformations as part of this process is crucial. Our own research found that ‘benefit’ is not a neutral term. Gains for one stakeholder – for instance, improved organisational efficiency – often mean compromises or disbenefits for another, such as increased data entry workload for clinicians.

Similarly, faster diagnostic processes facilitated through AI may lead to increased error rates as clinicians over-rely on new technologies. Such trade-offs are an inevitable part of transformation and may offset localised productivity gains, but impact evaluations need to take these into account to assess the overall impact of technologies on health systems.

If we are serious about digital transformation – not just automation – we need to rethink how we evaluate impact.

Traditional methods, particularly randomised controlled trials (RCTs), are often too slow, rigid, and costly for the fast-moving world of digital health. A typical RCT, considered the gold standard in evidence generation, can cost up to £1 million and take two years to complete. It also relies on predefined outcome measures.

While RCTs are effective for evaluating drugs and public health interventions, they are poorly suited to assessing the impact of digital technologies, especially transformative ones.

This is because digital transformation involves many stakeholders, unfolds across diverse contexts, and often produces benefits that take years to materialise. The impact is not always linear or predictable, and traditional evaluation methods struggle to capture emergent, indirect, or long-term outcomes and benefits. Developing new methods to evaluate impact is therefore essential.

We must navigate a key tension between implementation needs and research agendas: evaluations must provide timely, practical insights to guide ongoing implementation and investment decisions, while also generating deeper understanding of the longer-term, transformational effects of technology.

Key to addressing this dilemma will be establishing a closer working relationship between implementers and the academic community.

Kathrin Cresswell is professor of digital innovations in health and care at the University of Edinburgh. She was the lead researcher on the evaluation of the NHS AI Lab.
