In the first of a two-part series Digital Health’s Jon Hoeksma explores the potential impact of ChatGPT in the NHS, with early examples of how digital leaders are beginning to experiment with the powerful AI tool.

ChatGPT, the AI-driven chatbot that produces remarkable results from simple queries, has been the sensation of the tech world since its launch in November.

And unless you’ve been living in a cave without wifi, you are likely to have read a flurry of articles on what impact it may have. Some believe it marks a technology inflection point that will redefine many knowledge jobs, beginning with lawyers, journalists, marketers, teachers, lecturers, software developers and possibly even doctors.

Others have speculated that it points to a post-Google world, leapfrogging the familiar search paradigm of the past 20 years, or that it will transform personal and business productivity tools so that emails, spreadsheets, reports and even software may all be generated by AI.

GPT-3, or Generative Pre-trained Transformer 3, from San Francisco start-up OpenAI, is a type of artificial intelligence with an uncanny ability to generate remarkably human-like text from a short query or input.

Rather than searching for and linking to existing content, GPT-3 draws on the enormous corpus of text it was trained on to create new content tailored to the prompt provided. The more that has been published on a topic, the better it performs. Its sister applications from OpenAI can do the same with images.

GPT-3’s neural network is a model with 175 billion machine-learning parameters, far larger than any language model trained before it. As a result, GPT-3 is better than any prior model at producing text convincing enough to seem like a human could have written it.

It may not replace human endeavour quite yet (and no, this article was not written by ChatGPT – though that is a disclaimer we may all soon become very familiar with), but it can provide a remarkably good first draft.

So, what are the implications for the most complex of knowledge businesses: health care? As Zhou Enlai is famously said to have replied when asked about the impact of the French Revolution, ‘it’s too early to tell’. But early adopters in the NHS and many other sectors have already begun to explore potential applications.

Digital Health News spoke to a range of NHS Chief Information Officers (CIOs) from the CIO Network Advisory Panel, about their early experiments with GPT.

Lisa Franklin, interim CIO at Hampshire and Isle of Wight Integrated Care Board (ICB), said: “A minor example is that we were interviewing for a new ICS head of business intelligence for another ICB and a colleague put my interview questions in.” She joked: “The results were good in terms of a model answer, but the human still got the job.”

James Rawlinson, CIO at The Rotherham NHS Foundation Trust, added: “We too are starting to use it for HR-type stuff: ‘based on the following job description give me 10 interview questions, referencing NHS England policy guidance’.”

He added: “The stuff it produces is fairly vanilla, but for people like me who struggle with a blank sheet of paper, it’s incredibly useful.”

Amy Freeman, director of digital transformation at University Hospitals of North Midlands NHS Trust, which runs the Royal Stoke University Hospital, commented: “It will be interesting how universities and schools are going to police the use of it, as it is producing credible outputs based on topics which have widely published content/data. I wish I had it for writing my NHS Digital Academy assignments last year.”

Universities and schools are in fact among the first scrambling to figure out how to respond. Elon Musk tweeted ‘goodbye homework’ soon after ChatGPT’s launch, and its use has already been banned by New York City public schools and a leading Paris university.

The FT, meanwhile, reported that a professor at the prestigious Wharton Business School found ChatGPT outperformed some of his MBA students, scoring a solid B, in a paper titled “Would Chat GPT3 Get a Wharton MBA?”. What price an expensive and prestigious MBA if a souped-up chatbot can get one?

Away from university assignments, Freeman suggested that within health it could help automate routine tasks: “I think its use in health is very interesting too. ChatGPT, please write the discharge summary on patient XYZ. Please write a leaflet on the benefits of getting fit for surgery. Etc…”

Dr Marcus Baw, GP and member of the RCGP Joint GP IT Committee, said that, while not safe for clinical use, GPT has widespread potential within the NHS for routine bureaucratic tasks: “It’ll be excellent for writing clinical safety cases, business cases, trust annual bulletins, job plans. Basically, any pointless document that has to be written and makes rough sense on a skim read but you know nobody actually reads in depth.”

Darren McKenna, director of digital at Cumbria, Northumberland, Tyne and Wear NHS FT, and vice chair of the CIO Network, argued that it was some way short of being truly intelligent.

“It still feels to me as though it’s a very clever and slick tool in a standalone form which helps summarise and present information which is already out there rather than being truly intelligent and expert.

“It does feel like we are seeing a new technology product emerge sitting somewhere between Google Search and Wikipedia, with the intelligence being its ability to seem like an expert in its presentation.”

Turning to future applications in healthcare, he said: “You can just see this is going to be embedded into all sorts of things. You could imagine an EPR where you could ask it to summarise a timeline or history for a patient.

“In Word – you might ask it to produce a skeleton document or board paper (especially if it has access to corporate data) which then just needs finalised.  Or in Outlook – drafting email responses for you.”

McKenna concluded: “Watch this space…it does feel like when we tried Google search for the first time.”

Part two of this Digital Health News series will explore early clinical applications and ethical considerations of ChatGPT and GPT-3.