I’ve seen a lot about artificial intelligence in the news recently. Stories about the ethics of self-driving cars: whether a car would allow its passenger to die rather than allow others to come to harm, and whether a passenger would be happy at such a prospect.

Now people are starting to say that AI will replace doctors. I’m not convinced it’s quite as imminent a prospect as some people think, but it’s worth exploring a few issues.

OK, perhaps as a doctor I have a vested interest in saying that not everything I do can be done by a computer, but I do think AI has a role. And perhaps I also have a vested interest in it working: I might retire in 10-15 years, and there don’t seem to be any doctors coming after me to look after me in my old age. I’d rather have a good AI than nothing.

But, interestingly, no-one seems to have talked about the ethics of AI in medicine. If people are worried that cars will take a utilitarian view, what about AI-docs? Will they concentrate on helping the greatest number or on doing the most good? Will they ignore a lot of children and old people and concentrate on getting the employed person back to work to pay more taxes? Some of you may have come across the Oregon health experiment; an AI might make different decisions. Would they be better or worse?

Will they ignore smokers, people who are overweight, those who drink or those who don’t follow health advice? Will they prioritise based on quality-adjusted life years (QALYs), where, say, four years lived at half of full health count as two QALYs?

Understanding subtleties

Will AI cope with the subtleties of what patients really present with, which is often different to what they say the issue is? While an AI might be good at reading an ECG and saying there is nothing wrong with it, will it pick up the fact that the patient is worried because his dad died of a heart problem at a similar age?

Perhaps a truly advanced AI will, but all the information I’ve seen about AI so far is about making good decisions from the data it is given, not interpreting and understanding the complexities of human communication. People might be confusing AI with machines that can pass the Turing test.

There could be a role

If they are good at making rapid, reliable evidence-based decisions then I think there is a big role for AI to help doctors and make us more efficient, productive, safe and effective and – dare I say it – more satisfied and less stressed.

A huge amount of what we do is interpreting large quantities of data based on our knowledge and experience. Having an AI colleague supporting us is perhaps something to be embraced, particularly in these days of workforce shortages.

NHS England is currently running a big campaign on releasing capacity for general practice, and I recently spoke at one of their events about some of the things my local GP federation is up to. I really like their 10 High Impact Actions concept. A lot of people are concentrating on the “diverting patients away from GPs” action. However, another talks about improving the efficiency of processes and how this could make GPs’ lives better, through things like GPs learning to touch type or using speech recognition to speed things up.

While these are good, there are lots of other ways of helping me work faster, and I think AI has a role. If we look at some of the things I do regularly that are currently slowed down by technology, we will find plenty of examples.

Prescribing warnings

These are currently a real pain; almost every time you prescribe anything you are shown a whole load of warnings. The problem is twofold. First, most of the warnings don’t apply or aren’t important; second, the sheer fatigue of seeing so many means you start ignoring them, no matter how big a red box they appear in.

An AI could intelligently show me only warnings I need to see, or intelligently suggest alternatives that might be better or more suitable.
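As a rough sketch of what that filtering might look like (the alert texts, severity levels, patient fields and rules below are all invented for illustration, not real prescribing logic):

```python
# A minimal, hypothetical sketch of context-aware prescribing alerts.
# The fields and rules are invented; a real system would draw on the
# full patient record and a maintained drug-safety knowledge base.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Alert:
    text: str
    severity: int          # 1 = minor ... 3 = potentially serious
    applies_if: Callable   # predicate over the patient record

@dataclass
class Patient:
    egfr: float            # kidney function
    pregnant: bool

def relevant_alerts(patient, alerts, min_severity=2):
    """Show only alerts that apply to this patient and clear a
    severity threshold, to cut down alert fatigue."""
    return [a for a in alerts
            if a.applies_if(patient) and a.severity >= min_severity]

alerts = [
    Alert("Reduce dose in renal impairment", 3, lambda p: p.egfr < 30),
    Alert("Avoid in pregnancy", 3, lambda p: p.pregnant),
    Alert("May cause mild drowsiness", 1, lambda p: True),
]

patient = Patient(egfr=25.0, pregnant=False)
for a in relevant_alerts(patient, alerts):
    print(a.text)          # only the renal warning surfaces
```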

Blood results

A huge task every day for every doctor is doing their blood results. This means reviewing the results of all the tests they have ordered on people, plus tests ordered by colleagues who aren’t in, and routine tests done for drug or disease monitoring.

There are loads of them. Most are essentially normal, but aren’t reported as normal because one indicator is often just a fraction out of range. Experience has taught me which of these to ignore, but the computer still flags them as abnormal. Sometimes, too, an abnormal result is expected, or is an improvement on what was there before. An intelligent system would know which results to flag and which to ignore. It might also spot underlying trends that are too subtle for me, and it could do all this quickly and reliably, not leaving it until last because other things take priority.
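A toy sketch of that kind of triage, assuming the simplest possible rules (the tolerance and trend logic are invented for illustration, not validated clinical thresholds):

```python
# A hypothetical sketch of smarter blood-result triage: tolerate
# results a fraction outside the reference range, but still flag
# a consistent worsening trend. The thresholds are invented.
def needs_review(history, low, high, tolerance=0.05):
    """history: chronological list of values for one test."""
    latest = history[-1]
    margin = (high - low) * tolerance
    clearly_abnormal = latest < low - margin or latest > high + margin
    # Flag a steady drift even while each single value looks near-normal.
    worsening_trend = (len(history) >= 3 and
                       all(a < b for a, b in zip(history, history[1:])))
    return clearly_abnormal or worsening_trend

# Potassium, reference range 3.5-5.3 mmol/L (a commonly used range).
print(needs_review([4.1, 4.0, 5.35], 3.5, 5.3))  # False: fraction out, no trend
print(needs_review([4.1, 4.7, 5.2], 3.5, 5.3))   # True: rising trend
```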

Letters

I read anywhere from 20 to 140 letters a day. These might be discharge summaries or outpatient letters, or just part of the endless stream of admin and paperwork about people which the NHS creates in its infinite wisdom but which adds no value. Trying to reduce the stuff I don’t need to see is a huge piece of work. One partner in my practice felt it wasn’t economical to employ someone to do the sorting for us, and another always wants to see anything on his patients, no matter how trivial or irrelevant it is.

As well as reading the letters there are multiple actions that arise from them. Some patients might need blood tests or appointments arranging. Some need a change in medication or a new referral. For some it’s just a new diagnosis that needs coding.

Some doctors do all of this themselves; some pass the letters on to helpers. But whichever way you do it, it can be laborious, costly and prone to error. While there are automated ways of grabbing data from letters, pretty much every letter is still reread by a coding clerk after the GP has seen it.

AI could automate this. It could learn who likes to see what, filter out the stuff that doesn’t need to be seen, highlight the stuff that does, and code and extract data from letters, saving time and money.
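To make that concrete, here is a deliberately simplistic sketch of letter triage; the categories, keyword rules and GP preferences are invented, and a real system would code against a clinical terminology such as SNOMED CT rather than keywords:

```python
# A hypothetical sketch of letter triage: classify an incoming letter,
# route it by each GP's stated preferences, and file the rest.
import re

GP_PREFERENCES = {
    "Dr A": {"discharge", "new_diagnosis"},   # only wants the big stuff
    "Dr B": {"discharge", "new_diagnosis", "outpatient", "admin"},  # sees it all
}

def classify(letter_text):
    text = letter_text.lower()
    if "discharge" in text:
        return "discharge"
    if re.search(r"new diagnosis of (\w+)", text):
        return "new_diagnosis"
    if "clinic" in text or "outpatient" in text:
        return "outpatient"
    return "admin"

def route(letter_text, gp):
    category = classify(letter_text)
    if category not in GP_PREFERENCES[gp]:
        return f"filed automatically ({category})"
    return f"sent to {gp} inbox ({category})"

letter = "Seen in clinic today. New diagnosis of hypothyroidism; start levothyroxine."
print(route(letter, "Dr A"))  # sent to Dr A inbox (new_diagnosis)
```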

Personal assistant

On my iPhone I find it easier to say “Siri: set an alarm for 6.30pm tomorrow” than to go into the menus and do it manually. Could we use a form of AI to take recurrent tasks from me? Could I say “I need a blood form for FBC, renal and HbA1c” rather than requesting it manually?

And could AI help with my note taking? Could it annotate a consultation for me, pulling out the salient points rather than recording it word for word?

Could it know, when someone says they are thirsty and peeing a lot, that I’m going to test for sugar and HbA1c, and pre-fill the form for me?
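A minimal sketch of both ideas, assuming invented test names and symptom rules (real order sets and clinical triggers would be far richer):

```python
# A hypothetical sketch of the personal-assistant idea: map either a
# spoken request or recognised symptoms onto a pre-filled test order.
TEST_PANELS = {
    "fbc": "Full blood count",
    "renal": "Urea and electrolytes",
    "hba1c": "HbA1c",
}

# Invented rule: symptoms suggestive of diabetes trigger glucose testing.
SYMPTOM_RULES = {
    ("thirsty", "polyuria"): ["hba1c", "glucose"],
}

def order_from_speech(utterance):
    """'I need a blood form for FBC renal and HbA1c' -> pre-filled order."""
    words = utterance.lower().split()
    return [TEST_PANELS[w] for w in words if w in TEST_PANELS]

def order_from_symptoms(symptoms):
    orders = []
    for triggers, tests in SYMPTOM_RULES.items():
        if all(t in symptoms for t in triggers):
            orders.extend(tests)
    return orders

print(order_from_speech("I need a blood form for FBC renal and HbA1c"))
# ['Full blood count', 'Urea and electrolytes', 'HbA1c']
print(order_from_symptoms({"thirsty", "polyuria"}))  # ['hba1c', 'glucose']
```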

Maybe they could take over…

Could AI start building a differential diagnosis on screen, prompting me to ask more questions or home in on things? I saw an expert system that claimed to get the right musculoskeletal diagnosis 99% of the time based on patients’ answers to questions. What if it could do that just by listening in?
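For flavour, a toy sketch of how such a system might rank a differential and pick the next question to ask; the conditions, features and weights here are invented for illustration and are not clinical rules:

```python
# A toy weighted-scoring sketch of differential ranking and
# next-question prompting. All weights are invented.
KNOWLEDGE = {
    "rotator cuff tear": {"shoulder pain": 2, "weak abduction": 3, "night pain": 1},
    "frozen shoulder": {"shoulder pain": 2, "stiffness": 3, "night pain": 2},
    "cervical radiculopathy": {"shoulder pain": 1, "arm tingling": 3},
}

def rank_differential(findings):
    """Score each diagnosis by the weights of the findings present."""
    scores = {dx: sum(w for f, w in feats.items() if f in findings)
              for dx, feats in KNOWLEDGE.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

def next_question(findings):
    """Ask about the highest-weighted feature not yet established."""
    candidates = {f: w for feats in KNOWLEDGE.values()
                  for f, w in feats.items() if f not in findings}
    return max(candidates, key=candidates.get)

findings = {"shoulder pain", "night pain"}
print(rank_differential(findings))  # frozen shoulder edges ahead
print(next_question(findings))      # 'weak abduction'
```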

Once we get to this point, maybe we really are at the stage at which AI could take over from doctors.