Is AI coming for your doctor’s job?

News Flash…
“As we enter the year 2027, the all-knowing iNHS Super Intelligence – HUNT – H.ealth U.nder N.ew T.echnology, faces attacks and sabotage as out-of-work doctors, A.K.A. “Lub-Dub-ites”, continue to throw their stethoscopes into the gears…”

This week my attention was caught by this interesting Pulse article – “Artificial intelligence to replace call handlers in NHS 111 app”.

1.2 million patients in North London are to be offered assessment and advice via an artificially intelligent chatbot in an app provided by Babylon, a private firm which already offers private video-chat GP appointments for £25.

You can check out their app here – Apple App Store / Google Play Store.

This news was met with sensible calls from Medical and Patient groups to ensure that the technology does not put patients at risk or overwhelm A&E and GP surgeries through inappropriate advice.

However, the news did get me wondering…

 

Are we on the cusp of a technological revolution in patient care?

… And if so, what does this mean for doctors?

Artificial Intelligence (AI) has been one of the most talked-about technologies of 2016. Google, Apple and Microsoft have been adding features to their natural language personal assistants. Amazon launched the surprise tech hit of the year in the form of the Alexa smart speaker. Google’s DeepMind AI beat one of the world’s best human players at Go, a complex chess-like Chinese board game. The tech giants have also been promoting chatbot technologies which allow customers to deal with AI agents using text and speech rather than needing to speak to human customer support workers.

Perhaps we are on the cusp of an answer to the problem of exploding patient demand and expectations for 24/7 access to rapid assessment?

 

Imagine what may (nearly) be possible…
  • The patient is able to open an app on their phone 24/7 and immediately talk to an AI chatbot doctor.
  • They receive effective advice about how to manage their problem.
  • Those not unwell are empowered to manage their own treatment.
  • Those in genuine need of medical assessment are directed to GPs and A&E departments where they can receive prompt treatment by staff no longer overwhelmed by other patients who don’t need to be there.
  • All this without expensive and troublesome staff in the loop.
  • Capacity is limitless.
  • It is cheap, freeing up resources to be spent elsewhere in the health service.

This sounds great! But my mind then raced forward with a pang of concern…

 

Do doctors need to start worrying about computers pushing them out of the workforce?

Another popular story in 2016 was the fear that AI and robots are about to replace large swathes of the workforce in a wave of disruption.

Technological progress has displaced workers in the past. Between 1811 and 1816, textile workers in Nottingham destroyed the weaving machinery that threatened their jobs – and according to popular myth (the one behind the word “sabotage”), angry workers threw their wooden shoes into the gears. More recently Kodak, the once-mighty maker of photographic film, filed for bankruptcy as digital photography made its products obsolete.

Professions like medicine usually feel secure in the face of technological change, but they could have cause to worry.

Many believe that after repetitive manual labour, white-collar and professional jobs are also vulnerable to AI automation. Interestingly, skilled manual jobs requiring high levels of dexterity, such as joinery and plumbing, may be more secure. Creative and artisanal roles – musicians, bespoke furniture makers, artists – are considered the safest from automation.

AIs from Babylon and their competitors will be able to clock up the equivalent of many doctors’ entire careers’ worth of consultations in a short period of time and use this data, perhaps cross-referenced with medical records and outcomes, to refine and improve their performance.

There is some comfort, however. Currently the indication is that for complex activities, such as playing chess or Go, an AI working alongside human experts outperforms either working alone. Humans and machines are thought to have complementary skill sets.

But maybe AI needn’t work with GPs. Would a nurse or physician assistant working with AI decision support do the job just as well as a GP?

Maybe I should book into that plastering apprenticeship?

 

Don’t worry just yet…

Before medics leave the profession in a race to retrain as carpenters, or movie directors, a reality check might be in order.

 

Is the technology really ready?

I think that Babylon itself has some way to go with its AI chatbot. I’m sure they will deliver a good product in the end, but some (very rudimentary) testing I performed, presenting it with typical GP problems, produced the following outcomes.

  • IBS – See GP in 2 weeks
  • Conjunctivitis – Error message
  • Mild gastroenteritis – Go to A&E
  • Simple soft tissue knee injury – Go to A&E

Any student who has sat through a morning clinic will know that there is an enormous, complicated and nuanced level of human interaction within a medical consultation. Familiarity, relatability and trust are important, and also difficult to manufacture. It is hard to see how an AI will be able to pick up on subtle clues to domestic violence, for example.

 

Who to blame when things go wrong?

There is a huge legal question mark over who is responsible for decisions taken by AI. If a self-driving car causes an accident, who is responsible – the car owner, the manufacturer, or the software provider? Until medical AIs can take legal responsibility for their decisions and mistakes, they would struggle to displace doctors. In medicine someone needs to manage, and be responsible for, risk.

Currently, most medical systems claiming to use AI manage risk by avoiding decisions or the provision of advice where the risk is higher. Instead they either keep a human clinician in the loop to take responsibility, or have a low threshold for directing patients to seek further assessment. To make a real impact on managing demand, AIs will need to learn to make tough calls.
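To illustrate the trade-off, here is a toy sketch – entirely hypothetical, not Babylon’s or anyone’s actual algorithm, with made-up risk scores and thresholds – of how a low escalation threshold produces exactly the sort of over-cautious advice seen in my little test:

```python
# A toy, hypothetical triage rule -- NOT how any real system works.
# Without a clinician in the loop to own the risk, the "safe" design
# choice is a low escalation threshold, so borderline cases get sent on.

def triage(risk_score, escalation_threshold=0.2):
    """Map a crude risk score (0.0 low - 1.0 high) to a disposition."""
    if risk_score >= escalation_threshold:
        return "Go to A&E"
    return "Self-care advice"

# A minor soft tissue knee injury might carry only modest risk,
# but a cautious threshold still escalates it:
print(triage(0.3))                            # Go to A&E

# Raising the threshold eases pressure on A&E -- but now the system,
# and whoever is legally responsible for it, owns any missed cases:
print(triage(0.3, escalation_threshold=0.5))  # Self-care advice
```

The tough call isn’t writing the rule; it is deciding where to set the threshold, and who answers for the patients on the wrong side of it.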

 

Is the technology over-hyped?

Gartner, a technology research and advisory company, has popularised the idea of the “Hype Cycle” – the journey a technology takes on the route to mainstream adoption. Every year they publish a report describing where they feel popular technologies sit on this journey. In 2016, Machine Learning and Cognitive Expert Advisors were placed at the “Peak of Inflated Expectations”. In 2010 the over-hyped technologies that everyone expected to change our world included Wireless Power, 3D TVs and 3D printing. I’m still waiting for a 3D TV worth owning, let alone one without a plug. AI may still have a long journey to travel, descending through the “Trough of Disillusionment” and up the “Slope of Enlightenment” before reaching mainstream use on the “Plateau of Productivity”.

Gartner Hype Cycle 2010
Gartner Hype Cycle 2016

Back here in 2017, perhaps I can keep my stethoscope wrapped around my neck and take comfort in the thought that we are all much more likely to be looked after by artificially aware machines in our dotage than to have our jobs taken by them.

In the meantime, I expect that we will experience a slower and less scary, but still eventually profound impact from AI and machine learning in the form of triage and decision support tools, which will enhance the effectiveness of human healthcare workers.

 

News Flash…
“As we enter the year 2027, the NHS continues to groan under the pressure of increasing patient demand, extending lifespans and increasingly costly treatments. Whilst facing criticism at home, it remains respected abroad for delivering good patient outcomes and excellent value to the taxpayer.”

 

I hope you enjoyed this post. Please share with friends and colleagues, follow me on Twitter and leave a comment below.

Remember to sign up for free updates when I post new material using the “Subscribe To” box in the top right of the site.

What do you think of this post? Comments welcome :-)