Chatbot vs NHS111 trial cancelled as patients game the system

Tuesday lunchtime saw a lively discussion in the staff room at Middle Street Surgery. Dr Puddle, iPhone in one hand and sandwich in the other, proclaimed that everyone’s job was safe!

Well, for a little longer at least.

He had just heard via Twitter that the North London trial to replace NHS111 call handlers with Babylon’s artificially intelligent chatbot symptom checker had been abandoned.

It seems, he went on, that patients who have to wait days or even weeks for a GP appointment might deliberately game the symptom checker if they think it will get them a quicker appointment.

Dr Puddle looked up and around at a room of knowing looks and unsurprised faces.

The staff room had a number of thoughts about why the trial may have failed…


1. Wrong solution for the wrong problem? Efforts to improve access will not help when the real problem is capacity

There are a finite number of GPs, nurses and appointments and a seemingly unquenchable demand. Getting an appointment can often feel like a competition. When systems change, patients wise up quickly and soon learn how to maximise their chance of getting an appointment. Call earlier, ring at certain times of the day when slots are released, drop in as the surgery opens. A new equilibrium is quickly reached.

New ways of booking an appointment don’t change things for long if the problem is that there aren’t enough appointments in the first place.

Attempts to fix a capacity problem by simply improving access can backfire and overwhelm already overstretched services.

2. Do people feel less guilty “gaming” a chatbot than a real person?

Dealing with pain and illness is frustrating, but despite this our patients will (nearly always) be truthful with practice staff. People feel bad about bending the truth with real people with real feelings.

GP staff are skilled communicators with millennia of evolution and years of experience helping them keep the system honest. Playing games with a soulless computer program is easier to do and comes with less guilt attached.

 

3. Maybe people just aren’t ready for health chatbots? Yet?

It is not uncommon for a patient to see a junior or locum member of staff and then immediately book another appointment with their “usual” GP. When dealing with their health, patients need to really trust the “agent” that is assessing them and giving treatment and advice.

Chatbots are very new and most people have never used one. It is not surprising, perhaps, that many patients feel a need to communicate with a real person before feeling reassured enough to look after themselves or “wait and see” how their condition develops.

It may simply be too soon for health chatbots.

After people have got used to trusting chatbots with their shopping, banking and fixing their internet connection, they may be more trusting of health apps.

4. Does the risk management behind chatbots need to be “better”?

In order to truly address demand and capacity, AI systems will need to become much better at saying “No” and “Not yet” to patients. Humans are able to do this well, not just because of their knowledge and experience, but because our governance and legal systems allow them to take responsibility and manage risk.

Part of the value that GPs, and indeed all human staff, add to a system is their ability to make decisions and be held accountable for them. We are paid to use judgement to manage (take) appropriate risks.

Risk management is a problem faced in many of the industries exploring the use of AI to make decisions.

If a driver makes a mistake and injures a pedestrian, it is clear who to blame, who gets in trouble and whose insurance company will need to pay out. If a driverless car injures somebody, the question of who should be held liable is more complex. Who should be sued? The vehicle owner, the manufacturer, the company supplying the software, the mapping company providing data?…

Similar questions face the use of AI in medicine, particularly when rationing a limited resource such as rapid access to doctors. Consequently, these systems can end up setting their risk tolerance too low to be useful.
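To make that last point concrete, here is a deliberately simplified sketch (a hypothetical threshold rule, nothing to do with Babylon’s actual triage logic; the symptom scores and thresholds are invented for illustration). The lower the risk tolerance, the more presentations get escalated to an urgent appointment, until the chatbot is simply waving everyone through to the GP.

```python
# Toy illustration only: a triage rule with a single risk-tolerance threshold.
# If the threshold is set very low (very cautious), almost every presentation
# is escalated, which defeats the point of using the chatbot to manage demand.

# Hypothetical symptom scores between 0 (trivial) and 1 (red flag).
presentations = [0.05, 0.10, 0.20, 0.35, 0.50, 0.80, 0.95]

def triage(symptom_score: float, risk_tolerance: float) -> str:
    """Escalate anything whose score exceeds what we are prepared to tolerate."""
    if symptom_score > risk_tolerance:
        return "urgent GP appointment"
    return "self care / wait and see"

for tolerance in (0.9, 0.5, 0.1):
    escalated = sum(triage(s, tolerance) == "urgent GP appointment" for s in presentations)
    print(f"risk tolerance {tolerance}: {escalated}/{len(presentations)} escalated")
```

The point is only that where the threshold sits is a governance question about who carries the risk, not a technical detail.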


Despite the cancellation of this particular trial, AI and chatbot interfaces do still seem set to affect many aspects of our lives.

15 years ago, my mum laughed at the idea of trusting the internet with her banking. It probably won’t be another 15 years before she is trusting a chatbot with her health.

The staff of Middle Street Surgery can feel safe in their jobs, but for how long…

 

Thanks for reading to the end.

If you enjoyed the post then remember to sign up for free updates when I post new material using the “Subscribe To” box in the top right of the site.

Please share with friends and colleagues, follow me on Twitter and leave a comment below.

4 thoughts on “Chatbot vs NHS111 trial cancelled as patients game the system”

  1. It’s not fair really, AI is only 40 years old while it’s up against the single best adaptive learning machine on the planet, with a serious hook for novelty, that’s been evolved over 5m years or 500m years depending on how you look at it! I do agree though that it was used to solve the wrong problem, capacity not access!

    1. Humans certainly have a head start on the bots, but they are currently “evolving” faster than we are. The problem here was that the AI was pitted against humans as the gatekeeper to something that they already wanted. It was easily circumvented. Things might work better if this were reversed. Perhaps take people who are already waiting for an appointment and offer solutions and alternatives that might solve their problem before the time of the appointment arrives. Well maybe…
