

A policy I saw coming out of an NHS (UK) department mandated a ‘human in the loop’, which is essentially what the article recommends at the end. The risk is that over time clinicians become complacent with ‘good enough’ output and stop reviewing it thoroughly. Mistakes may be easy to spot, but omissions are not, unless you keep your own notes, and they get even harder to catch after a long session, although medical appointments are typically short and focused.
On a positive note, in my experience clinicians using LLMs do indeed spend more time engaging with service users. In an ideal world they would be given the time both to engage and to take their own notes, but that is not going to happen.



Yes, and you might also argue that time-starved humans merely reviewing LLM output may produce more accurate reports than they would writing them from scratch in a rush. That holds until humans get complacent or are expected to do even more per minute. But there is a fundamental difference: unlike humans, LLMs don’t understand context and don’t sanity-check their own output. When they hallucinate, they can do so wildly, with no sense of the implications, but always with confidence.