But competent users of AI will not do this. It is the equivalent of repeating verbatim what a trainee has said - no decent lawyer should make that mistake.
It’s also important not to extrapolate from a handful of incidents. The broader picture is that:
(a) hallucination rates are dropping. The LLMs are much more accurate now than even a year ago. In a couple of years’ time, I’d expect hallucinations to be minimal;
(b) sophisticated users can further reduce hallucination rates by engineering prompts and background instructions so as to prioritise accuracy over delivering a solution at all costs;
(c) the free models are much more error-prone than the professional, paid-for versions. I suspect an individual barrister, as opposed to a City law firm building professional-grade LLMs into its research suite, may not have used the highest-quality option; and
(d) humans are also error-prone. Maybe not to the extent of making up whole cases, but in other respects. The comparison will be between one slightly flawed but slow and expensive resource, and another slightly flawed but quick and cheap resource.
Anecdotes about mistakes with entry-level AI are comforting, and support a narrative that human jobs are not really threatened. However, anyone who is following developments closely, and actively using AI on a day-to-day basis, would see that as false comfort.