A family law blog that I follow highlighted a case in which the mother in a private family law case used AI rather unwisely.
The case is D (A Child) (Recusal) [2025] EWCA Civ 1570 (09 December 2025)
https://www.bailii.org/ew/cases/EWCA/Civ/2025/1570.html
It seems to be a very complicated case, but the complexity isn't relevant to the point about AI.
The mother in the case disagreed with the District Judge’s decision not to make findings of domestic abuse against the father. She lodged an appeal which was considered by a Circuit Judge and refused.
After that, she wrote directly to the District Judge asking him to recuse himself due to bias. Her letter ran to 60 typed single-spaced pages.
The Court of Appeal said of this document:
"[28] The document included a number of citations of reported cases. Some citations were correct and appropriate. As subsequently pointed out by the father's counsel at the hearing before us, however, other cases cited were not authority for the propositions for which they were advanced and, in some instances, did not exist at all. At the hearing before us, the mother accepted that she has used artificial intelligence to assist her in preparing the document."
The mother also included some dodgy citations in her skeleton argument to the Court of Appeal. The Court said of this:
"[54] ... The skeleton argument cited a number of authorities. As before, some citations were non-existent cases - for example "Re M (Paternity: Appeal by Mother) [2003] EWHC 2832 (Fam)". Other cases were cited in support of a proposition for which they were not authority. For example, Re B (Children) [2008] UKHL 35, the well-known decision of the House of Lords on the standard of proof in children's cases, was erroneously cited for the proposition that "the father's conscious choice to ignore correspondence that did not assist his position demonstrates wilful evasion and further undermines his credibility". Re W (Children) 2010] UKSC 12, the equally well-known decision on the principles which should guide the exercise of the court's discretion in deciding whether to order a child to attend to give evidence in family proceedings, was cited for the proposition that "findings reached through a procedurally compromised process cannot stand"
But the judge went lightly on the mother over this, saying:
"[83] Finally, I return to the issue raised by the father's representatives about the mother's erroneous citation of authority (see in particular paragraph 54 above). I absolve the mother of any intention to mislead the court. Litigants in person are in a difficult position putting forward legal arguments. It is entirely understandable that they should resort to artificial intelligence for help. Used properly and responsibly, artificial intelligence can be of assistance to litigants and lawyers when preparing cases. But it is not an authoritative or infallible body of legal knowledge. There are a growing number of reports of "hallucinations" infecting legal arguments through the citation of cases for propositions for which they are not authority and, in some instances, the citation of cases that do not exist at all. At worst, this may lead to the other parties and the court being misled. In any event, it means that extra time is taken and costs are incurred in cross-checking and correcting the errors. All parties – represented and unrepresented – owe a duty to the court to ensure that cases cited in legal argument are genuine and provide authority for the proposition advanced."
The writer of the blog summed up their thoughts like this:
AI is becoming a feature of family law and is likely to become more so, particularly for litigants in person. It is obviously attractive that a litigant in person can, with careful prompts, produce grounds for appeal and legal argument, including references to authority, that would be very hard to achieve even with many hours of careful research. But whilst the hallucination problem remains unfixed and AI simply hallucinates cases that don't exist, or cites real cases that simply don't say what the AI quotes them as saying or deciding, it is really unsafe for anyone to rely on its output without carefully checking that the case actually exists and that it does genuinely say what the AI claims.
Incidentally, solicitors and barristers have been caught doing this as well.
There was a case back in June about a homeless man in London. A junior barrister drafted and signed a High Court pleading in which she purported to rely on five authorities that did not exist. This resulted in Mr Justice Ritchie not only awarding wasted costs against her (and Haringey Law Centre) but also referring each of them to their respective regulators.
https://www.11kbw.com/knowledge-events/case/andrew-edge-successful-before-high-court-in-ai-fake-authorities-case/
There was also a case where an immigration barrister in the Upper Tribunal submitted grounds of appeal citing numerous fictitious or irrelevant cases in an asylum matter. When questioned by the judge, he appeared unfamiliar with the authorities and, in the judge’s view, attempted to “hide” his use of AI. Judge Blundell found it “overwhelmingly likely” that generative AI had been used and referred the barrister to the Bar Standards Board.
So, if you are thinking about using AI, do double-check everything.