
Chat


OMG. AI hallucination in police intelligence report used to decide on Maccabi's game

98 replies

PerkingFaintly · 14/01/2026 12:16

It's like all the possible disasters rolled into one!

https://www.bbc.co.uk/news/live/c394zlr8e12t
11:54
What did the report say about the West Ham match that didn't exist?
The intelligence report that referred to a non-existent match between Maccabi Tel Aviv and West Ham has not been published in full.
But it was referred to by Lord Mann during a Home Affairs Committee session on 1 December.
He said: "Early on in the intelligence report, it says: 'The most recent match Maccabi played in the UK was against West Ham in the Europa Conference League on 9 November 2023. This was part of the '23-24 European campaign. It marked Maccabi Tel Aviv’s last competitive appearance on UK soil to date.'
"That is in the intelligence report, but that did not happen. West Ham have never played Maccabi Tel Aviv.

11:45
Chief constable's survival hangs in the balance
For [Chief Constable] Craig Guildford it is hugely embarrassing to have to admit that his officers did use AI - after he had personally told MPs that they did not.
That use of Microsoft Copilot led to information going into the report that referred to a game between West Ham and Maccabi Tel Aviv that had never actually happened.

Police chief admits misleading MPs after AI used in justification for banning Maccabi Tel Aviv fans

An intelligence report referred to a football game that never happened - Home Secretary Shabana Mahmood will make a statement later today.


OP posts:
ThrowingDi · 14/01/2026 12:18

How big of a deal is this, as in will people be sacked? Or slap on the wrist?

HappyFace2025 · 14/01/2026 12:22

He should be sacked, no question.

Mixerfixer · 14/01/2026 12:24

Just goes to show that it's best to avoid AI and Copilot.


FuckRealityBringMeABook · 14/01/2026 12:45

AI is such bullshite. Any time savings are lost on fact checking down the line.

canklesmctacotits · 14/01/2026 12:46

This is an interesting test of the limitations of AI usage in the workplace. Nowhere near enough thought has gone into this. AI is a tool that some but not all employees use. Ultimately the onus is on the employee to turn out factually correct, high quality work at this level of seniority. Basic errors like this aren’t acceptable and it’s no good blaming the tool.

The junior officer who lied about using AI should be disciplined; if the junior officer wasn’t asked and didn’t lie but this CC replied falsely on his behalf, he (the CC) needs to be disciplined.

In this case it’s easy enough, because a simple glance at the fixtures listing is all it would have taken. But what about when AI is used for more complex tasks, tasks which perhaps the employee in question couldn’t do manually in the first place? Then who’s to blame?

Bagsintheboot · 14/01/2026 12:57

This isn't new! I work in tax and we've seen cases go to tribunal where defendants have used AI-hallucinated precedents to try and support their argument.

It's absolutely shocking.

NeverDropYourMooncup · 14/01/2026 13:01

Disregarding everything else that may make him an unsuitable post holder, I think it's very feasible that, as far as he knew and was told, he wasn't lying when he said they didn't use AI.

It could very well be that somebody decided to use it despite being paid to think for themselves; I've seen that happen in front of me when somebody more senior stuck technical questions into ChatGPT despite me sat in front of him saying 'that's not true', 'that does not exist', 'look, I've got the software open here - there is literally no such module'. And he's gone on to tell management his version 'Fred's a senior member of staff with years of experience, his knowledge...' 'He bunged it through ChatGPT and I told him it was hallucinating' 'Fred doesn't use ChatGPT, he's a senior consultant with years of experience, his knowledge outweighs your belief...' 'HE PUT IT THROUGH CHATGPT AS I SAT THERE'.

HappyFace2025 · 14/01/2026 13:10

He lied to MPs long after the event. The ChatGPT use must have surfaced before the MPs' inquiry took place, surely?

WandaW · 14/01/2026 13:12

This is actually pretty terrifying

We are losing our grip on the truth.

spannasaurus · 14/01/2026 13:17

There's been a couple of barristers/solicitors who have been caught using fictitious AI case law. There were also several instances of AI hallucinations in the judgment for the Sandie Peggie employment tribunal.

givemushypeasachance · 14/01/2026 13:25

Senior people in my public sector organisation keep pushing for AI to be used for more and more tasks, including generative AI. Using Copilot to summarise correspondence, using other AI tools to write the first draft of reports. Their argument is always that "it'll be safe because the member of staff responsible for this task will check the output". Like bollocks will they. Humans are lazy and even if they start off checking thoroughly, after the first 99 times when it looked okay they will start to trust the AI and will check less and less thoroughly, and before you know it boom AI hallucinations in official records being used for decision making.

PerkingFaintly · 14/01/2026 13:29

NeverDropYourMooncup · 14/01/2026 13:01

Disregarding everything else that may make him an unsuitable post holder, I think it's very feasible that, as far as he knew and was told, he wasn't lying when he said they didn't use AI.

It could very well be that somebody decided to use it despite being paid to think for themselves; I've seen that happen in front of me when somebody more senior stuck technical questions into ChatGPT despite me sat in front of him saying 'that's not true', 'that does not exist', 'look, I've got the software open here - there is literally no such module'. And he's gone on to tell management his version 'Fred's a senior member of staff with years of experience, his knowledge...' 'He bunged it through ChatGPT and I told him it was hallucinating' 'Fred doesn't use ChatGPT, he's a senior consultant with years of experience, his knowledge outweighs your belief...' 'HE PUT IT THROUGH CHATGPT AS I SAT THERE'.

That's shocking... but also not surprising. FGS!

OP posts:
noblegiraffe · 14/01/2026 13:34

Teachers are being told by the government to use AI to produce lessons and teaching resources.

Whenever I’ve tried, the output has been either complete bollocks, or contained hard-to-spot mathematical errors.

It’s supposed to reduce workload and save time but it’s more like having an irritating incompetent colleague you have to keep correcting on top of your own work.

OhDear111 · 14/01/2026 13:38

Sadly we don’t have the most intelligent people in the police. This “fact” was easy to check but they didn’t bother. One assumes the CPO should not have to check everything himself! Standards just keep plummeting! Buck stops at the top though!

theDudesmummy · 14/01/2026 13:45

I thought for a while AI might be helpful (I work in a legal field). Absolutely not! It hallucinates cases all the time. I have tried asking all the different tools (GPT, Gemini, Grok, DeepSeek etc) to provide me with a summary of the case of "R v [my full name]". There is no such case. Each of them came up with a different description of the "case", I was a murderer, a stalker, a child abuser, a fraudster, a drug addict. Each one with a lot of detailed and real-sounding details including the court, the date, reference number etc etc. Really disturbing.

newrubylane · 14/01/2026 13:55

Why aren't reports like this being referenced so that fact sources can actually be checked? Even without AI, there's so much scope for error with any kind of online research!

NoWordForFluffy · 14/01/2026 13:57

Bagsintheboot · 14/01/2026 12:57

This isn't new! I work in tax and we've seen cases go to tribunal where defendants have used AI-hallucinated precedents to try and support their argument.

It's absolutely shocking.

A barrister has been referred to the Bar Standards Board for using AI-hallucinated case law in a trial!

PerkingFaintly · 14/01/2026 14:07

theDudesmummy · 14/01/2026 13:45

I thought for a while AI might be helpful (I work in a legal field). Absolutely not! It hallucinates cases all the time. I have tried asking all the different tools (GPT, Gemini, Grok, DeepSeek etc) to provide me with a summary of the case of "R v [my full name]". There is no such case. Each of them came up with a different description of the "case", I was a murderer, a stalker, a child abuser, a fraudster, a drug addict. Each one with a lot of detailed and real-sounding details including the court, the date, reference number etc etc. Really disturbing.

Wha....at?

We need to stop calling it Artificial Intelligence, don't we?

A lot of it just seems to be Language Generation. Here's a set of key words: generate a 2000-word assignment that looks like other pieces of text you've seen which contain these key words.

OP posts:
GardyLou · 14/01/2026 14:08

He cannot be sacked by the Home Sec; she wants her power to sack reinstated.

She has no confidence in him, given his failures.

PerkingFaintly · 14/01/2026 14:09

Ah, and Wikipedia tells me that's pretty much what IS happening:

https://en.wikipedia.org/wiki/Artificial_intelligence#GPT

GPT
Generative pre-trained transformers (GPT) are large language models (LLMs) that generate text based on the semantic relationships between words in sentences. Text-based GPT models are pre-trained on a large corpus of text that can be from the Internet. The pretraining consists of predicting the next token (a token being usually a word, subword, or punctuation). Throughout this pretraining, GPT models accumulate knowledge about the world and can then generate human-like text by repeatedly predicting the next token. Typically, a subsequent training phase makes the model more truthful, useful, and harmless, usually with a technique called reinforcement learning from human feedback (RLHF). Current GPT models are prone to generating falsehoods called "hallucinations". These can be reduced with RLHF and quality data, but the problem has been getting worse for reasoning systems. Such systems are used in chatbots, which allow people to ask a question or request a task in simple text.
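The "repeatedly predicting the next token" idea in that excerpt can be illustrated with a toy sketch. This is a word-level bigram model, vastly simpler than a real GPT (the corpus, the `following` table and the `generate` function are all invented for illustration), but it shows the key point the thread keeps circling: the model only learns which word tends to follow which, so it can fluently generate statements with no notion of whether they are true.

```python
from collections import Counter, defaultdict

# "Pre-training": count which word follows which in a tiny made-up corpus.
corpus = "the match was played the match was cancelled the game was played".split()
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start, length):
    """Repeatedly pick the most likely next word - fluency, not truth."""
    words = [start]
    for _ in range(length):
        candidates = following[words[-1]].most_common(1)
        if not candidates:
            break
        words.append(candidates[0][0])
    return " ".join(words)

print(generate("the", 4))  # plausible-sounding, regardless of any real fixture list
```

Nothing in `generate` ever consults a fact source: it emits whatever continuation was statistically common, which is why a fabricated "Maccabi v West Ham" fixture can read as confidently as a real one.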


OP posts:
AreYouSureAskedNaomi · 14/01/2026 14:10

WandaW · 14/01/2026 13:12

This is actually pretty terrifying

We are losing our grip on the truth.

This

First the judiciary in the Peggie tribunal. Now the police.

This is very serious

What next? A doctor asks AI to calculate a dose and a patient is harmed or dies?

AI shouldn't be a free pass to do whatever you want without consequence. Professional standards and consequences for misconduct should apply regardless of the tools used

1apenny2apenny · 14/01/2026 14:13

Frankly I think they wanted to stop the fans after pressure from certain sections of the Muslim community. They just looked for the easiest way of dealing with this and thought they would get away with it.

I think what we should be asking is/was there any pressure on the chief constable from higher up in the police or government? If so what was said and by whom.

This appears to be an ongoing issue with a specific community seemingly being able to put pressure on authorities/have their voices heard, and the authorities caving immediately to pressure/putting their demands first (see also grooming gangs). I hope someone whistleblows and gets it all out in the open.

MrsTerryPratchett · 14/01/2026 14:18

NeverDropYourMooncup · 14/01/2026 13:01

Disregarding everything else that may make him an unsuitable post holder, I think it's very feasible that, as far as he knew and was told, he wasn't lying when he said they didn't use AI.

It could very well be that somebody decided to use it despite being paid to think for themselves; I've seen that happen in front of me when somebody more senior stuck technical questions into ChatGPT despite me sat in front of him saying 'that's not true', 'that does not exist', 'look, I've got the software open here - there is literally no such module'. And he's gone on to tell management his version 'Fred's a senior member of staff with years of experience, his knowledge...' 'He bunged it through ChatGPT and I told him it was hallucinating' 'Fred doesn't use ChatGPT, he's a senior consultant with years of experience, his knowledge outweighs your belief...' 'HE PUT IT THROUGH CHATGPT AS I SAT THERE'.

This may be true. There was someone in my team, thankfully now gone, who was repeatedly told not to use AI, and used it anyway. Not to polish or suggest wording, that’s fine. But to write things in their entirety. We have to go to Hearings fairly regularly and it does hallucinate law. Particularly picking bits of law from lots of jurisdictions that aren’t true or relevant in ours.

Declutteringhopeful · 14/01/2026 14:19

noblegiraffe · 14/01/2026 13:34

Teachers are being told by the government to use AI to produce lessons and teaching resources.

Whenever I’ve tried, the output has been either complete bollocks, or contained hard-to-spot mathematical errors.

It’s supposed to reduce workload and save time but it’s more like having an irritating incompetent colleague you have to keep correcting on top of your own work.

Every single kid is putting homework into AI.

design a poster on home safety - bung it into AI
bullet the main points of homeostasis

Children and teenagers are using it daily to do and write their homework; I would estimate 90% of Year 10 and above are using it daily. It is scary: they are losing the ability to read, critically think or infer from resources and books. It's too much effort for them and AI is free.

Meadowfinch · 14/01/2026 14:20

FuckRealityBringMeABook · 14/01/2026 12:45

AI is such bullshite. Any time savings are lost on fact checking down the line.

This.

As a consultancy, we have been banned from using AI until further notice because a significant proportion of the output is simply wrong.
