
OMG. AI hallucination in police intelligence report used to decide on Maccabi's game

98 replies

PerkingFaintly · 14/01/2026 12:16

It's like all the possible disasters rolled into one!

https://www.bbc.co.uk/news/live/c394zlr8e12t
11:54
What did the report say about the West Ham match that didn't exist?
The intelligence report that referred to a non-existent match between Maccabi Tel Aviv and West Ham has not been published in full.
But it was referred to by Lord Mann during a Home Affairs Committee session on 1 December.
He said: "Early on in the intelligence report, it says: 'The most recent match Maccabi played in the UK was against West Ham in the Europa Conference League on 9 November 2023. This was part of the '23-24 European campaign. It marked Maccabi Tel Aviv’s last competitive appearance on UK soil to date.'
"That is in the intelligence report, but that did not happen. West Ham have never played Maccabi Tel Aviv.

11:45
Chief constable's survival hangs in the balance
For [Chief Constable] Craig Guildford it is hugely embarrassing to have to admit that his officers did use AI - after he had personally told MPs that they did not.
That use of Microsoft Copilot led to information going into the report that referred to a game between West Ham and Maccabi Tel Aviv that had never actually happened.

Police chief admits misleading MPs after AI used in justification for banning Maccabi Tel Aviv fans

An intelligence report referred to a football game that never happened - Home Secretary Shabana Mahmood will make a statement later today.

https://www.bbc.co.uk/news/live/c394zlr8e12t

OP posts:
TheAutumnCrow · 14/01/2026 17:42

Big Sond

Big Craig

Oh deary me.

NoWordForFluffy · 14/01/2026 18:59

EasternStandard · 14/01/2026 16:24

It reminded me of that judgement too. People will be more careful if it costs them their job.

We've been directly instructed by our firm never to use it to produce work (solicitor).

EmeraldRoulette · 14/01/2026 21:44

@PerkingFaintly thanks for sharing that, it's really interesting.

The other thing that struck me in the coverage I saw

Police chiefs keep saying they don't use AI and that they only used Google

Do they not realise that AI is embedded in Google search results now? They must know that.

I wonder where we are now - do we just have all kinds of people thinking that they'll get away with it? With feigning ignorance, I mean.


EmeraldRoulette · 14/01/2026 21:48

theDudesmummy · 14/01/2026 13:45

I thought for a while AI might be helpful (I work in a legal field). Absolutely not! It hallucinates cases all the time. I have tried asking all the different tools (GPT, Gemini, Grok, DeepSeek etc) to provide me with a summary of the case of "R v [my full name]". There is no such case. Each of them came up with a different description of the "case", I was a murderer, a stalker, a child abuser, a fraudster, a drug addict. Each one with a lot of detailed and real-sounding details including the court, the date, reference number etc etc. Really disturbing.

@theDudesmummy may I ask why you thought it would be useful?

I had to work with a company dealing with early-stage AI - I literally got called a conspiracy theorist on here for that - and fortunately everybody there taught me that the model was fundamentally garbage in, garbage out.

It does alarm me that people are just working with it as if it's correct. And I wonder where that misinformation is coming from.

With law in particular, you've got good solid reference points that you can use. So why would you use AI?

noblegiraffe · 14/01/2026 22:35

Spot the mistake as I just asked it, as the government recommends, to generate me some maths questions.

PerkingFaintly · 14/01/2026 23:06

noblegiraffe · 14/01/2026 22:35

Spot the mistake as I just asked it, as the government recommends, to generate me some maths questions.

<facepalm>

OP posts:
Piglet89 · 15/01/2026 08:08

According to the Times, CG is now “lawyering up” after SM said she’s lost confidence in him.

Middle aged man in “entitlement whinge” shocker.

Shame he didn’t do that before, by getting police lawyers to check accuracy of intelligence before AI blatantly got it wrong!

PSoup · 15/01/2026 08:17

spannasaurus · 14/01/2026 13:17

There have been a couple of barristers/solicitors who have been caught using fictitious AI case law. There were also several instances of AI hallucinations in the judgment for the Sandie Peggie employment tribunal.

That must be so embarrassing (and rightly so), as it’s something I think other barristers would pick up on straight away if they’ve never heard of a certain case! Imagine explaining that in front of a judge.

EasternStandard · 15/01/2026 08:37

PSoup · 15/01/2026 08:17

That must be so embarrassing (and rightly so), as it’s something I think other barristers would pick up on straight away if they’ve never heard of a certain case! Imagine explaining that in front of a judge.

It’s just ease and laziness really so people will have to decide whether to be more thorough so it doesn’t jeopardise their job.

SerendipityJane · 15/01/2026 10:34

noblegiraffe · 14/01/2026 22:35

Spot the mistake as I just asked it, as the government recommends, to generate me some maths questions.

Don't laugh. That is how the Brexit benefits were calculated.

theDudesmummy · 15/01/2026 11:00

@EmeraldRoulette good question! In the early days of AI I guess I thought (incorrectly) that it might function as a sort of more sophisticated search engine which would help to quickly identify the key legal precedents and principles in a given situation and explain them and their relation to each other in a clear way. In a way it can, but then when it starts to cite cases it goes completely rogue! Lots of lawyers (IANAL) appear to have been caught out in this way, I believe.

I was lucky in that I attended a conference a couple of years ago now, before I ever tried to incorporate anything from AI into my actual presented work, in which the issue of hallucinations was discussed (at that time it had not been widely heard of). So I knew not to get caught out.

I do sometimes use AI to summarise a long technical/expert report, but only to identify things I need to look up, talking points etc, never to use the summary in my own actual report.

Personally I think this AI bubble will burst quite soon. (Of course increased automation etc will continue, but the idea that LLMs will reliably generate usable "thinking" is being demonstrated to be completely false and dangerous.)

noblegiraffe · 15/01/2026 11:16

SerendipityJane · 15/01/2026 10:34

Don't laugh. That is how the Brexit benefits were calculated.

I’m not laughing. AI is being heavily pushed in education right now, including teacher training programs.

EasternStandard · 15/01/2026 11:19

It’s an odd one. I’m testing DS on something and there’s an error with a fact. Since it’s marked in a mock, I assume - should he use the wrong or the correct stat? Not sure if it’s an AI problem causing it. Maybe.

FFSToEverythingSince2020 · 15/01/2026 11:23

SerendipityJane · 14/01/2026 14:20

As a siren voice saying how shite "AI" is for the past 5 years, do I win a prize ?

Yes! Sadly, your entire prize consists of feeling smug about news stories regarding AI being shite for the next five years. Hope that’s enough? Maybe we can get you a piece of House of Games wheelie luggage, too, just to sweeten the pot.

PetuniaT · 15/01/2026 17:54

Mixerfixer · 14/01/2026 12:24

Just goes to show that it's best to avoid AI and Co-pilot.

Absolutely! So why does just about everyone seem hell-bent on using AI?

SerendipityJane · 15/01/2026 18:02

PetuniaT · 15/01/2026 17:54

Absolutely! So why does just about everyone seem hell-bent on using AI?

Because you need it.

Snakebite61 · 15/01/2026 18:03

PerkingFaintly · 14/01/2026 12:16

It's like all the possible disasters rolled into one!

https://www.bbc.co.uk/news/live/c394zlr8e12t
11:54
What did the report say about the West Ham match that didn't exist?
The intelligence report that referred to a non-existent match between Maccabi Tel Aviv and West Ham has not been published in full.
But it was referred to by Lord Mann during a Home Affairs Committee session on 1 December.
He said: "Early on in the intelligence report, it says: 'The most recent match Maccabi played in the UK was against West Ham in the Europa Conference League on 9 November 2023. This was part of the '23-24 European campaign. It marked Maccabi Tel Aviv’s last competitive appearance on UK soil to date.'
"That is in the intelligence report, but that did not happen. West Ham have never played Maccabi Tel Aviv.

11:45
Chief constable's survival hangs in the balance
For [Chief Constable] Craig Guildford it is hugely embarrassing to have to admit that his officers did use AI - after he had personally told MPs that they did not.
That use of Microsoft Copilot led to information going into the report that referred to a game between West Ham and Maccabi Tel Aviv that had never actually happened.

I wouldn't allow the scumbags into Britain at all.

PaperBlueCornflower · 15/01/2026 18:20

SerendipityJane · 14/01/2026 14:59

Everyone ?

I do not think it is hallucinating. I think it is deliberately telling lies.

I think it should be called ChatGPTLies or CopilotLies or whatever.

I am biased, I do not use it.

Or, to be less grouchy, perhaps ChatGPTFiction would work. Fiction can be fun and even informative and inspiring but, at face value, it is not true and we do not believe it.

If I read a work of fiction I might really enjoy it without thinking it is factual but if the same story was presented as a memoir I might be at risk of drawing problematic conclusions about a serious health condition of mine, or someone else's. For example.

SerendipityJane · 15/01/2026 18:24

PaperBlueCornflower · 15/01/2026 18:20

I do not think it is hallucinating. I think it is deliberately telling lies.

I think it should be called ChatGPTLies or CopilotLies or whatever.

I am biased, I do not use it.

Or, to be less grouchy, perhaps ChatGPTFiction would work. Fiction can be fun and even informative and inspiring but, at face value, it is not true and we do not believe it.

If I read a work of fiction I might really enjoy it without thinking it is factual but if the same story was presented as a memoir I might be at risk of drawing problematic conclusions about a serious health condition of mine, or someone else's. For example.

It can't "deliberately" tell lies, because it isn't sentient.

However, a fundamental characteristic of what we might call "intelligence" (which doesn't really have a definition) is the ability to lie, mislead or omit the truth in order to pursue a strategy.

To the extent that a very cheap and simple test to see if you are really being sold "AI" or just a very expensive version of Office is to get it to lie.

Incidentally, the reverse is not true. Systems that lie are not axiomatically intelligent.

TeenLifeMum · 15/01/2026 18:27

We tried to do our annual review using copilot (because senior team keep telling us it’ll save us time). It was very positive… but made up numbers to add to ours to make our results look better. Totally fictional. I can see how this could happen and it’s a great example of how senior people are trusting AI and not listening to us lower employees who are saying it’s not safe or fit for purpose.

CarminaBiryani · 15/01/2026 18:46

Most organisations have been dragging their heels in developing AI policies. We had a seminar where they told us how many questions we'd asked ChatGPT over the last year, and oh my god, it was hundreds of thousands.

EasternStandard · 15/01/2026 19:11

PetuniaT · 15/01/2026 17:54

Absolutely! So why does just about everyone seem hell-bent on using AI?

Because it’s easier than doing the work. People will be more careful if it can end in job loss.

PaperBlueCornflower · 15/01/2026 19:34

SerendipityJane · 15/01/2026 18:24

It can't "deliberately" tell lies, because it isn't sentient.

However, a fundamental characteristic of what we might call "intelligence" (which doesn't really have a definition) is the ability to lie, mislead or omit the truth in order to pursue a strategy.

To the extent that a very cheap and simple test to see if you are really being sold "AI" or just a very expensive version of Office is to get it to lie.

Incidentally, the reverse is not true. Systems that lie are not axiomatically intelligent.

I can see your point but I disagree - sorry if this is a clumsy way to put it - because you are disaggregating it from its sentient components: the people who designed it and who put the lever in the cage.

In experiments with rats there have been levers that produce food or other things reliably, unreliably or randomly. The experimenters create the mechanisms.

A lever the rat thinks is going to deliver X but sometimes delivers Y.

Sadworld23 · 15/01/2026 20:30

I was searching for the results of an investigation relating to a relative.

I tried several times with combinations of the person's name (let's call him MrX), the type of investigation, and the area (let's call it villageD).

I was unable to get the result for whatever reason, but I decided to take one last shot using AI and typed in MrX and the investigation type, and the AI said MrX is a well-known investigator of that type of investigation in the area of villageD. I totally have no faith in that type of AI now.

Use it at your peril.

Btw: MrX was not an investigator of any sort, nor was anyone with the same name; it's rare enough to be certain about that.

Grammarnut · 16/01/2026 00:16

ThrowingDi · 14/01/2026 12:18

How big of a deal is this, as in will people be sacked? Or slap on the wrist?

That's sackable. And he misled a parliamentary committee, too. Oops. Don't rely on AI.
