

OMG. AI hallucination in police intelligence report used to decide on Maccabi's game

98 replies

PerkingFaintly · 14/01/2026 12:16

It's like all the possible disasters rolled into one!

https://www.bbc.co.uk/news/live/c394zlr8e12t
What did the report say about the West Ham match that didn't exist?
The intelligence report that referred to a non-existent match between Maccabi Tel Aviv and West Ham has not been published in full.
But it was referred to by Lord Mann during a Home Affairs Committee session on 1 December.
He said: "Early on in the intelligence report, it says: 'The most recent match Maccabi played in the UK was against West Ham in the Europa Conference League on 9 November 2023. This was part of the '23-24 European campaign. It marked Maccabi Tel Aviv’s last competitive appearance on UK soil to date.'
"That is in the intelligence report, but that did not happen. West Ham have never played Maccabi Tel Aviv."

Chief constable's survival hangs in the balance
For [Chief Constable] Craig Guildford it is hugely embarrassing to have to admit that his officers did use AI - after he had personally told MPs that they did not.
That use of Microsoft Copilot led to information going into the report that referred to a game between West Ham and Maccabi Tel Aviv that had never actually happened.

Police chief admits misleading MPs after AI used in justification for banning Maccabi Tel Aviv fans

An intelligence report referred to a football game that never happened - Home Secretary Shabana Mahmood will make a statement later today.


SerendipityJane · 14/01/2026 14:20

Having been a siren voice saying how shite "AI" is for the past 5 years, do I win a prize?

AreYouSureAskedNaomi · 14/01/2026 14:26

theDudesmummy · 14/01/2026 13:45

I thought for a while AI might be helpful (I work in a legal field). Absolutely not! It hallucinates cases all the time. I have tried asking all the different tools (GPT, Gemini, Grok, DeepSeek etc) to provide me with a summary of the case of "R v [my full name]". There is no such case. Each of them came up with a different description of the "case", I was a murderer, a stalker, a child abuser, a fraudster, a drug addict. Each one with a lot of detailed and real-sounding details including the court, the date, reference number etc etc. Really disturbing.

Bloody hell😮


PerkingFaintly · 14/01/2026 14:27

From that article:

UPDATE: Starmer said:
I have been informed this morning that X is acting to ensure full compliance with UK law. If so, that is welcome, but we’re not going to back down. They must act. We will take the necessary measures. We will strengthen existing laws and prepare for legislation if it needs to go further, and Ofcom will continue its independent investigation.
FURTHER UPDATE: Darlington was referring to this tweet about Grok changing its policy. We have not seen independent confirmation of it yet.

Politics UK (@PolitlcsUK) on X

🚨 NEW: X has now banned Grok from generating sexualised images of women and children after the UK made it illegal to create “non-consensual intimate” images However, it is still responding to requests to put men in bikinis or sexual positions

https://x.com/PolitlcsUK/status/2011284195583410192

PerkingFaintly · 14/01/2026 14:27

And this is the Tweet referenced:

https://x.com/PolitlcsUK/status/2011284195583410192
@PolitlcsUK
🚨 NEW: X has now banned Grok from generating sexualised images of women and children after the UK made it illegal to create “non-consensual intimate” images

However, it is still responding to requests to put men in bikinis or sexual positions
3:48 am · 14 Jan 2026


justasking111 · 14/01/2026 14:32

I'm just hoping the legal professionals aren't dependent on AI these days. Two friends buying houses: the first took seven months (a second home, so no chain); the second took four months, again no chain. Just so slow, one query at a time.

givemushypeasachance · 14/01/2026 14:36

The Cabinet Office published a report on their cross-government trial of Copilot. https://www.gov.uk/government/publications/microsoft-365-copilot-experiment-cross-government-findings-report/microsoft-365-copilot-experiment-cross-government-findings-report-html

Even in this official report, which was likely polished up to look as good as possible, the short version is that it's useful for saving time on routine, boring admin tasks but not great for anything complicated: "Limitations were observed when dealing with complex, nuanced, or data-heavy aspects of work."

We did a basic test of Copilot in some training: we asked it to generate a list of three real place names where the letters in each name appear in alphabetical order, then asked it to evaluate that list. It gave three places, then said one wasn't alphabetical, one was fictional, etc., so actually the whole thing was incorrect, sorry.
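The property Copilot was being tested on here, letters of a name appearing in alphabetical order, is deterministic and takes only a couple of lines to verify, which is what makes the failure notable. A minimal Python sketch; the example strings are just illustrations, not claims about real place names:

```python
def letters_alphabetical(name: str) -> bool:
    """True if the name's letters appear in alphabetical order."""
    # Compare letters only: ignore case, spaces, hyphens, apostrophes.
    letters = [c.lower() for c in name if c.isalpha()]
    return all(a <= b for a, b in zip(letters, letters[1:]))

print(letters_alphabetical("abbey"))   # True:  a <= b <= b <= e <= y
print(letters_alphabetical("London"))  # False: 'n' follows 'o'
```

Anything a model asserts about a list like this can be checked mechanically before it goes anywhere near a report.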

Microsoft 365 Copilot Experiment: Cross-Government Findings Report (HTML)


Beamur · 14/01/2026 14:40

AI is a language model - people don't understand what they're using. It's not a fact checker.
It's only as good as its sources, so if it's looking at junk, nonsense and bias, that's what you'll get. At the cost of many litres of water to keep enormous data centres ticking over.
High profile cases like this are necessary to show up the flaws and limitations.

SerendipityJane · 14/01/2026 14:45

Beamur · 14/01/2026 14:40

AI is a language model - people don't understand what they're using. It's not a fact checker.
It's only as good as its sources, so if it's looking at junk, nonsense and bias, that's what you'll get. At the cost of many litres of water to keep enormous data centres ticking over.
High profile cases like this are necessary to show up the flaws and limitations.

High profile cases like this are necessary to show up the flaws and limitations.

Which won't matter a hill o'beans. Too many people have spunked too much money into tulips "AI" for it to be reined in now.

It could be the tobacco of the 21st century. Something "everyone" is into that turns out to be deadly.

MrsTerryPratchett · 14/01/2026 14:49

SerendipityJane · 14/01/2026 14:20

Having been a siren voice saying how shite "AI" is for the past 5 years, do I win a prize?

🎖️

catinateacup · 14/01/2026 14:51

theDudesmummy · 14/01/2026 13:45

I thought for a while AI might be helpful (I work in a legal field). Absolutely not! It hallucinates cases all the time. I have tried asking all the different tools (GPT, Gemini, Grok, DeepSeek etc) to provide me with a summary of the case of "R v [my full name]". There is no such case. Each of them came up with a different description of the "case", I was a murderer, a stalker, a child abuser, a fraudster, a drug addict. Each one with a lot of detailed and real-sounding details including the court, the date, reference number etc etc. Really disturbing.

Yes - I don’t use it normally, but a few times I have put technical documents through an AI for summaries just to see what it would come up with. Sadly, the summaries sound very plausible — until you read the original document, and find that the summaries don’t actually bear much relationship to the original. So if you rely on AI summaries, you most probably just never know how you’re getting things wrong. Same for other AI I’ve seen - it all sounds so good, unless you actually know a lot about the subject, and then you can tell how sketchy and sometimes downright wrong it is.

And yet, on other threads about AI we will be constantly told it’s the future of the workplace and we must all get with using it, it’s a better therapist than a person, etc. When reading those threads I just think everyone’s gone mad and is affected by some kind of hallucination themselves. It’s just not very good. I don’t know why anyone would actually rely on it at work? You’d just always be getting stuff wrong and never even know it?

Daytimetellyqueen · 14/01/2026 14:53

HappyFace2025 · 14/01/2026 12:22

He should be sacked, no question.

This absolutely! Everyone knows AI can hallucinate, so it always needs to be dealt with sceptically & really challenged critically if decisions are being based on it.

SerendipityJane · 14/01/2026 14:59

Daytimetellyqueen · 14/01/2026 14:53

This absolutely! Everyone knows AI can hallucinate, so it always needs to be dealt with sceptically & really challenged critically if decisions are being based on it.

Everyone?

Bagsintheboot · 14/01/2026 15:05

My favourite is when my clients come to me and say "oh I don't think we have tax to pay because ChatGPT told me..."

I can't, because of policy, but if I were running my own practice I'd tell them that if they're prepared to take that risk, I'll charge them double to fix the resulting mess.

justasking111 · 14/01/2026 15:07

SerendipityJane · 14/01/2026 14:59

Everyone?

Well more folks are catching on

SerendipityJane · 14/01/2026 15:11

justasking111 · 14/01/2026 15:07

Well more folks are catching on

Never ever underestimate the ability of someone whose job depends on black being white to tell you that you need to listen rather than look.

Too many people are plums deep in "AI" to stop it now. Your best bet is to develop compensating controls where you can.

BellissimoGecko · 14/01/2026 15:16

First the judge in the Sandie Peggie case using it, now a senior policeman. Ffs.

TooBigForMyBoots · 14/01/2026 15:18

It's particularly concerning when you consider the internet is overrun by bots. 😱

FuckRealityBringMeABook · 14/01/2026 15:56

AreYouSureAskedNaomi · 14/01/2026 14:10

This

First the judiciary in the Peggie tribunal. Now the police.

This is very serious

What next? A doctor asks AI to calculate a dose and a patient is harmed or dies?

AI shouldn't be a free pass to do whatever you want without consequence. Professional standards and consequences for misconduct should apply regardless of the tools used

There's already been a case where a kid was trying to get high safely and died of an overdose due to AI-hallucinated guidelines.

SerendipityJane · 14/01/2026 16:04

FuckRealityBringMeABook · 14/01/2026 15:56

There's already been a case where a kid was trying to get high safely and died of an overdose due to AI-hallucinated guidelines.

That would be shocking had I not watched a President of the United States suggest injecting bleach to cure Covid.

LalalaLava · 14/01/2026 16:14

Read a story on Sky News about monkeys on the loose somewhere in the US. Attempts to track and capture them were being hampered by people using AI to show the monkeys were "spotted in their garden" etc.

AI is just the next nightmare in the downward spiral :/

SerendipityJane · 14/01/2026 16:21

LalalaLava · 14/01/2026 16:14

Read a story on Sky News about monkeys on the loose somewhere in the US. Attempts to track and capture them were being hampered by people using AI to show the monkeys were "spotted in their garden" etc.

AI is just the next nightmare in the downward spiral :/

Are you sure they don't mean "ICE agents"?

EasternStandard · 14/01/2026 16:24

BellissimoGecko · 14/01/2026 15:16

First the judge in the Sandie Peggie case using it, now a senior policeman. Ffs.

It reminded me of that judgement too. People will be more careful if it costs them their job.

OhDear111 · 14/01/2026 16:31

@1apenny2apenny Who is higher up than a Chief Constable in terms of operational policing? They call the shots, and if politicians told him what to do, they should be removed and he should have called them out.

canklesmctacotits · 14/01/2026 17:38

The trouble is that governments and regulators everywhere are, either deliberately or because they're congenitally slow, about a light year behind the people who have put money into AI.

Deliberately = countries (not just the USA btw) who want their stock markets to show bubble-like growth fueled by AI to counter inflation and economic stagnation, and/or who are in thrall to tech bros.

Congenitally slow = anyone who's worked in any public sector role anywhere would know that "nimble" isn't typically applicable, except in high stakes military and diplomatic capacities.

This is a really, really thorny thing to address. You'd ideally want a council of ethicists, scientists, lawyers, regulators, finance bros, tech bros, educators, healthcare professionals etc etc etc to convene globally to think all this stuff through and come to a conclusion. In reality, not only has the horse already bolted, it's way over the horizon heading in a direction that literally nobody can guess.
