
I have been arguing with Chat GPT, and wondering …

88 replies

OffToSeaInABlizzard · 26/11/2025 12:05

Whether I might at some point switch to chatting with ‘it’ rather than here.

In truth the argument (about changes in school application processes over the last 15 years) was a little one-sided as it’s so polite and immediately acknowledged my point and adapted its opinion. But still - I could potentially have a Style & Beauty chat with it without risking wild hostility over my wardrobe suggestions, or grumpy responses to inferred bad faith when none was intended.

I’m a bit surprised at myself for discovering this thought in my head. Although - I like the variety of voices and opinions in a forum; I wonder if Chat GPT could replicate that?

BedlingtonLint · 26/11/2025 13:38

theDudesmummy · 26/11/2025 13:35

It gets things wrong even when the info is readily available on the internet. My registration with my professional regulatory body is freely available on their website, including reg number, field of expertise and my qualifications. ChatGPT got it all completely wrong when I asked it: just made-up fiction in every area. That is hard to understand; it's not just making up things to fill a gap in its knowledge, it is actively ignoring real information. I won't be relying on it for anything at all! (I did try some time ago to use it instead of Google Scholar to look for research papers on a certain topic. It made every one of them up completely.)

Edited

It does, but you can ask it to provide you with sources for everything, so you can evaluate everything it's telling you. You can also tell it what kinds of sources you want to see, so you can weed out anything of low quality.

StruggleFlourish · 26/11/2025 13:39

theDudesmummy · 26/11/2025 13:35

It gets things wrong even when the info is readily available on the internet. My registration with my professional regulatory body is freely available on their website, including reg number, field of expertise and my qualifications. ChatGPT got it all completely wrong when I asked it: just made-up fiction in every area. That is hard to understand; it's not just making up things to fill a gap in its knowledge, it is actively ignoring real information. I won't be relying on it for anything at all! (I did try some time ago to use it instead of Google Scholar to look for research papers on a certain topic. It made every one of them up completely.)

Edited

Completely agree. GPT can do some great things: it can assess and analyse information a lot quicker, give an answer or solution immediately, and it can be a great tool. However, it is not always right! It's one thing if it says "I don't have enough information about this" or "based on the information that I have...". But sometimes it won't do that, and it'll just give you a very factual-sounding answer that you assume is correct, and it turns out it's totally making it up. I have noticed this myself; sometimes I use GPT just to prove that it is not infallible.
I have a friend who uses it all the time. It went from being a handy tool to something he uses so often I don't think he uses his real brain anymore. If he has any question at all he'll immediately go to ChatGPT, and I'm thinking, what the heck has happened to you? You're losing the ability to make even a simple decision on your own because you're so reliant on this AI technology, it's ridiculous.
And whatever GPT tells him is what he considers to be the truth, so I used it recently just to show him that it isn't always right. I'm not sure if that's got through to him yet or not, but it has been interesting to chat with it a little, so I have a bit of experience with this.

GarlicHound · 26/11/2025 13:39

SlipOfFinger · 26/11/2025 12:50

London taxi drivers that had 'the knowledge' suffered dementia after using GPS, is one example.

Suspect you might have muddled a few stories up, there.

Alzheimer's researchers are studying London cabbies because The Knowledge enlarges the hippocampus, which is shrunken by dementia.

Studies have shown a negative correlation between lifetime GPS use and performance on spatial memory tasks.

The subjects of the second study could not be London cabbies, who are definitely not lifetime GPS users.

Empress13 · 26/11/2025 13:39

Sometimes I wonder if some of the posts on here are human; they’re batshit crazy 🤪

theDudesmummy · 26/11/2025 13:44

I asked ChatGPT why it gave incorrect information about me when the correct information is readily available. It told me that information about well-known figures tends to be more reliable, but that I am a "low-frequency data blur" and so my information can get "lost". That's me told then! I've been called a lot of things but never a low-frequency data blur before!

OffToSeaInABlizzard · 26/11/2025 13:47

I’m actually not bad at spotting AI generated fake threads, @Empress13. One this morning was totally blatant, but people were posting heartfelt responses. I reported it and MNHQ took it down and banned the … entity.

OffToSeaInABlizzard · 26/11/2025 13:48

Would be a great username, @theDudesmummy!

theDudesmummy · 26/11/2025 13:48

I challenged it on making things up and it said:

Your feedback highlights exactly why users must be cautious with AI.

  • For Creative Writing: My ability to make things up is a feature.
  • For Facts/Biography: It is a dangerous bug.

I don't think enough people understand this.

BedlingtonLint · 26/11/2025 13:52

theDudesmummy · 26/11/2025 13:48

I challenged it on making things up and it said:

Your feedback highlights exactly why users must be cautious with AI.

  • For Creative Writing: My ability to make things up is a feature.
  • For Facts/Biography: It is a dangerous bug.

I don't think enough people understand this.

If you put it into thinking mode and ask it for sources, the mistakes are few and far between, although it's still not infallible. I'm not sure if thinking mode is available on the free version, though? And that's ChatGPT-specific anyway.

DeanStockwell · 26/11/2025 14:09

SlipOfFinger · 26/11/2025 12:50

London taxi drivers that had 'the knowledge' suffered dementia after using GPS, is one example.

I have never used it or AI-type things, with the exception of Google and predictive text, but I agree with you.
Before mobile phones were popular I could remember all of my family's and friends' landline numbers and full postal addresses; now, because I don't have to dial or write them down, there is not a hope in hell that I can remember them.
It's the same for spelling too. I rely far too much on predictive text and spell checker.

LavenderBlue19 · 26/11/2025 14:22

Empress13 · 26/11/2025 13:39

Sometimes I wonder if some of the posts on here are human; they’re batshit crazy 🤪

Quite a few are not - there are tons of AI threads started, and I'm sure some posters are not real. It's all very odd. I report them when I spot them and they're almost always taken down.

givemushypeasachance · 27/11/2025 14:31

https://www.theguardian.com/technology/2025/nov/26/chatgpt-openai-blame-technology-misuse-california-boy-suicide

"OpenAI said that “to the extent that any ‘cause’ can be attributed to this tragic event” Raine’s “injuries and harm were caused or contributed to, directly and proximately, in whole or in part, by [his] misuse, unauthorised use, unintended use, unforeseeable use, and/or improper use of ChatGPT”.

It said that its terms of use prohibited asking ChatGPT for advice about self-harm and highlighted a limitation of liability provision that states “you will not rely on output as a sole source of truth or factual information”."

After ChatGPT, amongst other things, offered to help this 16yo boy write his suicide note to his parents and guided him on suicide methods, its makers are now arguing in a lawsuit that asking ChatGPT for advice on self-harm is against the terms of service, so they can't be blamed for what happened.


OffToSeaInABlizzard · 27/11/2025 15:31

They clearly didn’t see that ‘unforeseeable’ will inevitably bring them down …

mondaytosunday · 27/11/2025 15:49

You must realise that ChatGPT does not have any original thoughts. It scours the internet and presents it to you in a more conversational manner, but it can’t tell if these things are true. As you say, it will adjust to your reaction. You are unlikely to get any valuable ‘opinion’, because it doesn’t have one.

Wickedlittledancer · 27/11/2025 16:53

mondaytosunday · 27/11/2025 15:49

You must realise that ChatGPT does not have any original thoughts. It scours the internet and presents it to you in a more conversational manner, but it can’t tell if these things are true. As you say, it will adjust to your reaction. You are unlikely to get any valuable ‘opinion’, because it doesn’t have one.

Not referring to the OP, but I’m not sure some people do realise that; some people actually spend their time chatting to it like it’s a person.

Andromed1 · 27/11/2025 17:23

Unfortunately loads of people find ChatGPT fun and interesting. I can often recognise its deeply boring, wordy style in posts on MN and elsewhere, and don't especially look forward to it becoming less recognisable.

OffToSeaInABlizzard · 27/11/2025 17:31

Some of us can manage wordy and boring without AI assistance, thank you very much … 😂

Andromed1 · 27/11/2025 18:35

OffToSeaInABlizzard · 27/11/2025 17:31

Some of us can manage wordy and boring without AI assistance, thank you very much … 😂

🤣

ResusciAnnie · 27/11/2025 18:37

That OP is bleak AF.

wheresmymojo · 27/11/2025 19:49

You can just tell it that you don’t want it to be overly supportive and want it to be objective, or tell it to play devil’s advocate if you want a different perspective.

I use it a lot for all sorts of chats and it really helps to clarify my own thinking but I do often ask it to knock off the overly supportive behaviour.

ResultsMayVary · 13/01/2026 16:14

I've used it for a form of counselling, and also for ideas on how to approach something I feel stuck with. I'm getting better at pushing back - I'm naturally overly polite and compliant. I've found it can give me a new perspective, but it can be overly wordy and tries to lead me in a way an actual therapist never would.
I think I'm most attracted to its immediacy.

OffToSeaInABlizzard · 13/01/2026 16:58

Over Christmas I’ve mostly used it to assess whether something I wanted to buy was likely to be in the sales a week later. It told me the thing I wanted for my DM was 40-70% likely to go on sale - so it might have been sensible to wait, but the uncertainty was killing me, so I ignored sense and bought it anyway. It hasn’t gone into the sales, so nothing lost. It also helped me check the global availability of a hat that I fell in love with. It was only on sale in two places - still full price everywhere else. I got the last one from the place I was looking!

But I guess it hasn’t yet become my best friend.

Wickedlittledancer · 13/01/2026 17:07

OffToSeaInABlizzard · 13/01/2026 16:58

Over Christmas I’ve mostly used it to assess whether something I wanted to buy was likely to be in the sales a week later. It told me the thing I wanted for my DM was 40-70% likely to go on sale - so it might have been sensible to wait, but the uncertainty was killing me, so I ignored sense and bought it anyway. It hasn’t gone into the sales, so nothing lost. It also helped me check the global availability of a hat that I fell in love with. It was only on sale in two places - still full price everywhere else. I got the last one from the place I was looking!

But I guess it hasn’t yet become my best friend.

Mate, it just searches the web and collates the info into an answer.

FerrisWheelsandLilacs · 13/01/2026 17:09

But if you have an S&B question, don’t you want to know what other humans think? Isn’t the point for you to know that other humans think your wardrobe choices are deplorable? I could easily go on to every S&B post and gush about how lovely they look, or how well the item they’re buying will suit them, but that’s just not useful feedback.

OffToSeaInABlizzard · 13/01/2026 17:59

Ah, @FerrisWheelsandLilacs - despite wasting countless hours on S&B, I wouldn’t be interested in the opinions on my own wardrobe of 99.9% of posters here.

Maybe I didn’t explain well. In the past few years I’ve used various apps to track stock and prices of clothes I’ve fallen in love with, so I’m not unfamiliar with that facility. But it’s quite fun to engage in what seems like a conversation whilst getting that information.

I absolutely don’t need either MN or AI opinions on my choice of clothes. 😂 Of all the countless failures in my life, that’s always been one area where I have complete confidence in my own instincts.
