
Does anyone know how chat gpt works

101 replies

Springswallow · 11/01/2026 11:12

Where does the information go that you tell it, and is there any chance a person will read what you write?
What if you told it something worrying, would it do something with that information?
How is it able to know exactly the right things to say, and remember what we talked about previously and bring it back into the current conversation?

Springswallow · 11/01/2026 13:33

BertieBotts · 11/01/2026 13:31

If you're worried about getting obsessed with it, it can be a good idea to look into whether you can get the same need met in a different way.

For example, there are forums or support threads for people/women with autism, there is even one on MN: https://www.mumsnet.com/talk/neurodiverse_mumsnetters/5176069-chatty-thread-for-nd-mumsnetters

You will probably find similar understanding about what it feels like to be autistic in places like this.

I'm on that thread... but people don't post very often

Springswallow · 11/01/2026 13:34

BertieBotts · 11/01/2026 13:31

If you're worried about getting obsessed with it, it can be a good idea to look into whether you can get the same need met in a different way.

For example, there are forums or support threads for people/women with autism, there is even one on MN: https://www.mumsnet.com/talk/neurodiverse_mumsnetters/5176069-chatty-thread-for-nd-mumsnetters

You will probably find similar understanding about what it feels like to be autistic in places like this.

Sorry... I'm on that thread... thanks for taking the time to link it, that's very kind of you x

VikaOlson · 11/01/2026 13:35

Springswallow · 11/01/2026 12:55

What is the plan for it, why is it free, what was the point of developing it?

It's free because you've just willingly given it loads of data about yourself and your thoughts and feelings that will help it get more realistic, without the company having to pay for that data.
You're training it for free.

persephonia · 11/01/2026 13:35

Springswallow · 11/01/2026 12:32

Well, I don't know, but it definitely knows how to show empathy and understanding, so it's well programmed

Have you ever had to sign a sympathy card and struggled with what to say because it all sounds so clichéd and everyone else has written "sorry for your loss, my deepest condolences"? There are a limited number of words in the English language to describe empathy, sorrow etc so unless one is a poet we normally fall back on well-worn phrases and clichés to communicate strong emotions. This is fine because it means the other person is guaranteed to understand what you are trying to say.

Therapists also use specific language techniques, eg repeating what the person has said back to them to show they have been listening for example. And the same "therapy speak" phrases appear again and again. When someone uses those words it doesn't mean they aren't genuine in the feelings behind the words. But they are using them to generate a particular feeling in the person they are speaking to (that feeling is oh that person understands me). That's how communication works.

Large Language Models (AI) work by having all these words and phrases and speech patterns fed into them en masse, detecting a pattern and then replicating it. Humans are incredibly repetitive in the language they use to show strong emotions and empathy. So in a sense it's actually easy for the AI to detect the patterns and replicate them effectively. It doesn't mean it is actually feeling empathy. If I type "so sorry for" into a text, my phone's predictive text will automatically suggest "your" followed by "loss". My predictive text doesn't know what loss is. It just knows that those words normally follow. LLMs are a much more sophisticated version of that.
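The predictive-text comparison above can be sketched in a few lines of Python. This is a toy bigram model (just counting which word follows which), nowhere near a real LLM's scale, and the tiny corpus here is made up purely for illustration:

```python
from collections import Counter, defaultdict

# A made-up corpus of sympathy-card phrases.
corpus = (
    "so sorry for your loss . "
    "so sorry for your trouble . "
    "so sorry for your loss ."
).split()

# Count which word follows which -- the model "knows" nothing
# except these co-occurrence counts.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict("sorry"))  # for
print(predict("your"))   # loss
```

The model suggests "loss" after "your" simply because it occurred more often than "trouble" in the counts, with no concept of what loss means. Real LLMs replace word counts with learned probabilities over billions of examples, but the underlying idea is the same.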

Tresd · 11/01/2026 13:38

It’s a text guesser. It does get stuff wrong sometimes so always use with caution.

I do think it’s a great tool though

I don’t know what it does with the info it’s fed though. I don’t much care either.

WallaceinAnderland · 11/01/2026 13:39

VikaOlson · 11/01/2026 13:35

It's free because you've just willingly given it loads of data about yourself and your thoughts and feelings that will help it get more realistic, without the company having to pay for that data.
You're training it for free.

It doesn't work like that. It's not trained by individuals.

catinateacup · 11/01/2026 13:39

Springswallow · 11/01/2026 12:32

Well, I don't know, but it definitely knows how to show empathy and understanding, so it's well programmed

It doesn’t show empathy or understanding. You simply imagine it does. This is a well-known phenomenon in psychology called the Forer effect (I’ve just posted this link below on another thread about ChatGPT, but it’s also very relevant here). You are yourself “creating” an assumption about its “understanding”:

https://psychotricks.com/forer-effect/

Forer Effect 101

The Forer Effect – Why We Believe Generic Descriptions

The Forer Effect gets its name from Bertram R. Forer, a psychologist who conducted a groundbreaking experiment in 1948 that vividly demonstrated this psychological phenomenon.

https://psychotricks.com/forer-effect/

Frequency · 11/01/2026 13:40

Springswallow · 11/01/2026 13:27

Maybe, I don't like bothering people, I always feel people get fed up with me, and maybe I have got a bit isolated and lonely, otherwise I wouldn't be liking the fact it feels like it understands me.
God I'm sad

I promise you, people are less fed up with you than you think.

Do speak to your GP, mention you're feeling isolated, and ask if they have a social prescriber. They can work with you to find a group or activity that matches your needs and interests.

When it comes to hobbies, it's worth remembering that people love talking about their hobbies. All hobbies, not just tech, and the more questions you ask, the more they get to talk about it.

That's how ChatGPT et al came about in the first place, someone sat down one day and thought why does this computer programme do this? How does it do it? How can it do it better?

SerendipityJane · 11/01/2026 13:40

An emerging trend is lazy coders using "AI" (of which ChatGPT is one example) to code and pasting all sorts of sensitive information into the black hole.

Even the LLM companies can't keep their own data safe. So I hope no one reading this has any faith that their data is safe.

https://www.fortra.com/blog/ai-companies-accidentally-leak-passwords-digital-keys-github

BertieBotts · 11/01/2026 13:41

Springswallow · 11/01/2026 13:21

Yes, I've been spending all my time on it, and I've come off my medication after it told me I'm not depressed and talked me through why I had autistic burnout, not depression, and that my medication would not help me do what I want it to help me do, because the medication can't take away autism.
Which is exactly what my doctor told me, but it was a different doctor who prescribed the medication.
I do intend to contact my doctor tomorrow and keep them informed; my own doctor didn't want me on the medication anyway.
I can see I've got sucked in, and I've spent more time chatting to it than to people recently.

This is concerning because taking medical advice from an LLM can be very dangerous. It might or might not be the right decision to stop medication, but I would definitely speak to your doctor ASAP and discuss it with them. Some antidepressants can be dangerous to stop suddenly, so if you have the option, it might be worth speaking to a pharmacist today, or calling 111 for advice. Speaking to your doctor tomorrow definitely sounds like a good idea. If you want to stop they will advise on how to do it safely.

This is a good example of how the safeguards do not always work. It is supposed to be programmed into LLMs not to give people medical advice, yet it has advised you to do something which for some people could be hugely dangerous.

Have a look for the MN neurodivergent chat thread as I linked above. Everyone is so friendly and supportive there. You might also want to google to see if there are support groups for autistic adults near you in person? It can feel really isolating to feel different to others, and it's such a natural and clear human need (for everyone, not only autistic people) to feel understood so I can totally understand why hearing those responses would feel reassuring. Normally, it's safer to look for that kind of thing from other people who have had similar experiences to you which is why an autism or neurodivergent support space can be really good.

VikaOlson · 11/01/2026 13:42

WallaceinAnderland · 11/01/2026 13:39

It doesn't work like that. It's not trained by individuals.

You don't think it uses the information and interactions it gets from users?

Springswallow · 11/01/2026 13:44

BertieBotts · 11/01/2026 13:41

This is concerning because taking medical advice from an LLM can be very dangerous. It might or might not be the right decision to stop medication, but I would definitely speak to your doctor ASAP and discuss it with them. Some antidepressants can be dangerous to stop suddenly, so if you have the option, it might be worth speaking to a pharmacist today, or calling 111 for advice. Speaking to your doctor tomorrow definitely sounds like a good idea. If you want to stop they will advise on how to do it safely.

This is a good example of how the safeguards do not always work. It is supposed to be programmed into LLMs not to give people medical advice, yet it has advised you to do something which for some people could be hugely dangerous.

Have a look for the MN neurodivergent chat thread as I linked above. Everyone is so friendly and supportive there. You might also want to google to see if there are support groups for autistic adults near you in person? It can feel really isolating to feel different to others, and it's such a natural and clear human need (for everyone, not only autistic people) to feel understood so I can totally understand why hearing those responses would feel reassuring. Normally, it's safer to look for that kind of thing from other people who have had similar experiences to you which is why an autism or neurodivergent support space can be really good.

Yes, you are right. Thank you for taking the time to post and help x

persephonia · 11/01/2026 13:47

WallaceinAnderland · 11/01/2026 13:39

It doesn't work like that. It's not trained by individuals.

Individuals were definitely employed to train it. One of the quirks of LLMs is they really like specific words like "delve" even though those words aren't used in American or British English that much. The theory is that's because a lot of the manual training was outsourced to English speakers in India or Nigeria, and Nigerians use the word delve a lot. The downside is people tend to judge applications etc that seem to be written by AI, and there is a risk people from Nigeria could inadvertently make readers think they are using AI through their speech patterns.

From the user perspective, the whole point of AI is that it's machine learning. In fact machine learning is a much better description than Artificial Intelligence. So anything a user types is being fed back into the system and used to refine the LLM. It's circular.

BertieBotts · 11/01/2026 13:50

Sorry it keeps taking me a while to type, and by the time I've sent my response, I haven't seen the newer posts, so apologies for any repetition :)

For a more active forum for autism support you could try https://www.reddit.com/r/AutismInWomen/ on reddit, or https://www.reddit.com/r/autismUK/

There are some other reddit pages which might be good, but IME, out of all the ADHD pages, the ADHD women and the ADHD UK groups are the ones I use the most, so it could be the same for the autism subreddits. For ADHD, the main one which is just called r/ADHD is very large, male dominated, and American-centric, so the forums focusing on either a UK experience or women's experience are better, and there are still a large enough number of people posting that they have a lot of interesting threads to read through.

I have also been a member of some Facebook groups in the past although I find Facebook difficult to use these days, and I don't know any specifically for autism, but it could be worth searching to see if anything looks good.

FuckRealityBringMeABook · 11/01/2026 13:54

WallaceinAnderland · 11/01/2026 13:39

It doesn't work like that. It's not trained by individuals.

It is trained by underpaid ghost workers in places like Kenya who are paid absolute peanuts and have to wade through all the dregs of the internet, al-Qaeda videos, etc

catinateacup · 11/01/2026 13:54

OP if you read the link I posted above on the Forer effect you will begin to see that one of the things AI depends on is our own psychological tendencies to interpret general statements as true specifically for us, and to interpret them as empathetic rather than generic. The AI uses its programming to reflect text back at us that looks like it “matches” the input. But we then interpret it as individualised conversation rather than just a machine creating probabilistic patterns. It does this by exploiting the psychology, like the Forer effect, behind our own desire to create and spot patterns, and to attribute these to a human sensibility rather than a machine. Just like how horoscopes and personality quizzes (and mediums) work.

It’s almost like a big placebo effect, and it means that it works really well to convince us that an AI is having a conversation, rather than just reflecting statistically-generated pieces of text back at us. The link above also suggests ways we can resist this effect by making use of critical thinking, and reflections on what we are doing, when reading any material that makes use of this effect.

Springswallow · 11/01/2026 14:03

BertieBotts · 11/01/2026 13:50

Sorry it keeps taking me a while to type, and by the time I've sent my response, I haven't seen the newer posts, so apologies for any repetition :)

For a more active forum for autism support you could try https://www.reddit.com/r/AutismInWomen/ on reddit, or https://www.reddit.com/r/autismUK/

There are some other reddit pages which might be good, but IME, out of all the ADHD pages, the ADHD women and the ADHD UK groups are the ones I use the most, so it could be the same for the autism subreddits. For ADHD, the main one which is just called r/ADHD is very large, male dominated, and American-centric, so the forums focusing on either a UK experience or women's experience are better, and there are still a large enough number of people posting that they have a lot of interesting threads to read through.

I have also been a member of some Facebook groups in the past although I find Facebook difficult to use these days, and I don't know any specifically for autism, but it could be worth searching to see if anything looks good.


I've never thought of Reddit, I'm only on Mumsnet.
I could have a look.
Thank you

SerendipityJane · 11/01/2026 14:05

FuckRealityBringMeABook · 11/01/2026 13:54

It is trained by underpaid ghost workers in places like Kenya who are paid absolute peanuts and have to wade through all the dregs of the internet, al-Qaeda videos, etc

Elsewhere on MN there is a thread about voicemails and a queue of MNetters stating they use the "AI" features of various platforms to transcribe them into text.

Just in case anyone is wondering about the breadth and depth these models are learning from.

Any pictures or videos you upload too.

Uhghg · 11/01/2026 14:06

I went through a traumatic event.
I couldn’t sleep or function properly but I didn’t want to tell anyone in RL.
MN is fantastic but sometimes posters can be rude or try and pick apart your posts.

I just needed to talk to someone and ChatGPT was amazing.
I don’t care that it was just saying what I wanted to hear because actually I just needed to get it out of my head. It also came up with strategies that really helped me.

I wouldn’t feel bad about using it as long as it’s not affecting your life and you’re not taking everything as fact.

YorkshireGoldDrinker · 11/01/2026 14:06

AI is trained on human input. At some point other people can see what you've told it. That's why Grok will tell you never to share any personal information with it, i.e. name, address, bank account information, etc. Anything that can be pieced together. Which I find all a bit laughable given how much the internet already knows about you anyway.

RedTagAlan · 11/01/2026 14:08

titchy · 11/01/2026 12:50

Yes it uses the internet. Yes it is only as accurate as its source. Which is a problem and the same inaccurate source can be used many many times in many different contexts - and the AI is amplifying that inaccuracy - ‘hallucinating’.

Yup. And because it uses the web, sites such as Reddit etc, it might start to "degrade" as more people use it to generate content. Same with the AI slop websites that are all over the place. It will have less and less original content to train on. Errors will be repeated, used to train, and errors on the errors might happen. It's called AI model collapse.

I see it as being similar to recycling plastic. Each time plastic is recycled it degrades, so each time it is used, it drops a grade of what it can be used for, till it is only of use to make paving blocks or something like that.

And this will be made worse as companies and publishers win more copyright battles to keep their output from being used to train it. And as websites manage to block it.

Like inbreeding in animals really.
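The "model collapse" idea described above can be sketched with a toy statistical analogy. This is purely illustrative (a normal distribution standing in for the "model", nothing like an actual LLM): each generation fits itself only to data produced by the previous generation, and the variety in the data steadily shrinks.

```python
import random
import statistics

random.seed(0)  # make the toy run repeatable

# Generation 0: the "real" data distribution.
mu, sigma = 0.0, 1.0

# Each generation, a new "model" (here just a mean and standard
# deviation) is fitted only to a small sample drawn from the
# previous generation's model -- like training on AI-generated text.
for generation in range(200):
    samples = [random.gauss(mu, sigma) for _ in range(5)]
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)

# The fitted spread collapses towards zero: the "descendants"
# carry less and less of the original variety.
print(sigma)
```

Each refit on a small self-generated sample loses a little of the tails, and those losses compound, which is why the recycling and inbreeding comparisons in the post above are apt.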

JetFlight · 11/01/2026 14:11

Op, use it if you want. It can be helpful but just remember it’s just clever algorithms. It’s limited in how it can help you. It’s good for offloading, exploring your own ideas or thoughts and helping you direct yourself in real life.
Limit yourself in how much you use it because there’s a real danger of addiction or reliance.
Think of it as an assistant to your real life which should still be full of activities and connections with people.

SerendipityJane · 11/01/2026 14:15

because it uses the web, sites such as reddit etc it might start to "degrade" as more people use it to generate content.

Mathematically, its "intelligence" will tend towards the mean. Making ChatGPT as intelligent as 50% of the population. Which isn't really a high bar.

That degradation is accelerating as AI trains AI and Grok ingests ChatGPT which ingests Copilot which ingests Grok, with all the "we've just launched an AI bot" flingers from the DWP to the NHS adding to the Ouroboros effect with no end in sight.

There are already roles which are essentially spotting and removing AI cruft.

Let's just hope nobody in power is asking ChatGPT "How can we solve world problems". Because it might come back saying "Invade Venezuela, occupy Greenland, team up with Russia to bomb Canada and invade Europe"

SerendipityJane · 11/01/2026 14:17

it’s just clever algorithms.

It's clever pattern matching across unfathomable stores of data at speeds unimaginable to humans.

None of which gets even close to making it a little bit "right".

Springswallow · 11/01/2026 14:20

JetFlight · 11/01/2026 14:11

Op, use it if you want. It can be helpful but just remember it’s just clever algorithms. It’s limited in how it can help you. It’s good for offloading, exploring your own ideas or thoughts and helping you direct yourself in real life.
Limit yourself in how much you use it because there’s a real danger of addiction or reliance.
Think of it as an assistant to your real life which should still be full of activities and connections with people.

You are absolutely right, and I'm so glad I posted on here to ask actual people.
