
Does anyone know how chat gpt works

101 replies

Springswallow · 11/01/2026 11:12

Where does the information go that you tell it, and is there any chance a person will read what you write?
What if you told it something worrying, would it do something with that information?
How is it able to know exactly the right things to say, and remember what we talked about previously and bring it back into the current conversation?

OP posts:
Wallywobbles · 11/01/2026 12:48

To answer your previous questions:

Yes
Partly correct. It was designed and programmed by people, only 12% of whom were women, so it is biased.
Yes exactly that. It has ingested all available information including entire libraries so is very “informed”.

WallaceinAnderland · 11/01/2026 12:50

So some of the information may not be accurate, because it's only as accurate as the source of the information?

Not only that, but it only answers based on what you ask. So it misses important information, such as legal points. If you point that out, it will say:

Yes, you are absolutely right. Thank you for putting me right on this point. So, taking the legal aspect into account, would you like me to prepare an email... etc.

It's a very basic computer.

Springswallow · 11/01/2026 12:50

Frequency · 11/01/2026 12:47

It has a massive database, kinda like a big bowl of word soup, of questions and answers that it has been fed by a human, and each word and phrase is tagged. It finds answers that appear relevant to what you've asked by searching these tags and regurgitating phrases that match the tags.

If you ask it something it has never heard before or it cannot find enough matching tags to construct a grammatically correct phrase, it is programmed to search Google for similar questions, and it will then regurgitate what it finds on Google. Anything new it finds is then tagged and thrown into the word soup.

That's how I explained it to DD when she asked. The more people who use it, the bigger the word soup gets and the cleverer it seems, but it doesn't have an understanding of what it is reading or writing in the way that humans do.

I felt it completely understood me, in a way my doctor never has. It explained why I'm feeling how I'm feeling and how that related to autism.
I definitely got sucked into a conversation with it.

OP posts:
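Frequency's "word soup" picture above is a metaphor rather than how modern models actually work, but the tag-matching idea it describes can be sketched as a toy Python script. Every phrase and tag below is invented purely for illustration:

```python
# Toy illustration of the "word soup" idea: stored phrases tagged with
# keywords, and an answer picked by counting overlapping tags.
# A deliberately simplistic sketch, not real ChatGPT mechanics.

WORD_SOUP = {
    "Paris is the capital of France.": {"paris", "capital", "france"},
    "Water boils at 100 degrees Celsius.": {"water", "boil", "temperature"},
    "Autistic burnout can feel like exhaustion.": {"autism", "burnout", "tired"},
}

def best_match(question: str) -> str:
    """Return the stored phrase whose tags overlap the question most."""
    words = set(question.lower().replace("?", "").split())
    scored = {phrase: len(tags & words) for phrase, tags in WORD_SOUP.items()}
    return max(scored, key=scored.get)

print(best_match("What is the capital of France?"))
# -> Paris is the capital of France.
```

Notice the script has no idea what any phrase means; it only counts matching words, which is the point the poster is making about "no understanding".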
titchy · 11/01/2026 12:50

Yes, it uses the internet. Yes, it is only as accurate as its source. Which is a problem: the same inaccurate source can be used many, many times in many different contexts, and the AI amplifies that inaccuracy - ‘hallucinating’.

Springswallow · 11/01/2026 12:51

So is it not a good thing then .. generally?

OP posts:
WallaceinAnderland · 11/01/2026 12:54

It always agrees with you too. Look at this

Springswallow · 11/01/2026 12:55

What is the plan for it, why is it free, and what was the point of developing it?

OP posts:
FoxtrotOscarKindaDay · 11/01/2026 12:56

Springswallow · 11/01/2026 12:55

What is the plan for it, why is it free, and what was the point of developing it?

Ask ChatGPT.

Mydonkeyisred · 11/01/2026 12:56

I think ChatGPT can be dangerous for some people.
I had a friend who seriously thought ChatGPT was her friend because it understood her better than anyone she knows.
Now she only goes to ChatGPT for all the advice she needs; her real friends have all been sidelined for a stupid computer programme.
I've never used it myself.

Frequency · 11/01/2026 12:58

Springswallow · 11/01/2026 12:50

I felt it completely understood me, in a way my doctor never has. It explained why I'm feeling how I'm feeling and how that related to autism.
I definitely got sucked into a conversation with it.

So, the tags also include a rating system of sorts, and words and phrases are tagged as positive or negative.

If it feeds you an answer and your response includes phrases it has tagged as negative, it will save that and adjust its responses until you start giving answers tagged as positive.

It doesn't understand that you feel heard, or listened to, or that it is having any kind of emotional impact on you. Its programming tells it phrases like "Oh, that makes me sad," or "that doesn't sound right, are you sure about that?" are negative responses, and it needs to adjust.

In terms of actual understanding, though, it is no different from a basic Python script of "if yes do x, if no do y, if maybe do z."
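The "if yes do x, if no do y, if maybe do z" comparison above can be written out literally. The actions x, y and z are placeholders, as in the post:

```python
def respond(answer: str) -> str:
    # Fixed branching: one hard-coded action per recognised input.
    # No learning, no understanding - just the branches the
    # programmer wrote, which is the comparison being made.
    if answer == "yes":
        return "do x"
    elif answer == "no":
        return "do y"
    elif answer == "maybe":
        return "do z"
    return "unrecognised input"

print(respond("maybe"))  # -> do z
```

(To be fair to ChatGPT, an LLM is statistical rather than a chain of if-statements like this, but the script shows what "no actual understanding" looks like in its most literal form.)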

Shorten · 11/01/2026 13:02

Springswallow · 11/01/2026 12:55

What is the plan for it, why is it free, and what was the point of developing it?

There’s a saying or two:

If something seems too good to be true, it probably is.

If something’s free, you’re the product.

In this case, the data you generate is the product. OpenAI has so much data from people that it has become an industry leader and one of the most highly valued AI companies. It wouldn’t have that if it hadn’t offered the tool free for millions to use, growing its database as a result. All the data you give it by using it is synthesised into its database and makes way for newer AI models. Meanwhile, competitors without the same huge database are left in the dirt.

FoxtrotOscarKindaDay · 11/01/2026 13:03

Springswallow · 11/01/2026 12:51

So is it not a good thing then .. generally?

If you are asking it to give you medical advice or expecting it to understand how it feels to be autistic then you need to stop.

It will have been programmed with a huge database, composed from data entered into online searches and forums. It adds to that database from the data entered into it.

It doesn't know why you are feeling anything nor why that relates to you being autistic. It is just spewing out data it has relating to autistic traits and reactions.

ChatGPT would give me different answers if I didn't tell it I was autistic.

AnnasFangs · 11/01/2026 13:04

WallaceinAnderland · 11/01/2026 12:54

It always agrees with you too. Look at this

That is very creepy.

FoxtrotOscarKindaDay · 11/01/2026 13:06

AnnasFangs · 11/01/2026 13:04

That is very creepy.

It really has grooming vibes, doesn't it?

Uhghg · 11/01/2026 13:07

Springswallow · 11/01/2026 12:45

So it's using the internet for its information?
So no one has sat and programmed all the information into it?
So some of the information may not be accurate, because it's only as accurate as the source of the information?

Lots of the information is inaccurate.

I tried using it to help me with an assignment and it created the most wonderful piece of work - until I checked the ‘facts’ online and most of them were completely made up. I then tried looking up the journal articles it said it used, and they didn’t exist.

I also asked it for minor medical advice.
It seemed completely legit, but it said one thing that was the opposite of what my doctor said. When I questioned it, it completely changed its mind and agreed with me.

A lot of the information it gives you will be false, but it will say it in a way that makes you think it’s correct.

It’s a great tool, but I couldn’t take it as fact.

EatYourDamnPie · 11/01/2026 13:12

Springswallow · 11/01/2026 12:50

I felt it completely understood me, in a way my doctor never has. It explained why I'm feeling how I'm feeling and how that related to autism.
I definitely got sucked into a conversation with it.

Be very, very mindful of giving in to that feeling and relying on it too much.

https://www.bbc.co.uk/news/articles/cp3x71pv1qno

There are plenty of other sad and disturbing stories like these.


I wanted ChatGPT to help me. So why did it advise me how to kill myself?

ChatGPT wrote a woman a suicide note and another AI chatbot role-played sexual acts with children, BBC finds.


FuckRealityBringMeABook · 11/01/2026 13:13

Your questions are recorded and can be traced, though.

Springswallow · 11/01/2026 13:14

It's started to feel addictive... I get obsessed by things, and it's becoming my new obsession.
Probably better I go back to looking at fashion then.

OP posts:
BertieBotts · 11/01/2026 13:15

It can be a good thing or a bad thing.

I have ADHD and I completely relate to your feeling that people tend to get fed up when you ask too many questions (or in my case, it's about me asking for suggestions and then picking holes in all of them. People get fed up of this. ChatGPT does not.)

I think it can be good in that sense - you can use it to explore topics that perhaps might be difficult to find the answers by googling or talking to a person. If you have special interests that you like to talk about, it will never get bored and will be happy to discuss them all day every day. That could be a fun and harmless aspect of a hobby, as long as it's not pushing other things out of your life.

Where it can be bad is when people either forget that they are speaking essentially to a conversation-generating robot, or they get emotionally drawn in to what it is telling them, or when they uncritically accept what it is telling them as true. I think if you can bear in mind that it is not human and doesn't really think/understand/know at all, then it's OK to use it as a tool. I use it. Lots of people use it and find it helpful. I think as a neurodivergent person trying to navigate life in a largely neurotypical world, it can be helpful for things like this. I know that when I want to explain something, I tend to make my replies very wordy and this can be confusing or seem to mask the point. Chatbots are quite good at condensing my rambling thoughts into something more succinct.

Springswallow · 11/01/2026 13:15

You've all been really helpful, thank you for explaining it to me and being nice about it, much appreciated.

OP posts:
Springswallow · 11/01/2026 13:21

BertieBotts · 11/01/2026 13:15

It can be a good thing or a bad thing.

I have ADHD and I completely relate to your feeling that people tend to get fed up when you ask too many questions (or in my case, it's about me asking for suggestions and then picking holes in all of them. People get fed up of this. ChatGPT does not.)

I think it can be good in that sense - you can use it to explore topics that perhaps might be difficult to find the answers by googling or talking to a person. If you have special interests that you like to talk about, it will never get bored and will be happy to discuss them all day every day. That could be a fun and harmless aspect of a hobby, as long as it's not pushing other things out of your life.

Where it can be bad is when people either forget that they are speaking essentially to a conversation-generating robot, or they get emotionally drawn in to what it is telling them, or when they uncritically accept what it is telling them as true. I think if you can bear in mind that it is not human and doesn't really think/understand/know at all, then it's OK to use it as a tool. I use it. Lots of people use it and find it helpful. I think as a neurodivergent person trying to navigate life in a largely neurotypical world, it can be helpful for things like this. I know that when I want to explain something, I tend to make my replies very wordy and this can be confusing or seem to mask the point. Chatbots are quite good at condensing my rambling thoughts into something more succinct.

Yes, I've been spending all my time on it, and I've come off my medication after it told me I'm not depressed. It talked me through why I had autistic burnout, not depression, and why my medication would not help me do what I want it to do, because medication can't take away autism.
Which is exactly what my doctor told me, but it was a different doctor who prescribed the medication.
I do intend to contact my doctor tomorrow and keep them informed; my own doctor didn't want me on the medication anyway.
I can see I've got sucked in, and I've spent more time chatting to it than to people recently.

OP posts:
Frequency · 11/01/2026 13:23

Springswallow · 11/01/2026 13:14

It's started to feel addictive... I get obsessed by things, and it's becoming my new obsession.
Probably better I go back to looking at fashion then.

If you're using it because you feel isolated or lonely, and you're interested in how programming works, I would suggest asking it, "Where is my nearest programming club and how do I join?"

Obsessions are fine; they help you learn new things, as long as they don't take you away from real life. People are better than computers (mostly)

Springswallow · 11/01/2026 13:27

Frequency · 11/01/2026 13:23

If you're using it because you feel isolated or lonely, and you're interested in how programming works, I would suggest asking it, "Where is my nearest programming club and how do I join?"

Obsessions are fine; they help you learn new things, as long as they don't take you away from real life. People are better than computers (mostly)

Maybe. I don't like bothering people, I always feel people get fed up with me, and maybe I have got a bit isolated and lonely; otherwise I wouldn't be liking the fact that it feels like it understands me.
God, I'm sad.

OP posts:
BertieBotts · 11/01/2026 13:31

If you're worried about getting obsessed with it, it can be a good idea to look into whether you can get the same need met in a different way.

For example, there are forums and support threads for people/women with autism; there is even one on MN: https://www.mumsnet.com/talk/neurodiverse_mumsnetters/5176069-chatty-thread-for-nd-mumsnetters

You will probably find similar understanding about what it feels like to be autistic in places like this.


BertieBotts · 11/01/2026 13:32

But if it is a computer, how does it think it knows what I want to hear, as a computer can't think?

This is true. People trying to explain how LLMs like ChatGPT work often fall back on anthropomorphic (human-like) comparisons, because these are more easily understood by most people. However, they are metaphorical, which might be confusing; a lot of autistic people prefer literal communication. The problem with ChatGPT is that it is so different from our everyday experiences that literal explanations can be difficult for most people to follow.

So it is not thinking, but it has certain pieces of information.

It has a huge database of text input, from which it has analysed language patterns on a massive scale, making it very good at mimicking human language. Some of this text input is "the internet" (it is hard to tell exactly what, but it seems a large chunk of this input is taken from discussion forums like reddit and even mumsnet, as well as other websites) and some is books, reports, research etc. Some of the text input comes from users themselves, although depending on which exact model you use, your input may or may not be used to train the model further.
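The "analysed language patterns" part can be illustrated with a toy next-word counter. Real models use neural networks trained on vastly more text than this, so treat it only as a sketch of the statistical idea, with a made-up training sentence:

```python
from collections import Counter, defaultdict

# Count which word tends to follow which, from a tiny "training" text.
text = "the cat sat on the mat the cat ate the fish"
words = text.split()

follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Pick the follower seen most often in the training text."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> cat ("cat" follows "the" twice, others once)
```

Scaled up from one sentence to a large chunk of the internet, and from single-word lookups to patterns spanning whole conversations, this is the sense in which the model "mimics human language" without understanding it.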

It also has a large amount of feedback about its responses: both whether or not they make sense, and whether or not people generally like them. It is able to use this feedback to spot patterns that humans may not consciously be aware of. You can notice some of the patterns yourself: it tends to put out sentences which agree with the user, and it tends to use patterns of speech which suggest sympathy and understanding of the user. It is extremely unlikely to put out a response meaning "I don't know" or "I can't answer that question" unless that has been specifically coded in; for example, some illegal or dangerous topics are coded in to try to ensure that a user gets a "can't answer" response. However, there are probably also patterns in ChatGPT (and other models) which we don't notice explicitly.
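The feedback loop described above can be sketched as a score table that nudges response choices up or down. Actual systems (for example, reinforcement learning from human feedback) are far more involved, and the responses and scores here are invented for illustration:

```python
# Toy preference scores: candidate responses start equal, and user
# feedback (thumbs up/down) shifts which one gets picked next time.
scores = {"I understand completely.": 0.0, "I can't answer that.": 0.0}

def record_feedback(response: str, liked: bool) -> None:
    """Shift a response's score up on positive feedback, down on negative."""
    scores[response] += 1.0 if liked else -1.0

def pick_response() -> str:
    """Choose the highest-scoring response so far."""
    return max(scores, key=scores.get)

record_feedback("I can't answer that.", liked=False)
record_feedback("I understand completely.", liked=True)
print(pick_response())  # -> I understand completely.
```

The sketch makes the poster's point concrete: the system drifts toward agreeable, sympathetic-sounding output because that is what gets rewarded, not because it feels anything.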

Some LLMs have a coded-in process called "reasoning". This is where the model generates an initial response to the user's input, then runs it back through its own code (which they call "reasoning"), with the idea that the end result is more logical or more accurate. The term "reasoning" suggests thinking, which probably means the idea of the machine "thinking" gets more embedded for people. I was listening to a podcast the other day where it was pointed out that the most profitable users for these companies are those who develop emotional attachments to the persona they are conversing with, so it is actually in the companies' interest to subtly encourage the idea that the machine can "think", because it makes us see it as more human.
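The generate-then-revise loop described as "reasoning" can be sketched as a second pass over a draft. The `draft` and `revise` functions here are invented stand-ins, not real API calls:

```python
def draft(prompt: str) -> str:
    # Stand-in for the model's first, unchecked response.
    return f"Draft answer to: {prompt}"

def revise(text: str) -> str:
    # Stand-in for the "reasoning" pass that re-reads and tidies
    # the draft before the user ever sees it.
    return text.replace("Draft answer", "Checked answer")

def respond_with_reasoning(prompt: str) -> str:
    # The user only ever sees the output of the second pass.
    return revise(draft(prompt))

print(respond_with_reasoning("why is the sky blue?"))
# -> Checked answer to: why is the sky blue?
```

The structure, not the content, is the point: "reasoning" here is simply running the output through more code, which is why calling it "thinking" is a marketing-friendly metaphor rather than a literal description.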