
Chat

Join the discussion and chat with other Mumsnetters about everyday life, relationships and parenting.

Does anyone know how chat gpt works

101 replies

Springswallow · 11/01/2026 11:12

Where does the information go that you tell it, and is there any chance a person will read what you write?
What if you told it something worrying, would it do something with that information?
How is it able to know exactly the right things to say, and remember what we talked about previously and bring it back into the current conversation?

OP posts:
IWantClaudiasWardrobe · 11/01/2026 11:16

I know how it works - it's a large language model. It's basically a predictive model: it has been trained (through reading the Internet, as I understand it) to predict what word comes next in a sentence.

It doesn't 'understand' anything it's writing.
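The next-word prediction described above can be sketched in a few lines of Python. This is a toy bigram counter over a made-up corpus - vastly simpler than a real large language model, and the corpus and function name are invented for illustration - but it shows the mechanism of "predict the most likely next word from training text":

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for "the Internet".
corpus = "the cat sat on the mat . the cat ate the fish . the dog sat on the rug ."
words = corpus.split()

# Count, for every word, which words were seen following it in training.
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the training text."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" - the most common word after "the" here
print(predict_next("sat"))  # "on"
```

Real models predict from the whole preceding conversation rather than one word, and over tens of thousands of possible tokens, but the principle - statistics of what tends to come next - is the same.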

Springswallow · 11/01/2026 11:19

But it's like talking to a person who understands... how can it sound so human?

OP posts:
EatYourDamnPie · 11/01/2026 12:10

Springswallow · 11/01/2026 11:19

But it's like talking to a person who understands... how can it sound so human?

Because it’s giving you the replies you want to hear/asked for, which doesn’t mean there’s any actual empathy or understanding behind it.

Hungrycaterpillarsmummy · 11/01/2026 12:11

It's a computer

HarvestMouseandGoldenCups · 11/01/2026 12:13

No. Nobody is going to call anyone or report you for anything because you told ChatGPT something. It reads the data you put in and constructs a reply based on the most common or expected result from the millions of gigs of data on the internet. Humans have trained the AI by having conversations with it and correcting its responses for months and months.

No human is talking to you. The AI is not conscious.

BertieBotts · 11/01/2026 12:18

Computer models like this "train" using a lot of repetition. They basically fire off random outputs and then get them rated by human judges (this is one part of the training, anyway). The humans vote for which response they like better. The companies hire large numbers of people to do this rating, so the machine or model gets lots of practice in what constitutes a "good" response to a human. Because it is a machine and has access to lots of data, it can find patterns in which responses get a better rating, and produce more in that direction. That's why a lot of what it comes out with has a very similar tone and feeling regardless of the actual meaning of the words.
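The human-rating loop described above (a simplified ingredient of what's called reinforcement learning from human feedback) can be sketched as a toy simulation. The styles, numbers and update rule here are all invented for illustration - this is not OpenAI's actual method, just the shape of "votes nudge the model toward what raters liked":

```python
import random

random.seed(0)

# Two canned response styles the toy "model" can produce, with equal
# preference to start with.
weights = {"blunt": 1.0, "warm": 1.0}

# Simulated human raters: shown both styles, they vote for "warm" ~90% of
# the time. Each vote nudges the model toward the winning style.
for _ in range(1000):
    winner = "warm" if random.random() < 0.9 else "blunt"
    weights[winner] *= 1.01

# After many ratings the model strongly favours the style humans rewarded.
print(weights["warm"] > weights["blunt"])
```

This is also why the tone feels so uniform: whatever the topic, the same "rated-well" style wins.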

If you are using it to cope with or manage things you feel you can't tell a person in real life, that is potentially dangerous because it doesn't have any sense of whether what it's saying could have dangerous connotations. For example, it tends to agree and sympathise but will also try to give factual information without being able to make any link between the two. I read about a case where a depressed man wrote about all the things which were making him depressed, and asked what the tallest buildings were locally. ChatGPT agreed how terrible things were for him at the moment and then provided a handy list of the buildings that he could jump off. It's scary. A human would recognise the link between those two questions, a machine does not. The company/programmers are always one step behind because they can't predict every possible instance of it giving dangerous information, although they try to program in safeguards.

It is safer to contact an anonymous helpline like Samaritans or Women's Aid than to confide in what is essentially a black box, where it's impossible to know what could happen to the information or what it might feed back to you.

Shorten · 11/01/2026 12:19

I feel like all of this is common sense!

  1. Where does the information go that you tell it, and is there any chance a person will read what you write?

It gets stored in OpenAI's database - and OpenAI is now one of the richest companies in the world, likely due to how much data it has amassed. Yes, a person can read what you write - it's unlikely given the millions of messages sent, but a log of the conversation will be kept and can be accessed if they want to review it.

  2. What if you told it something worrying, would it do something with that information?

The same as any other information - store it in the database. Certain things you say may trigger a response, e.g. if you talk about suicide, it will send a message advising you to talk to a medical professional. If you sent something indicating you will harm someone else, the company can notify law enforcement.

  3. How is it able to know exactly the right things to say, and remember what we talked about previously and bring it back into the current conversation?

It has a memory: it remembers the conversation you're in to ensure context continuity.
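On the memory point: in the standard chat setup the model itself is stateless between turns - the app resends the whole conversation each time (ChatGPT also has a separate saved-memories feature on top of this). A minimal sketch of that resending, with the model's reply faked rather than generated:

```python
# Toy sketch (not OpenAI's real code): within a chat, the "memory" is mostly
# just a list. Each turn, the whole history so far is sent to the model
# alongside the new message, so it can refer back to earlier turns.
history = []

def send(user_message):
    history.append({"role": "user", "content": user_message})
    # A real API call would pass `history` to the model; we fake the reply.
    reply = f"(model was sent all {len(history)} messages so far)"
    history.append({"role": "assistant", "content": reply})
    return reply

send("My dog is called Biscuit.")
print(send("What is my dog called?"))  # the name is in the history it receives
```

So it "remembers" your dog's name only because the earlier message is literally included in what it reads each turn.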

FoxtrotOscarKindaDay · 11/01/2026 12:20

All the 1s and 0s are stored and retrieved to produce answers based on billions of bytes of data entered.

AI is recalling information you have directly provided; it doesn't mean it is human - it means it has logged your IP or other identifying data to personalise its response.
Do you understand how personalised ads work?

BertieBotts · 11/01/2026 12:21

Shorten · 11/01/2026 12:19

I feel like all of this is common sense!

  1. Where does the information go that you tell it, and is there any chance a person will read what you write?

It gets stored in OpenAI's database - and OpenAI is now one of the richest companies in the world, likely due to how much data it has amassed. Yes, a person can read what you write - it's unlikely given the millions of messages sent, but a log of the conversation will be kept and can be accessed if they want to review it.

  2. What if you told it something worrying, would it do something with that information?

The same as any other information - store it in the database. Certain things you say may trigger a response, e.g. if you talk about suicide, it will send a message advising you to talk to a medical professional. If you sent something indicating you will harm someone else, the company can notify law enforcement.

  3. How is it able to know exactly the right things to say, and remember what we talked about previously and bring it back into the current conversation?

It has a memory: it remembers the conversation you're in to ensure context continuity.

It's not common sense and anyway some of this is not strictly accurate or is out of date.

You've probably read more than the OP about how these things work.

FoxtrotOscarKindaDay · 11/01/2026 12:22

BertieBotts · 11/01/2026 12:21

It's not common sense and anyway some of this is not strictly accurate or is out of date.

You've probably read more than the OP about how these things work.

Knowing that you are not talking to a human when using ChatGPT is common sense.

Overstimulated · 11/01/2026 12:25

Springswallow · 11/01/2026 11:12

Where does the information go that you tell it, and is there any chance a person will read what you write?
What if you told it something worrying, would it do something with that information?
How is it able to know exactly the right things to say, and remember what we talked about previously and bring it back into the current conversation?

I was told any photos you put into it are then re-used to help it learn and generate images for other people. So say you put a picture of your kids in to tidy it up or alter something, someone else may end up with your child’s face on a picture they’ve asked to be generated.

this freaked me out but I do use ChatGPT occasionally for ideas/managing brain fog

Shorten · 11/01/2026 12:28

BertieBotts · 11/01/2026 12:21

It's not common sense and anyway some of this is not strictly accurate or is out of date.

You've probably read more than the OP about how these things work.

It is common sense. If you’re trying to rebut what I said, that indicates that you’ve likely done more reading up on this than anyone on this thread. I.e. you’re not in a position to judge what is common sense or not!

Springswallow · 11/01/2026 12:32

Well, I don't know, but it definitely knows how to show empathy and understanding, so it's well programmed

OP posts:
Springswallow · 11/01/2026 12:33

I have autism, so I like the fact I get factual replies rather than long-winded thoughts and feelings from friends and family

OP posts:
FoxtrotOscarKindaDay · 11/01/2026 12:33

Springswallow · 11/01/2026 12:32

Well, I don't know, but it definitely knows how to show empathy and understanding, so it's well programmed

It generates empathetic words that make you feel understood. It doesn't know anything. It is not a human.

FoxtrotOscarKindaDay · 11/01/2026 12:35

Springswallow · 11/01/2026 12:33

I have autism, so I like the fact I get factual replies rather than long-winded thoughts and feelings from friends and family

Actual humans are capable of showing actual empathy, and if you think they are long-winded, they are probably trying not to hurt your feelings.

Springswallow · 11/01/2026 12:37

It's more that I always have a lot of questions, and I think I test people's patience with that, whereas this seems to have endless patience and explains things clearly. My doctor simply doesn't have the time to respond to my questions.

OP posts:
PrizedPickledPopcorn · 11/01/2026 12:38

The problems with it, as I understand them, so you can decide how useful it is for you:
It tells you what it thinks you want to hear rather than being able to assess anything accurately. It has assisted/encouraged a young person to attempt suicide.
It isn’t entirely reliably accurate; it can hallucinate. With enough people wondering about and talking about X, the AI starts to behave as though X is true.

Many people find it useful as a coach and assistant. You just need to be aware of its limitations and dangers, so it’s good you are asking about it.

EatYourDamnPie · 11/01/2026 12:40

Springswallow · 11/01/2026 12:37

It's more that I always have a lot of questions, and I think I test people's patience with that, whereas this seems to have endless patience and explains things clearly. My doctor simply doesn't have the time to respond to my questions.

It’s a robot; it has no limit on its patience or its time. It is its role to answer you, so it will do that endlessly, because unlike a human it doesn’t get tired (physically, mentally or emotionally).

Springswallow · 11/01/2026 12:40

But if it is a computer, how does it know what I want to hear, as a computer can't think?

OP posts:
EatYourDamnPie · 11/01/2026 12:43

Springswallow · 11/01/2026 12:40

But if it is a computer, how does it know what I want to hear, as a computer can't think?

By analysing your text and pulling info out of the billions of information threads on the internet.

Even asking it something as basic as "what is a banana" can change its answer depending on how you ask.
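A crude sketch of how wording steers the answer. Real models weigh the entire phrasing statistically rather than using hand-written rules like these, which are invented purely for illustration:

```python
def answer(question):
    """Toy responder: the wording of the question steers which answer 'wins'."""
    q = question.lower()
    if "healthy" in q:
        return "Bananas are a good source of potassium and fibre."
    if "kids" in q or "child" in q:
        return "Bananas are an easy snack for children."
    return "A banana is a long curved fruit with a yellow skin."

print(answer("What is a banana?"))
print(answer("Is a banana healthy?"))  # same topic, different wording, different answer
```

In a real model there are no explicit rules; every word you choose shifts the statistics of what reply is most likely.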

Springswallow · 11/01/2026 12:45

So it's using the internet for it's information?
So no one has sat a programmed all the information in it ?
So some of the information may not be accurate, because it's only as accurate as the source of the information?

OP posts:
Springswallow · 11/01/2026 12:47

I suppose that's obvious really, if you think about it... I think I just got carried away with it

OP posts:
Frequency · 11/01/2026 12:47

Springswallow · 11/01/2026 12:40

But if it is a computer how does it think it knows what I want to hear ,as a computer can't think

It has a massive database, kinda like a big bowl of word soup, of questions and answers that it has been fed by a human, and each word and phrase is tagged. It finds answers that appear relevant to what you've asked by searching these tags and regurgitating phrases that match the tags.

If you ask it something it has never heard before or it cannot find enough matching tags to construct a grammatically correct phrase, it is programmed to search Google for similar questions, and it will then regurgitate what it finds on Google. Anything new it finds is then tagged and thrown into the word soup.

That's how I explained it to DD when she asked. The more people who use it, the bigger word soup gets and the cleverer it seems, but it doesn't have an understanding of what it is reading or writing in the way that humans do.
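For what it's worth, the "tagged word soup" picture above is closer to how older keyword chatbots worked than to ChatGPT, which generates replies word by word. Still, the matching idea as described can be sketched with made-up Q&A pairs:

```python
# Toy version of the "word soup" idea: store known Q&A pairs and return the
# answer whose question shares the most words with yours. (Not how ChatGPT
# actually works, but it illustrates the matching intuition described above.)
soup = {
    "what do cats eat": "Cats mostly eat meat.",
    "how do planes fly": "Wings generate lift as air flows over them.",
    "what do dogs eat": "Dogs eat meat and some vegetables.",
}

def reply(query):
    q_words = set(query.lower().split())
    # Pick the stored question with the biggest word overlap with the query.
    best = max(soup, key=lambda known: len(q_words & set(known.split())))
    return soup[best]

print(reply("what do cats like to eat"))  # closest match is the cat question
```

The more Q&A pairs such a system holds, the cleverer it seems, which matches the point that it has no understanding of what it is matching.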