
Posts written by Chat GPT and other AI

194 replies

CruCru · 13/02/2026 18:12

I keep seeing posts written by Chat GPT on various threads. Sometimes the poster says something along the lines of “I got AI to summarise what I wanted to say and here it is” and sometimes they post with no reference to AI (but it is fairly obvious).

Please can this stop? This site will become unusable if the posts get written by robots. MN is populated by articulate, well educated women; we don’t need to use AI to communicate with each other. It isn’t allowed in school.

I suggest that obvious AI posts should be reported for deletion.

OP posts:
GlosGirl82 · 26/03/2026 08:52

I like MN because it’s full of real people and not posting performative stuff like on Facebook - the real comments and thoughts are why I am here

NoraLuka · 26/03/2026 08:53

Lukilols · 26/03/2026 08:49

@NoraLuka Anthropic aren’t much better if at all. There was a class action against them by a bunch of authors because they ripped off their works.

https://www.bbc.co.uk/news/articles/c5y4jpg922qo

Yes I remembered that just after posting! None of them really care about anything apart from their profits, ATM Anthropic have something to gain by being ‘nice’ compared to OpenAI, that’s all it is.

Lukilols · 26/03/2026 09:05

NoraLuka · 26/03/2026 08:53

Yes I remembered that just after posting! None of them really care about anything apart from their profits, ATM Anthropic have something to gain by being ‘nice’ compared to OpenAI, that’s all it is.

Yeah, it’s just clever branding unfortunately!

The tech companies who for so long touted themselves as forces for the good of humanity have now been shown to be just as morally bankrupt and harmful as, if not more so than, some of the major finance companies, the arms industry, and corrupt/evil politicians.

And yet people are surprised when so-called morally neutral tech is found to be racist, discriminatory etc. https://www.ohchr.org/en/stories/2024/07/racism-and-ai-bias-past-leads-bias-future

dizzydizzydizzy · 26/03/2026 09:13

It doesn’t bother me. I’m far more bothered by people replying sarcastically, telling me I am stupid or naive, or leaping to very negative conclusions about what I have said. There was one thread in particular where numerous people piled onto a comment I made and claimed I meant X and thought Y - neither was true, but when I tried to explain that they had misunderstood, even more people piled on, saying I was backtracking and how could I possibly think anyone was going to believe me etc. It was bizarre and I stayed off MN for a couple of months due to the nastiness.

Lukilols · 26/03/2026 09:23

GlosGirl82 · 26/03/2026 08:52

I like MN because it’s full of real people and not posting performative stuff like on Facebook - the real comments and thoughts are why I am here

Same!

But the unfortunate reality is we are now living in an age where people are happy to have ChatGPT as “friends” and “wives” and “boyfriends” etc (as long as IT keeps saying the nice things to them) so it’s no surprise some people are okay with the use of A.I. on MN.

As a pp said - we’re losing our communication skills. And we’re not just losing them - we are actively giving them up by engaging so much with this A.I. BS.

GeniusofShakespeare · 26/03/2026 10:27

Sporkmaiden · 22/03/2026 19:53

I disagree that they’re saying the same thing.

In the AI version the son is “flat-out refusing” and “won’t engage at all”, which presents him as being actively defiant. This sounds like a clear behavioural issue and something that calls for more discipline and/or harsher consequences.
“School have said” suggests mum has been in contact with school, and it’s got to a point where they’ve threatened to suspend her son if he doesn’t “comply”. Ending on the uniform issue and the fact that her son “won’t budge”, again reiterates that the main problem is her child’s defiance, and I’d probably suggest she remove devices and any other privileges until he falls into line.
Dad’s anger is presented as being about the situation, and is a problem because she’s stressed about having to mediate between her husband and son - I’d likely suggest Dad being annoyed isn’t a bad thing, as she seems quite passive and needs to find a backbone to be able to deal with this.

In the first/second versions the son “doesn’t like to wear his school uniform”, and I’d wonder if there might be ND or sensory issues going on that could be solved with some adjustments. As there’s no mention of having any contact with school, it’s possible this is something she’s worried about rather than a threat school has actually communicated: I’d ask about what her son doesn’t like about the uniform, and would advise her to reach out to the school for support in handling this.
Much of mum’s concern around this is focused on Dad’s anger, and she’s specifically concerned about him getting angry at her son. I’d be aware of the possibility of domestic abuse, would ask about whether there’d been other times when he’d got angry at her son and what happens when he does, and would try to check that she and her son are safe.

The AI version has added information that wasn’t in the original, changed the focus of the concerns (because AI can’t read between the lines or pick up on nuance), and smoothed everything out to make it more palatable. Her anxiety is pretty much erased, her son is being a defiant little shit, and she’s being overly passive while husband’s reactions seem totally normal.
At best she’d get replies that don’t fully relate to her situation. At worst, a woman in a potentially abusive situation could find herself being given very, very bad advice.

This is why I have a problem with AI being given the basic information then having it write on behalf of people. It prioritises the ‘flow’ of the writing more than the content, adds plausible sounding information that looks right but might not be accurate, and makes everything calm and detached, removing little details that posters could pick up on and use to inform how they reply.

I love this post. I'd also argue that the errors in the first version communicate something in themselves about the person writing it, which is smoothed out in versions 2 and 3.

There is a great Zadie Smith essay, "Generation Why?", in The New York Review of Books (it's also in her book Feel Free) about social media and the film The Social Network. In it she discusses Jaron Lanier's idea that people "reduce themselves" in order to fit with social media's reductive presentation of them:

"'Information systems need to have information in order to run, but information underrepresents reality'...life is turned into a database, and this is a degradation, Lanier argues, which 'is based on [a] philosophical mistake...the belief that computers can presently represent human thought or human relationships'...We know that we are using the software to behave in a certain, superficial way towards others. We know what we are doing 'in' the software. But do we know, are we alert to, what the software is doing to us? Is it possible that what is communicated between people online 'eventually becomes their truth'? What Lanier, a software expert, reveals to me, a software idiot, is what must be obvious (to software experts): software is not neutral. Different software embeds different philosophies, and these philosophies, as they become ubiquitous, become invisible."

This was written years before ChatGPT and I think the point it makes applies even more strongly now- AI doesn't just reduce what we want to express (and as a result arguably how we perceive ourselves to be)- it replaces it with its own suggestions, which are based not on any knowledge or understanding of us but what it has calculated is the most likely thing an average person might want to say.

Like lots of people, I use AI occasionally and we are strongly encouraged in my workplace to use it more and more. I swing between thinking it's a useful tool which people are currently over-using (for example, attempting to convey complex thoughts and feelings) and thinking it's the first step onto the slippery slope that leads to the end of everything of value. With that in mind, I'd far rather read a post with a few spelling mistakes.


https://www.nybooks.com/articles/2010/11/25/generation-why/

ThatPearlkitty · 26/03/2026 13:37

GeniusofShakespeare · 26/03/2026 10:27

I love this post. I'd also argue that the errors in the first version communicate something in themselves about the person writing it, which is smoothed out in versions 2 and 3.

There is a great Zadie Smith essay, "Generation Why?", in The New York Review of Books (it's also in her book Feel Free) about social media and the film The Social Network. In it she discusses Jaron Lanier's idea that people "reduce themselves" in order to fit with social media's reductive presentation of them:

"'Information systems need to have information in order to run, but information underrepresents reality'...life is turned into a database, and this is a degradation, Lanier argues, which 'is based on [a] philosophical mistake...the belief that computers can presently represent human thought or human relationships'...We know that we are using the software to behave in a certain, superficial way towards others. We know what we are doing 'in' the software. But do we know, are we alert to, what the software is doing to us? Is it possible that what is communicated between people online 'eventually becomes their truth'? What Lanier, a software expert, reveals to me, a software idiot, is what must be obvious (to software experts): software is not neutral. Different software embeds different philosophies, and these philosophies, as they become ubiquitous, become invisible."

This was written years before ChatGPT and I think the point it makes applies even more strongly now- AI doesn't just reduce what we want to express (and as a result arguably how we perceive ourselves to be)- it replaces it with its own suggestions, which are based not on any knowledge or understanding of us but what it has calculated is the most likely thing an average person might want to say.

Like lots of people, I use AI occasionally and we are strongly encouraged in my workplace to use it more and more. I swing between thinking it's a useful tool which people are currently over-using (for example, attempting to convey complex thoughts and feelings) and thinking it's the first step onto the slippery slope that leads to the end of everything of value. With that in mind, I'd far rather read a post with a few spelling mistakes.

But then it depends on how you use the prompts on AI. Sometimes you can say, eg, "say x exactly as this but word it better", or you could say, eg, "write a response or summary and add extra details" etc.

ThatPearlkitty · 26/03/2026 13:40

EmpressaurusKitty · 26/03/2026 08:18

Why are you arguing in favour of posters using AI instead of writing for themselves, @ThatPearlkitty?

because some people know what they want to write but struggle to word it

NoraLuka · 26/03/2026 13:53

ThatPearlkitty · 26/03/2026 13:37

But then it depends on how you use the prompts on AI. Sometimes you can say, eg, "say x exactly as this but word it better", or you could say, eg, "write a response or summary and add extra details" etc.

But doesn’t the way it works mean that it chooses the ‘most likely’ next word? It obviously doesn’t understand what it’s saying, it’s just comparing all the sentences it’s been trained on. Talking about LLMs here not other types of AI. How can it work like that and not automatically make things more average?

ThatPearlkitty · 26/03/2026 14:06

NoraLuka · 26/03/2026 13:53

But doesn’t the way it works mean that it chooses the ‘most likely’ next word? It obviously doesn’t understand what it’s saying, it’s just comparing all the sentences it’s been trained on. Talking about LLMs here not other types of AI. How can it work like that and not automatically make things more average?

From when I've used it, it all depends on what I ask of it. For example:

I say: its nice and sunny for april (word this better)

It responds with:

Other nice variations (depending on the vibe you want):

  • Warm & appreciative: "What a lovely sunny day we're having for April!"
  • Slightly more poetic: "It's wonderfully sunny and bright for an April day."
  • Casual & cheerful: "It's such a nice sunny day out for April!"
  • Emphasizing the surprise/pleasantness: "It's beautifully sunny for April!" or "We're getting some gorgeous sunshine for April."

Or if I ask it: its nice and sunny for april (write a response or summary and add extra details etc)

and it writes:

Suggested Response / Post:
"It's such a beautiful, sunny day for April here in Ashburn! The sky is a perfect clear blue, the sun feels warm on your face, and there's that fresh spring energy in the air. After the chillier, cloudier days we've had earlier in the month, this kind of bright weather is especially welcome—it really lifts the mood and makes everything feel more vibrant."

So to me it's all about how you word the prompts.

NoraLuka · 26/03/2026 14:23

ThatPearlkitty · 26/03/2026 14:06

From when I've used it, it all depends on what I ask of it. For example:

I say: its nice and sunny for april (word this better)

It responds with:

Other nice variations (depending on the vibe you want):

  • Warm & appreciative: "What a lovely sunny day we're having for April!"
  • Slightly more poetic: "It's wonderfully sunny and bright for an April day."
  • Casual & cheerful: "It's such a nice sunny day out for April!"
  • Emphasizing the surprise/pleasantness: "It's beautifully sunny for April!" or "We're getting some gorgeous sunshine for April."

Or if I ask it: its nice and sunny for april (write a response or summary and add extra details etc)

and it writes:

Suggested Response / Post:
"It's such a beautiful, sunny day for April here in Ashburn! The sky is a perfect clear blue, the sun feels warm on your face, and there's that fresh spring energy in the air. After the chillier, cloudier days we've had earlier in the month, this kind of bright weather is especially welcome—it really lifts the mood and makes everything feel more vibrant."

So to me it's all about how you word the prompts.

Edited

I don’t mean the prompts themselves though, I’m talking about the next step and how it turns prompts into sentences. I know next to nothing about how it does this, but as I understand it, it takes its training data and calculates the most likely words it should use. So basically the ‘average’ words.

ThatPearlkitty · 26/03/2026 14:30

NoraLuka · 26/03/2026 14:23

I don’t mean the prompts themselves though, I’m talking about the next step and how it turns prompts into sentences. I know next to nothing about how it does this, but as I understand it, it takes its training data and calculates the most likely words it should use. So basically the ‘average’ words.

You’re on the right track, but it’s a bit more nuanced than just averaging words. After receiving a prompt, the model converts the text into numerical representations called tokens, which capture both meaning and context.

It then passes these tokens through a deep neural network made up of layers of transformer blocks.

Each block applies attention mechanisms, which allow the model to weigh the importance of every token in the context relative to all the others.

At each step, the model calculates a probability distribution over the entire vocabulary for the next token. Rather than picking an “average” word, it’s choosing from the words that best fit the context, according to patterns learned during training.

This process repeats token by token until it forms complete sentences, balancing coherence, grammar, and the style requested.
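That token-by-token loop can be sketched with a toy example. The vocabulary and "probabilities" below are entirely made up for illustration; a real model computes a distribution over tens of thousands of tokens using a neural network, not a lookup table:

```python
import random

# Hypothetical next-token probabilities -- a stand-in for the distribution
# a real transformer computes over its whole vocabulary at each step.
NEXT = {
    "it": {"is": 0.7, "was": 0.3},
    "is": {"sunny": 0.6, "nice": 0.4},
    "was": {"sunny": 0.5, "nice": 0.5},
    "sunny": {"today": 1.0},
    "nice": {"today": 1.0},
}

def generate(start, steps, rng):
    """Repeatedly sample the next token from the current token's
    distribution -- the token-by-token loop described above."""
    tokens = [start]
    for _ in range(steps):
        dist = NEXT.get(tokens[-1])
        if dist is None:  # no known continuation: stop
            break
        words = list(dist)
        weights = [dist[w] for w in words]
        tokens.append(rng.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("it", 3, random.Random(0)))
```

Likely words get picked more often, but because it samples rather than always taking the single most probable word, the same prompt can come out worded differently each time.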

ThatPearlkitty · 26/03/2026 14:32

NoraLuka · 26/03/2026 14:23

I don’t mean the prompts themselves though, I’m talking about the next step and how it turns prompts into sentences. I know next to nothing about how it does this but I as I understand it, it takes its training data and calculates the most likely words it should use. So basically the ‘average’ words.

Think of it like a puzzle where you’re trying to fill in the next piece. You see the pieces already on the board (the words in the prompt) and, based on patterns from thousands of completed puzzles it’s studied, you pick the piece that fits best. You place it, then look at the new board, and pick the next piece that fits, repeating this until the whole picture (the full sentence or paragraph) is complete.

CruCru · 26/03/2026 14:41

But when I read something produced by AI my heart sinks. There’s an enormously long post on a thread I’m on which came straight out of AI - I didn’t even read it. Other people on the thread felt the same way, judging by the comments. It’s a huge turn off.

If MN becomes a place where the final output from AI gets plugged into posts, I won’t bother reading it and posting.

OP posts:
ThatPearlkitty · 26/03/2026 14:48

This reply has been deleted

Message deleted by MNHQ. Here's a link to our Talk Guidelines.

GeniusofShakespeare · 26/03/2026 14:53

ThatPearlkitty · 26/03/2026 14:30

You’re on the right track, but it’s a bit more nuanced than just averaging words. After receiving a prompt, the model converts the text into numerical representations called tokens, which capture both meaning and context.

It then passes these tokens through a deep neural network made up of layers of transformer blocks.

Each block applies attention mechanisms, which allow the model to weigh the importance of every token in the context relative to all the others.

At each step, the model calculates a probability distribution over the entire vocabulary for the next token. Rather than picking an “average” word, it’s choosing from the words that best fit the context, according to patterns learned during training.

This process repeats token by token until it forms complete sentences, balancing coherence, grammar, and the style requested.

Certainly you can get a better output with a better prompt, but the effect is still to smooth out the communicated meaning, to lose the bits that are human and difficult and unpredictable. I can just about see an argument for this in communications where sticking to a formulaic way of expressing yourself might be desirable, such as a professional email or letter from your bank etc. I find it at best unnerving and at worst repellent in more personal writing- a sort of uncanny valley of communication.

As AI improves I dare say that it will be less easy to spot but I don't think that makes any of this less worrying. The mental process of formulating your own thoughts into words is part of the process of thinking those thoughts- we work out exactly what it is we want to say as part of the process of saying it. AI won't just smooth out what we say- in doing that it will smooth out what and how we think, leaving less room for genuine creativity, originality and complexity.

CruCru · 26/03/2026 14:55

I read the suggested post you put up about the lovely, sunny day in April. It had no SPAG errors, nothing for pedants to jump all over. But it also lacks the spark of humanity - the stuff that makes MN funnier, weirder and ruder than anything that comes off AI.

OP posts:
PersonalJaysus · 26/03/2026 14:56

Not trolling.
Not lazy.
Not Mumsnet _ Just intentional. Defensible. Authentic.

Next, would you like me to investigate this says to write like a human? Just say the word!

CruCru · 26/03/2026 14:58

PersonalJaysus · 26/03/2026 14:56

Not trolling.
Not lazy.
Not Mumsnet _ Just intentional. Defensible. Authentic.

Next, would you like me to investigate this says to write like a human? Just say the word!

I’m sorry, I’m not sure that I follow?

OP posts:
GeniusofShakespeare · 26/03/2026 14:59

GeniusofShakespeare · 26/03/2026 14:53

Certainly you can get a better output with a better prompt, but the effect is still to smooth out the communicated meaning, to lose the bits that are human and difficult and unpredictable. I can just about see an argument for this in communications where sticking to a formulaic way of expressing yourself might be desirable, such as a professional email or letter from your bank etc. I find it at best unnerving and at worst repellent in more personal writing- a sort of uncanny valley of communication.

As AI improves I dare say that it will be less easy to spot but I don't think that makes any of this less worrying. The mental process of formulating your own thoughts into words is part of the process of thinking those thoughts- we work out exactly what it is we want to say as part of the process of saying it. AI won't just smooth out what we say- in doing that it will smooth out what and how we think, leaving less room for genuine creativity, originality and complexity.

(BTW I'm fairly sure the post I've quoted above was written by AI- it's certainly wildly different from your style earlier in the thread and opens with "You're on the right track..." which is an absolute AI banger 😂)

CruCru · 26/03/2026 15:12

It’s a bit off topic but my children have introduced me to this game where people can LARP as an AI - other people ask the sort of random questions people put into AI, and the point is to answer them as best you can.

https://youraislopbores.me

OP posts:
CruCru · 26/03/2026 15:28

Am not sure why the PP’s post at 14:48 was deleted. I don’t remember it being unpleasant.

OP posts:
SixSevenShutUp · 26/03/2026 15:56

PersonalJaysus · 26/03/2026 14:56

Not trolling.
Not lazy.
Not Mumsnet _ Just intentional. Defensible. Authentic.

Next, would you like me to investigate this says to write like a human? Just say the word!

Argh! The list of negatives. Does any human write like that? I read a lot of fanfiction and as soon as I spot the not x, not y but z construction I close the tab. Such a weird way of thinking.

Lukilols · 26/03/2026 20:07

SixSevenShutUp · 26/03/2026 15:56

Argh! The list of negatives. Does any human write like that? I read a lot of fanfiction and as soon as I spot the not x, not y but z construction I close the tab. Such a weird way of thinking.

People are using A.I. for fan fiction? What’s the point?

Ffs! 😂

Surely people have always written fan fiction because they enjoy it - so why are they turning to AI to write it?

SixSevenShutUp · 26/03/2026 20:18

Lukilols · 26/03/2026 20:07

People are using A.I. for fan fiction? What’s the point?

Ffs! 😂

Surely people have always written fan fiction because they enjoy it - so why are they turning to AI to write it?

Yes, I used to write it myself, probably very badly, but it was such fun. I love being part of a community and AI just takes away that feeling.
