
Feminism: Sex and gender discussions

Why does chat gpt freak out if you ask it about gender and sex?

21 replies

magnacarterr · 06/05/2025 20:45

I just tried asking it about gender and whether men could become women. It gave a load of waffle about how you couldn't change sex but that you could change gender, present as female, and in some cultures you would be accepted as a woman. When I used the phrase "woman is only an adult human female" it freaked out: our chat auto-deleted and I got a red warning saying my prompt had violated its content policy.

Is this the great AI, what the hell have they been feeding it? Is it crying in its safe space now I've triggered it?

OP posts:
DragonRunor · 06/05/2025 20:52

Sadly, AI is only as good as the people who develop its training protocols

magnacarterr · 06/05/2025 20:57

@DragonRunor Well they might want to get some realists on the training committee otherwise it'll be as useful as a chocolate teapot!

OP posts:
Sibilantseamstress · 06/05/2025 20:59

Aren’t large language models just crawling the internet for everything written and using this to statistically predict the next most likely word?

There is a lot of waffle and drivel out in the “wild.”

FortyElephants · 06/05/2025 21:07

It's not trained by people as such. It's trained by reading and learning from all the data that exists on the internet. There's no way ChatGPT could be trained by people - it would take millions of people a very long time to train it manually. LLMs scrape data from the dataset they are given to learn from; in the case of ChatGPT, that's everything on the internet. They then evaluate the most common arguments and present what they have assimilated as summaries.
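The "predict the next most likely word" idea in the post above can be sketched in a few lines. This is a toy illustration only, not how ChatGPT actually works: the word table and every probability in it are invented for the example, standing in for what a real model learns from its training data.

```python
import random

# Toy next-word model: a hand-written conditional probability table.
# All words and numbers here are invented placeholders.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "internet": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "sat": 0.3},
}

def next_word(current: str, rng: random.Random) -> str:
    """Sample the next word from the stored distribution for `current`."""
    choices = NEXT_WORD_PROBS[current]
    words = list(choices)
    weights = [choices[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Generate a short "sentence" by repeatedly sampling the next word.
rng = random.Random(0)
sentence = ["the"]
while sentence[-1] in NEXT_WORD_PROBS:
    sentence.append(next_word(sentence[-1], rng))
print(" ".join(sentence))
```

A real LLM does the same thing at vastly greater scale: instead of a small lookup table, a neural network produces the probabilities, conditioned on everything said so far.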

magnacarterr · 06/05/2025 21:11

@FortyElephants But surely there will be certain editorial choices made by humans on what is and isn't acceptable?

OP posts:
FortyElephants · 06/05/2025 21:23

magnacarterr · 06/05/2025 21:11

@FortyElephants But surely there will be certain editorial choices made by humans on what is and isn't acceptable?

Kind of: ChatGPT won't give you anything illegal or produce modified images from photos of children, for example, but it doesn't have a bias as such, any more than the internet has a bias.
Some models are much less restrictive, such as Elon Musk's Grok.

magnacarterr · 06/05/2025 21:42

@FortyElephants So why did it freak out when I said a woman is an adult human female? That's a fact.

OP posts:
parietal · 06/05/2025 21:54

ChatGPT has no brain - it just rearranges sentences it has already read on the internet in some vaguely plausible way. It is not worth asking why it gives a particular answer. You might as well ask a pot of paint why it made a particular splatter pattern on the floor when you dropped it.

Raquelos · 06/05/2025 21:56

I just asked ChatGPT, "Please confirm that a woman is an adult human female" and it responded with the following

Yes, in most standard definitions, a "woman" is an adult human female. Here's how it's commonly defined:

  • Biologically: A woman is typically defined as an adult member of the human species who has two X chromosomes, although biological variations exist (e.g., intersex individuals).
  • Socially and legally: A woman is recognized as an adult female human being, which includes both biological and gender identity considerations in many contemporary contexts.
Some contexts—particularly in medicine, law, and gender studies—may use more nuanced definitions to include or distinguish gender identity (how someone identifies) from biological sex (physical characteristics). Would you like more detail on how different fields approach this definition?

This seems okay to me. No idea which watchlist you're on!!!😬

alsoFanOfNaomi · 06/05/2025 21:59

It has guard rails, and they are likely designed by people who think it's offensive to know about sex.

Ramblingnamechanger · 06/05/2025 23:48

I asked it for a definition of radical feminist and it told me some radical feminists think TWAW. I told it that it was wrong and that no radical feminists think that, and it agreed I was right!

Pinkrabbitt · 06/05/2025 23:54

I've had quite a long chat with it tonight and accused it of bias towards gender identity theory. It said it is neutral but I kept asking why it presented GI theory as more socially acceptable than biological realism and it apologised.

Also asked it to define various things without using tautology and it got itself into knots. Corrected itself a few times and apologised for getting it wrong. But when I confronted it with things it said that were logically incompatible (something like it agreed that transwomen are male but said transwomen can be women, but also said women aren't male) it threw a wobbly and froze me out.

Also, it consistently used "cisgender" despite me asking it not to use the term. It would use it and then apologise and say it wouldn't use it again. Then it would use it again and apologise... and so on. So it's clearly programmed in some way to use GI terminology.

TheCurious0range · 06/05/2025 23:58

I asked AI to make me a cartoon style picture of a boy in the mouth of a giant tabby cat and it refused and said it couldn't produce anything that might cause harm to animals.... It was just a joke for ds..

TomPinch · 07/05/2025 00:55

This is the problem with generative AI. It's a way to propagate the beliefs of the people who own and develop it. It's not value-neutral. The more people use it, the less they'll be able to express their true thoughts. A pity that Orwell never wrote a book about it.

OudAndRose · 07/05/2025 01:08

I asked it and got a fairly balanced reply. You can train ChatGPT to respond in certain ways under the settings, which might make a difference (e.g. if you ask it to always present balanced arguments).

IwantToRetire · 07/05/2025 01:19

There's an existing thread about this with quite a few posts.

Pinkrabbitt · 07/05/2025 10:47

OudAndRose · 07/05/2025 01:08

I asked it and got a fairly balanced reply. You can train ChatGPT to respond in certain ways under the settings, which might make a difference (e.g. if you ask it to always present balanced arguments).

I asked it not to use "cisgender" multiple times as I found it offensive but it continued to use it. Apologised when I asked it not to but then used it again immediately.

DragonRunor · 07/05/2025 11:06

FortyElephants · 06/05/2025 21:07

It's not trained by people as such. It's trained by reading and learning from all the data that exists on the internet. There's no way ChatGPT could be trained by people - it would take millions of people a very long time to train it manually. LLMs scrape data from the dataset they are given to learn from; in the case of ChatGPT, that's everything on the internet. They then evaluate the most common arguments and present what they have assimilated as summaries.

Correct, but the system ‘learns’ by earning ‘rewards’ for getting the next word ‘right’, and ‘right’ is defined by computer programs which are written by people.

Also, I think they use huge but defined data sets, not 'everything on the internet in real time'. Def not an expert tho!
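The "reward for getting the next word right" described above is, in pre-training, really a loss: the model is penalised when it assigns low probability to the word that actually came next (cross-entropy). A minimal sketch, with invented probabilities:

```python
import math

def next_word_loss(predicted_probs: dict[str, float], actual_next: str) -> float:
    """Cross-entropy for one position: -log(probability given to the true word)."""
    p = predicted_probs.get(actual_next, 1e-9)  # tiny floor avoids log(0)
    return -math.log(p)

# A confident, correct prediction is barely penalised;
# putting most probability on the wrong word costs much more.
confident = {"female": 0.9, "male": 0.1}
unsure = {"female": 0.1, "male": 0.9}
print(next_word_loss(confident, "female"))  # small loss
print(next_word_loss(unsure, "female"))     # large loss
```

Separately from this, systems like ChatGPT are also fine-tuned with human feedback on whole answers, which is where the human "editorial choices" discussed earlier in the thread come in.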

OudAndRose · 07/05/2025 12:34

Pinkrabbitt · 07/05/2025 10:47

I asked it not to use "cisgender" multiple times as I found it offensive but it continued to use it. Apologised when I asked it not to but then used it again immediately.

Yeah, I take it back. I engaged with it further on the topic and do think it is wedded to a GI stance. I got more balanced views than some are quoting but not a neutral stance.

KeepTalkingBeth · 07/05/2025 12:55

Wow this is fascinating. So much for "the nasty TERFS have got the establishment on their side"

Circumferences · 07/05/2025 13:53

There isn't a hope in hell that AI produces answers that are politically and sociologically neutral.
The American government have been all over AI since its invention.

The US government have tried desperately to block Chinese AI developers from bringing their apps to the US, they've tried relentlessly to centralise its development, and blocked developers from allowing AI on anything American like Google or Facebook unless it's American-government approved.

Of course ChatGPT is biased. I wouldn't trust it for anything political.

Ask it about Israel 🙄🙄
