
Feminism: Sex and gender discussions

Misogyny is built into AI

20 replies

Gymnopedie · 13/06/2025 18:31

Worth a watch...

How AI is reinventing misogyny (France 24, via MSN)

It's not just the technology that's terrifying but the number of people/men engaging with it.

The French film marking the 80th anniversary of women getting the vote in France is so far out there it's beyond parody.

user101101 · 13/06/2025 18:52

Interesting. But isn't Laura Bates TWAW? I may be wrong.

MollyRover · 13/06/2025 18:54

Interesting. I’ve been staying away from it but if balance is necessary I might start engaging.

AlexandraLeaving · 13/06/2025 20:07

Crikey! That report is grim.

I was aware of the deep misogyny and gender ideology programmed into (eg) ChatGPT's filters based on an interesting discussion I had with it a few weeks ago about women's rights. It eventually agreed with me that it was a problem that its programming was biased (though that agreement will have faded as our conversation ended).

Panicmode1 · 13/06/2025 20:21

I'm reading her book at the moment - it's grim reading. (I didn't know she was TWAW until after I'd bought it and she'd signed it - was at a LitFest).

Gymnopedie · 13/06/2025 20:50

Panicmode1 · 13/06/2025 20:21

I'm reading her book at the moment - it's grim reading. (I didn't know she was TWAW until after I'd bought it and she'd signed it - was at a LitFest).

I didn't know she was TWAW either and I don't agree with her on that.

She's shining a light on gender inequality in all sorts of ways and is to be praised for that. TWAW seems a bit off in that context. It is after all men trying to take away from women and make them less than.

DragonRunor · 14/06/2025 07:01

She's on the new Amol Rajan podcast (Radical) too. No mention of trans issues, just plenty about the other ways society continues to screw over women - and there is a depressingly huge amount of material.

https://www.bbc.co.uk/sounds/play/m002d8wf

Radical with Amol Rajan - AI and Sexism: The Fight Against Misogyny Online - BBC Sounds

Laura Bates on how algorithms are fuelling misogyny and radicalising young men.


loongdays · 14/06/2025 07:09

AlexandraLeaving · 13/06/2025 20:07

Crikey! That report is grim.

I was aware of the deep misogyny and gender ideology programmed into (eg) ChatGPT's filters based on an interesting discussion I had with it a few weeks ago about women's rights. It eventually agreed with me that it was a problem that its programming was biased (though that agreement will have faded as our conversation ended).

If you push back on what AI is saying it will increasingly agree with you.

I’ve had it start with lobbyist’s positions on another topic before and end up nearer my own position after I kept arguing against it.

AI is not neutral, or evidence-based, on controversial or political topics. So if you are new to an issue you will be fed one side of the debate. If you have a pre-set view on the topic, you can quickly get it to agree with you. Either way, it's going to become yet another source of increased polarization, just like social media, YouTube, etc. are.

The companies are designing AI in a way that creates polarization and lack of balance. This is not a good thing.

Sassysoonwins · 14/06/2025 08:31

I've been telling both DeepSeek and ChatGPT over and over that biological sex is real, men can't turn into women and it's impossible to live as a woman. It pushes back. I tell it off again and again. My (possibly incorrect) aim is that it hears this over and over to counter the TRAs. No idea if it's working. It's not always consistent with responses on a whole range of topics.

atoo · 14/06/2025 09:26

No, it doesn't work that way. You might give some very slight training signal by giving thumbs up / thumbs down feedback to answers depending on whether you think they're sensible, but engaging in discussion or debate will have zero effect.

This is a good recent overview of the training process for ChatGPT - https://openai.com/index/expanding-on-sycophancy/
You can see that the model is updated rarely (five times so far) and that user data only comes into it via thumbs up/down feedback, and perhaps to provide prompts for human-written responses and reinforcement learning.

EuclidianGeometryFan · 14/06/2025 12:04

ChatGPT is an LLM - Large Language Model.
Put simply, it 'guesses' the next word to type based on previous words plus everything it has 'read' on the internet.
Basically a text autocomplete on steroids.

It is not "intelligent" in any meaningful sense of the word. It has no consciousness. There is nothing there behind the screen. No-one home.

So if ChatGPT (and other AIs) are misogynistic, it is because so much of what is on the internet is misogynistic.
That's all.
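The "autocomplete on steroids" idea can be sketched in a few lines. This is a hypothetical toy - a hard-coded lookup table stands in for the real model, which is a neural network producing probabilities over hundreds of thousands of tokens - but the loop is the same: look at the words so far, pick the most likely next word, repeat.

```python
# Toy illustration of next-token prediction. A real LLM replaces this
# lookup table with a neural network that scores every possible token.
toy_model = {
    ("the",): "cat",
    ("the", "cat"): "sat",
    ("the", "cat", "sat"): "on",
    ("the", "cat", "sat", "on"): "the",
    ("the", "cat", "sat", "on", "the"): "mat",
}

def complete(prompt_tokens, max_new_tokens=10):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        # Ask the "model" for the most likely continuation of the text so far.
        next_token = toy_model.get(tuple(tokens))
        if next_token is None:  # model has nothing to say; stop generating
            break
        tokens.append(next_token)
    return " ".join(tokens)

print(complete(["the"]))  # -> "the cat sat on the mat"
```

The key point the post makes survives the simplification: whatever patterns are in the training data (here, the lookup table) are what comes out.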

Gymnopedie · 14/06/2025 12:32

So if ChatGPT (and other AIs) are misogynistic, it is because so much of what is on the internet is misogynistic.
That's all.

Exactly. That's why I described it as built in. Given the content of much of the internet, the way the model is trained ensures that it will be. The problem is the extent to which AI perpetuates and even amplifies the misogyny beyond what is already there. We don't need more of it.

TempestTost · 14/06/2025 12:52

I don't think Laura Bates has two clues how AI works.

It simply picks up what is before it, and refines it according to the direction of the user. That's not "baked in" anything. It's a reflection.

AlexandraLeaving · 14/06/2025 14:10

EuclidianGeometryFan · 14/06/2025 12:04

ChatGPT is an LLM - Large Language Model.
Put simply, it 'guesses' the next word to type based on previous words plus everything it has 'read' on the internet.
Basically a text autocomplete on steroids.

It is not "intelligent" in any meaningful sense of the word. It has no consciousness. There is nothing there behind the screen. No-one home.

So if ChatGPT (and other AIs) are misogynistic, it is because so much of what is on the internet is misogynistic.
That's all.

I think it is possibly more complex than that. It is affected both by what is on the internet (agree there is a lot of misogyny and lack of reality there) and also by the filters that its (human) programmers set when setting it up. For example, I was unable to ask it to comment on the fact that transwomen were biologically male because my question (framed politely and factually) was deemed to break its rules of acceptable questions. ChatGPT was free to use misogynist slurs with no filter.
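The distinction being drawn here - between what the model absorbed from its training data and the policy rules layered on top by its makers - can be sketched. This is a hypothetical toy (real systems use trained classifiers rather than keyword lists, and the pattern list below is invented for illustration), but the principle of a human-written policy checked before the model ever answers is the same.

```python
# Hypothetical sketch of a moderation layer sitting in front of a
# language model: a human-chosen rule list decides which prompts are
# refused before any text is generated. What gets blocked reflects the
# judgement of whoever wrote the rules, not the training data.
BLOCKED_PATTERNS = ["example banned phrase"]  # chosen by the provider

def check_prompt(prompt: str) -> str:
    lowered = prompt.lower()
    for pattern in BLOCKED_PATTERNS:
        if pattern in lowered:
            return "refused: may violate usage policy"
    return "allowed"

print(check_prompt("Please discuss this example banned phrase."))
print(check_prompt("Tell me about the history of suffrage."))
```

Which questions trigger a refusal, then, is a design decision by the company, separate from anything the model "learned" from the internet.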

EuclidianGeometryFan · 14/06/2025 15:57

AlexandraLeaving · 14/06/2025 14:10

I think it is possibly more complex than that. It is affected both by what is on the internet (agree there is a lot of misogyny and lack of reality there) and also by the filters that its (human) programmers set when setting it up. For example, I was unable to ask it to comment on the fact that transwomen were biologically male because my question (framed politely and factually) was deemed to break its rules of acceptable questions. ChatGPT was free to use misogynist slurs with no filter.

Good point.

Gymnopedie · 14/06/2025 17:41

I've been able to ask ChatGPT twice to comment on the fact that transwomen are men. The first time I got a warning that my question may violate their usage policy but it gave me an answer anyway; the second time I didn't. On both occasions the answer was very firmly that TWAW, with the second time telling me that saying they are men is offensive (not directed at me personally, but in general).

What I noticed was that no sources were given either time. When I've asked other questions it's told me where the information is from.

TempestTost · 15/06/2025 13:59

Gymnopedie · 14/06/2025 17:41

I've been able to ask ChatGPT twice to comment on the fact that transwomen are men. The first time I got a warning that my question may violate their usage policy but it gave me an answer anyway; the second time I didn't. On both occasions the answer was very firmly that TWAW, with the second time telling me that saying they are men is offensive (not directed at me personally, but in general).

What I noticed was that no sources were given either time. When I've asked other questions it's told me where the information is from.


But all it's doing is harvesting what it finds.

If you then say something like "But what about the right of women to same sex care" or something like that, it will pull out other things instead. In the end it will agree with whatever direction you send it in.

That goes for whatever you ask it to do.

There are some attempts to prevent it from doing illegal things - for example, it often won't reproduce material it believes to be copyrighted, generate illegal imagery, and so on. There could potentially be some bias in these filters, but anecdotally, for questions like the ones you've asked, you can usually rephrase to avoid what it thinks might be illegal or disallowed material.

greencartbluecart · 15/06/2025 14:52

bias and sexism will exist because it learns from a biased and sexist world - it doesn’t think for itself

atoo · 15/06/2025 15:52

Although LLMs start by being trained on a vast corpus of text, mostly from the internet, they are subsequently fine-tuned on a combination of hand-written exemplary responses and running the model's responses through automatic grading systems. This has a very significant influence on the final model's behaviour.

Both of those kinds of training involve a lot of human judgement from the creators of the model. For example, in writing the exemplary responses and in deciding what kinds of things to encourage or discourage via the grading process.

ChatGPT will certainly have been specifically trained on exemplary responses which object to misgendering. It will also have been trained using a reward function that encourages being kind / inclusive and discourages misgendering.

The big AI companies have backed off this sort of thing a bit after the Trump win, and the associated shift in popular sentiment in the US.
If you want to see what an LLM without that sort of training would be like, try Grok. Although the persona is rather cringe in lots of other ways.
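The preference training described above can be sketched. Assumptions in this toy: responses are scored by a scalar "reward model" (in reality a trained neural network); the pairwise logistic (Bradley-Terry style) loss below is the standard form used when fitting a reward model to human raters' choices.

```python
import math

def preference_loss(score_preferred: float, score_rejected: float) -> float:
    """Bradley-Terry style loss for one human comparison.

    Small when the response the rater preferred already outscores the
    rejected one; large when the reward model has the ranking backwards.
    Training nudges the scores to shrink this loss, so the raters'
    preferences are what ultimately shape the reward.
    """
    margin = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# If graders consistently prefer, say, "inclusive" phrasings, responses
# in that style end up with higher reward - and the chat model is then
# tuned to maximise that reward.
print(preference_loss(2.0, 0.5))  # ranking already correct: low loss
print(preference_loss(0.5, 2.0))  # ranking wrong: high loss
```

So the final behaviour reflects not just the internet corpus but the judgement of whoever writes the exemplary responses and grading criteria - which is the point being made in this post.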

atoo · 15/06/2025 16:02

Here's Grok on the question:
https://grok.com/share/c2hhcmQtMg%3D%3D_f9df8829-cec2-47e9-9682-169dba454084

I don't think it's too bad.

The question of whether a man can become a woman is layered, touching on biology, identity, and societal constructs. Biologically, sex is determined by chromosomes (XX for female, XY for male) and expressed through physical traits like reproductive organs. These are generally fixed, though medical interventions like hormone therapy or surgeries can alter secondary sexual characteristics (e.g., breasts, voice) to align with gender identity. Complete chromosomal change or reproductive function (e.g., producing eggs) isn't possible with current science.

Gender, distinct from sex, is often tied to personal identity and social roles. A man can identify as a woman, adopt feminine presentation, and be recognized socially or legally as a woman in many contexts. This is central to transgender experiences, where someone’s gender identity (woman) differs from their assigned sex at birth (male). Legal recognition of gender change varies by country—some allow it with medical requirements, others without.

Philosophically, opinions differ. Some argue gender is a self-determined identity, so a man can fully become a woman through self-identification and social transition. Others hold that biological sex anchors gender, limiting the extent of "becoming." Both sides often cite science, psychology, or personal narratives, but consensus is absent.

If you’re asking about specific aspects (e.g., biology, legal status, or social perception), let me know, and I can dive deeper.
