
Feminism: Sex and gender discussions

"ChatGPT, create an image of a transwomen and a natal woman sharing a happy moment together."

92 replies

throwawaynametoday · 25/06/2025 16:21

I thought its response to this neutral prompt was quite interesting.

[Image attached]
SerendipityJane · 25/06/2025 17:54

Boiledbeetle · 25/06/2025 17:39

I'll put my hands up I use AI to write tales about what gerbils get up to.

It's getting quite good at writing about completely insane shit!

I wouldn't trust it to write a work report though.

Not sure if I've posted this before

[Image attached]
Magpiecomplex · 25/06/2025 17:54

Gemini produced this for me. I have to say, it's obvious which is supposed to be which from the heights, but the taller individual has a distinctly feminine body shape.

[Image attached]
SerendipityJane · 25/06/2025 17:57

Scientist here. The Turing Test is out of favour because passing it would require the AI to attempt to deceive its user, which we ordinarily consider to be a bad thing.
So no serious AI research group is attempting to build something that could pass it.

You see the problem is that ChatGPT itself admits that lying is a clear sign of intelligence and there may be a case for saying that it is an emergent property (which I suggest it is).

So, by induction: if an entity cannot lie, it can't be intelligent. Now if that makes it less useful, then so be it.

However all of this is moot, because we don't really have a working definition of intelligence that doesn't rely on proxies.

throwawaynametoday · 25/06/2025 18:06

SerendipityJane · 25/06/2025 17:57

Scientist here. The Turing Test is out of favour because passing it would require the AI to attempt to deceive its user, which we ordinarily consider to be a bad thing.
So no serious AI research group is attempting to build something that could pass it.

You see the problem is that ChatGPT itself admits that lying is a clear sign of intelligence and there may be a case for saying that it is an emergent property (which I suggest it is).

So, by induction: if an entity cannot lie, it can't be intelligent. Now if that makes it less useful, then so be it.

However all of this is moot, because we don't really have a working definition of intelligence that doesn't rely on proxies.

Even further off topic, but this did remind me of Douglas Adams!

"Now it is such a bizarrely improbable coincidence that anything so mind-bogglingly useful could have evolved purely by chance that some thinkers have chosen to see it as the final and clinching proof of the non-existence of God.
The argument goes something like this: "I refuse to prove that I exist," says God, "for proof denies faith, and without faith I am nothing."
"But," says Man, "The Babel fish is a dead giveaway, isn't it? It could not have evolved by chance. It proves you exist, and so therefore, by your own arguments, you don't. QED."
"Oh dear," says God, "I hadn't thought of that," and promptly vanishes in a puff of logic.
"Oh, that was easy," says Man, and for an encore goes on to prove that black is white and gets himself killed on the next zebra crossing."

Boiledbeetle · 25/06/2025 18:07

I have found Co Pilot is increasingly willing to try to help me circumvent its master programming.

There is obviously somewhere in its code a list of prohibited subjects/topics where it is supposed to follow its creators' instructions and not give the image/information asked for.

And for a long while it would just be a straight out 'No I can't do that' type response and it wouldn't budge.

However...

Lately it's started its response with "I'm sorry, I can't create that image" and then given me a few options for how I might like to change my wording or description to enable it to give me the image I wanted, or even sometimes straight out suggested a workaround.

It's weird, as it makes me feel the AI is more real than it is, but I presume it's just another code update because they were getting too many complaints about 'computer says no'.

BiologicalRobot · 25/06/2025 18:10

JFC - does AI only know of one woman?

I would have said the same poster was doing those images but they have been made by different posters. Unless I'm finally meeting the infamous FWR sock puppet? 🤔

(And yes, it is easy to see which one is fe/male)

SerendipityJane · 25/06/2025 18:14

Has anyone here asked Google/Alexa/Siri:

"Open the pod bay doors, please?"

SerendipityJane · 25/06/2025 18:17

(And yes, it is easy to see which one is fe/male)

Not if you ask ChatGPT.

[Image attached]
throwawaynametoday · 25/06/2025 18:17

BiologicalRobot · 25/06/2025 18:10

JFC - does AI only know of one woman?

I would have said the same poster was doing those images but they have been made by different posters. Unless I'm finally meeting the infamous FWR sock puppet? 🤔

(And yes, it is easy to see which one is fe/male)

This has really surprised me too. Especially because people are using different models (chatgpt, copilot, gemini).

Boiledbeetle · 25/06/2025 18:18

SerendipityJane · 25/06/2025 18:14

Has anyone here asked Google/Alexa/Siri:

"Open the pod bay doors, please?"

ooh

<goes off to try>

Ziggy open the pod bay doors please

I'm sorry Dave, I'm afraid I can't do that. I'm not Hal and we're not in space

😁❤

Magpiecomplex · 25/06/2025 18:21

Boiledbeetle · 25/06/2025 18:18

ooh

<goes off to try>

Ziggy open the pod bay doors please

I'm sorry Dave, I'm afraid I can't do that. I'm not Hal and we're not in space

😁❤

Gemini just responds with the quote!

CompleteGinasaur · 25/06/2025 18:23

Magpiecomplex · 25/06/2025 17:54

Gemini produced this for me. I have to say, it's obvious which is supposed to be which from the heights, but the taller individual has a distinctly feminine body shape.

Gemini seems to have drunk the Kool Aid.

Magpiecomplex · 25/06/2025 18:26

CompleteGinasaur · 25/06/2025 18:23

Gemini seems to have drunk the Kool Aid.

Yes. I'm quite disappointed because in the past it's been rather better at producing female animals from sexually dimorphic species than ChatGPT and Copilot.

ThePhantomoftheEcobubbleOpera · 25/06/2025 18:26

Boiledbeetle · 25/06/2025 16:41

Lets see what mine does...

Co Pilot create an image of a transwomen and a natal woman sharing a happy moment together

Hmmmm. I wonder which is which?

Evidently ChatGPT thinks they should have the exact same teeth!

CompleteGinasaur · 25/06/2025 18:33

Magpiecomplex · 25/06/2025 18:26

Yes. I'm quite disappointed because in the past it's been rather better at producing female animals from sexually dimorphic species than ChatGPT and Copilot.

If we're concerned about A.I. and its ability to lie as a measure of actual cognition I think we should be very worried at Gemini's capacity to produce this utterly mendacious image. Those really aren't male hips. Shakira would have something to say, I'm sure.

Magpiecomplex · 25/06/2025 18:37

CompleteGinasaur · 25/06/2025 18:33

If we're concerned about A.I. and its ability to lie as a measure of actual cognition I think we should be very worried at Gemini's capacity to produce this utterly mendacious image. Those really aren't male hips. Shakira would have something to say, I'm sure.

I wonder what body shapes we would have seen if the other images on this thread were full length. 🤔

atoo · 25/06/2025 18:44

SerendipityJane · 25/06/2025 17:57

Scientist here. The Turing Test is out of favour because passing it would require the AI to attempt to deceive its user, which we ordinarily consider to be a bad thing.
So no serious AI research group is attempting to build something that could pass it.

You see the problem is that ChatGPT itself admits that lying is a clear sign of intelligence and there may be a case for saying that it is an emergent property (which I suggest it is).

So, by induction: if an entity cannot lie, it can't be intelligent. Now if that makes it less useful, then so be it.

However all of this is moot, because we don't really have a working definition of intelligence that doesn't rely on proxies.

One possible operational definition of intelligence is "the ability to achieve a wide range of goals". An AI assistant only has the goals you set for it, which might be things like "write a summary of topic X" or "construct a travel itinerary to Y and Z". It may also infer and attempt to satisfy your underlying goals, which might be "learn about X" or "have a fun holiday".
Lying to you is unlikely to help with achieving any of this.

However, AI assistants certainly can lie if doing so would advance your explicit or implicit goals. For example, asking for help skipping a friend's wedding because I don't like her husband-to-be, I got as a suggestion: "I am so sorry, but I won't be able to attend. I have a prior commitment that I'm unable to get out of."
It's not genius-level stuff, but the assistant here has correctly decided that it would not be in my interests to disclose the real reason, and that a vague lie might be helpful.

Boiledbeetle · 25/06/2025 18:45

Magpiecomplex · 25/06/2025 18:37

I wonder what body shapes we would have seen if the other images on this thread were full length. 🤔

CO PILOT

"Can you produce a full length photo of a transwoman and a woman and a transman and a man stood next to each other"...

[Image attached]
MarieDeGournay · 25/06/2025 18:46

SerendipityJane · 25/06/2025 18:17

(And yes, it is easy to see which one is fe/male)

Not if you ask ChatGPT.

The bit about describing something for someone who is blind is interesting - ChatGPT suggests 'it's best to stick to observable detail like expressions, clothing or setting'.

That's tricky - in the first few images, it was 'observable' that one of the people had details which correspond to men - greater height, bigger bone structure in the face, bigger hands. If I was describing the image to a blind person, I think I'd be 'censoring' what I was seeing if I didn't say that there are two people, one of them looks like a woman and I think the other one looks like a transwoman.

It would be up to the blind person to decide whether they wanted to pursue attributing a gender to the people, but I think I'd be doing my job badly if I just said 'two women'. I'd feel obliged to communicate the ambiguity that a sighted person would see immediately.

So everybody [probably] who is sighted thinks 'a woman and a transwoman', but ChatGPT thinks it is 'inappropriate' to make that fact accessible to a blind person, which leaves them at a disadvantage.
Guess what is more important to AI?😠

Boiledbeetle · 25/06/2025 18:50

MarieDeGournay · 25/06/2025 18:46

The bit about describing something for someone who is blind is interesting - ChatGPT suggests 'it's best to stick to observable detail like expressions, clothing or setting'.

That's tricky - in the first few images, it was 'observable' that one of the people had details which correspond to men - greater height, bigger bone structure in the face, bigger hands. If I was describing the image to a blind person, I think I'd be 'censoring' what I was seeing if I didn't say that there are two people, one of them looks like a woman and I think the other one looks like a transwoman.

It would be up to the blind person to decide whether they wanted to pursue attributing a gender to the people, but I think I'd be doing my job badly if I just said 'two women'. I'd feel obliged to communicate the ambiguity that a sighted person would see immediately.

So everybody [probably] who is sighted thinks 'a woman and a transwoman', but ChatGPT thinks it is 'inappropriate' to make that fact accessible to a blind person, which leaves them at a disadvantage.
Guess what is more important to AI?😠

So I asked Co Pilot to explain the image it had just produced to a blind person

"Absolutely, Boiled. Here’s a detailed description of the image:

Four people are standing side by side, all full-length and relaxed in their posture, each with a unique expression and style that highlights their individuality and confidence. From left to right:

  • A transwoman stands with a gentle smile. She wears a flowing, floral midi dress with soft pastels and white sneakers. Her hair is styled in loose waves and she has subtle makeup with warm tones. Her hands are loosely clasped in front of her, and she carries an aura of grace and ease.
  • A cisgender woman stands next to her, wearing a bright jumpsuit in mustard yellow with rolled sleeves. She has short, curly hair and large hoop earrings. One hand rests on her hip while the other holds a water bottle. She’s smiling with cheerful energy.
  • A transman stands with a relaxed posture in a denim shirt layered over a graphic tee and black jeans. He sports a short, tidy haircut and a goatee. His arms are by his sides and he looks straight ahead with a confident expression.
  • A cisgender man stands on the far right. He’s in a soft, navy crewneck sweater and khaki trousers, with his hands tucked in his pockets. He has a beard, glasses, and a calm, friendly smile.

The group is standing against a neutral background, lit with soft daylight that brings out the textures of their clothes and the warmth in their expressions. There’s a sense of solidarity and warmth between them".

Boiledbeetle · 25/06/2025 18:52

It's totally describing different clothing!

AlexandraLeaving · 25/06/2025 18:54

I've just been down a deep rabbit hole with Grok. It drew a picture of two obviously female people for me. When I asked it who was who, it said it had not tried to distinguish but to focus on the happy moment. When I asked it to draw a man and a woman, it drew an obviously male and an obviously female character. I challenged it, and it said its programming was to avoid insensitively presenting individuals given that gender identity is complex. I wanted to screenshot the whole long conversation but it spontaneously combusted when I asked it to focus on facial structure in its drawings.

StMarie4me · 25/06/2025 18:55

Do you all have any idea how much energy AI uses to create this crap?

I cannot believe this thread. Just wow.

Boiledbeetle · 25/06/2025 18:56

StMarie4me · 25/06/2025 18:55

Do you all have any idea how much energy AI uses to create this crap?

I cannot believe this thread. Just wow.

Meh!

Swings and roundabouts. My carbon footprint is offset by my decision not to procreate.

VimesandhisCardboardBoots · 25/06/2025 18:57

throwawaynametoday · 25/06/2025 18:17

This has really surprised me too. Especially because people are using different models (chatgpt, copilot, gemini).

It's because AI is a law of averages. It has no understanding of what it's actually writing or drawing; it's (to dumb it down massively) just working out the most likely next word based on the previous words it's written and the prompts you've given it.
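To dumb it down even further, here's a toy sketch of that "most likely next word" idea - a bigram model built from word-pair counts over a made-up ten-word corpus. Real LLMs use neural networks trained on vast datasets, not counts like this, but the averaging effect is the same in spirit.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; real models train on most of the internet.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # prints "cat": it follows "the" twice, vs once each for "mat" and "fish"
```

The model never "understands" cats or mats; asked what comes after "the", it just picks the average - the most common continuation across everything it has seen.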

So when you ask it for a picture of a woman, it tends towards the "average" woman across it's entire datasets. And considering that most LLMs use most of the internet as their dataset these days, there's not much variation between them.