
Feminism: Sex and gender discussions

AI is really worrying and I fear this sort of image generation is tip of the iceberg

265 replies

mids2019 · 02/01/2026 21:22

https://www.bbc.co.uk/news/articles/c98p1r4e6m8o

I don't know to what extent current legislation covers this but to my mind any woman with an image on the net could be prone to this. Are we going to reach a stage where our daughters are going to simply not want any image taken of them for fear of how it could be manipulated?

A woman looks back over her shoulder, wearing red lipstick and gold hoops, in front of a Christmas tree

Woman felt 'dehumanised' after Musk's Grok AI used to digitally remove her clothes

The BBC has seen several examples of it undressing women and putting them in sexual situations without their consent.

TempestTost · 05/01/2026 02:46

IwantToRetire · 04/01/2026 20:16

The point is that it is humans who created AI.

And just as in the fictional world of robotics there were laws that would be programmed into a robot, e.g. don't kill humans, you could and should do the same with AI.

Admittedly, as with bringing up a child, you would hope that telling it not to produce pornography or depict gross acts of violence would be enough, but there will always be rogue or underground versions willing to do just that.

The real fear is not that this couldn't be done (or isn't being done because the dominant male culture just isn't bothered about it), but that at some time in the future we will all have become so dependent on AI that we no longer know how to think or create for ourselves, and rely on AI to do our thinking for us.

There have been newspaper stories, which may or may not be true, of AI programs becoming independent of human control.

And that no one will know how to challenge that.

i.e. the sort of future shown in the film Idiocracy, where humans rely on preprogrammed systems and have no idea how they are created or how to change them.

And that will happen if we as humans choose not to challenge the dominant male narrative.

That is rather like saying people should make movie cameras that are built so they won't film pornographic material.

The capacity that AI has to build images or words is not actually intelligence, it's just patterns. Anything you do to try and prevent it producing or reproducing certain patterns is after the fact - it's not something that you can build in in itself.

I really get the sense that a lot of people don't understand that there really isn't any kind of intelligence involved. It doesn't know or understand what it's doing, it doesn't produce anything with any sort of meaning or intent.

VimesandhisCardboardBoots · 05/01/2026 13:53

TempestTost · 05/01/2026 02:46


Exactly.

There's no intelligence involved in AI, it's just a beefed up version of the predictive text that's been sat on your phone for a decade. It doesn't know what it's saying, it doesn't know what images it's creating, it's not thinking at all. It's just taking an average of millions of photos that match the prompt you've asked it to create.

Content blocking isn't actually done by the AI itself, because it can't tell what it's creating. Instead, content blocking is implemented as checks either before or after the generation of the image.

I've had to do a fair bit of testing the limits of these LLMs for work, and in the case of Grok, it looks like it's doing both. Before it starts generating the image, it'll run a check on the input text, and if there are too many words that are on the block list, it won't generate the image at all. So stick the word "nude" in your prompt, it'll probably get blocked. "Lingerie" seems not to set it off, but "underwear" does. "Child" on its own is fine, but in combination with other words gets blocked, where the same prompt would be fine for an adult. Obviously it's actually more complicated than that.

This filter is fairly easy to circumvent. For example, if "lingerie" is blocked, "red sheer lacy bikini" won't be, despite the actual output looking like lingerie.
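The block-list stage described above can be sketched in a few lines. To be clear, the word list and matching rules here are invented for illustration, not Grok's actual filter:

```python
# Toy prompt-level blocklist filter. The word list is a made-up example.
BLOCKLIST = {"nude", "underwear"}

def prompt_allowed(prompt: str) -> bool:
    """Refuse generation if any blocklisted word appears in the prompt."""
    words = prompt.lower().split()
    return not any(w.strip(".,!?") in BLOCKLIST for w in words)

# A listed term is caught...
print(prompt_allowed("woman in underwear"))              # False
# ...but a paraphrase sails straight through:
print(prompt_allowed("woman in red sheer lacy bikini"))  # True
```

Which shows exactly why "red sheer lacy bikini" gets past a filter that blocks "lingerie": the check only sees the words, never the picture.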

If the prompt passes the first filter, then the image gets generated, but not yet shown to the user (Grok shows an extremely blurred out version). The image is then put through another filter. I reckon what's happening here is that the image is passed through another AI model designed to describe images as text, and the same filter as before is run on that resultant text. (This is also how I reckon Mumsnet's automated image moderation works.)

This second check should catch anything missed by the first check, and is harder to get around using tricky wording in the initial prompt, because it's not checking that wording, but the resultant image. But it's not perfect, and is at risk of brute forcing (using the same prompt multiple times in the hope that one will get through the moderation). For instance, what is described as a girl multiple times may get described as a young woman once, and that one will get through moderation.
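Putting the two stages together, the pipeline described above might look something like this sketch; `generate_image` and `caption_image` are hypothetical stand-ins, not real APIs:

```python
# Toy two-stage moderation pipeline: filter the prompt, generate the image,
# caption the image with a second model, then re-run the text filter on the
# caption. The blocklist and both model functions are illustrative only.
BLOCKLIST = {"nude", "underwear"}

def text_filter(text: str) -> bool:
    return not any(w.strip(".,!?") in BLOCKLIST for w in text.lower().split())

def moderated_generate(prompt, generate_image, caption_image):
    if not text_filter(prompt):      # stage 1: check the prompt wording
        return None
    image = generate_image(prompt)   # image exists, but isn't shown yet
    caption = caption_image(image)   # stage 2: describe the image as text...
    if not text_filter(caption):     # ...and filter that description
        return None
    return image

# Tricky wording passes stage 1, but the caption betrays the output:
result = moderated_generate(
    "woman in red sheer lacy bikini",
    generate_image=lambda p: "<image bytes>",
    caption_image=lambda img: "a woman in underwear",
)
print(result)  # None - caught by the second filter
```

And the brute-forcing weakness falls straight out of this design: if the captioning model only describes the image as "underwear" nine times out of ten, the tenth attempt gets through.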

At the end of the day, these models are trained on data hoovered up from all over the internet, and that is going to involve a lot of porn, because the internet is made up of a lot of porn. So they're always going to be able to generate porn. Where Grok differs from ChatGPT etc. is that the filtering tools seem to be less strict, so more dodgy stuff is getting through them. And that edict I reckon comes straight from Musk, because it lends itself to all his "free speech" bollocks.

SerendipityJane · 05/01/2026 13:58

VimesandhisCardboardBoots · 05/01/2026 13:53


It was possible to bypass it by solarising the image (for example), which dodges all the "naughty colours" filters (which begs the question: how many shades of skin are they worried about?), and then de-solarising it in Photoshop.

There will be other methods if this one has been "fixed".

VimesandhisCardboardBoots · 05/01/2026 14:06

SerendipityJane · 05/01/2026 13:58


Yep, there was another one where if you asked it to add a frame to the image composed of anime-style cartoon women, it was far more likely to let you generate full realistic nudity as the main image / video. Presumably because they'd put an explicit exception in the filtering to allow cartoon nudity, and as a result the filters were loosening the restrictions as soon as they saw "anime" in the prompt. I think that one's been blocked now as well.

Which illustrates why there's always going to be a way round these filters. Pervs are endlessly inventive, so it's like playing whack-a-mole, trying to block the latest workaround they've found.

SerendipityJane · 05/01/2026 14:14

Which illustrates why there's always going to be a way round these filters. Pervs are endlessly inventive, so it's like playing whackamole

Very droll, minister.

Grammarnut · 05/01/2026 18:27

BrokenSunflowers · 04/01/2026 20:59

AI at the moment is a misnomer: there is no intelligence involved. It is merely predictive, based on what has gone before. But just as images have become so corrupted that we no longer know which image is true and assume none are, so too will it be the case with other information. We may well be entering a new information dark age.

Which is why we should hold on to paper copies of everything! Very difficult to alter a printed book etc. Extremely easy to alter a digital book.

Christinapple · 05/01/2026 18:33

https://x.com/Ofcom/status/2008201578378084550

Ofcom have made a statement on Elon Musk's Grok.

Ofcom are aware of, and investigating, the fact that Grok is capable of producing, and being used to produce, non-consensual sexual and undressed images of real people and children.

Given that the UK's Online Safety Act is now in place, this, combined with a virtually complete lack of moderation on Twitter, raises questions as to whether Twitter as it currently stands is compliant with UK law.

BrokenSunflowers · 05/01/2026 18:43

There is moderation on X, just not the sort of moderation you want.

Christinapple · 05/01/2026 18:43

BrokenSunflowers · 05/01/2026 18:43

There is moderation on X, just not the sort of moderation you want.

Violent threats and racist/homophobic abuse are not removed.

BrokenSunflowers · 05/01/2026 18:58

Christinapple · 05/01/2026 18:43

Violent threats and racist/homophobic abuse are not removed.

You mean like men claiming they are lesbians? You should see BlueSky!

Christinapple · 05/01/2026 19:07

BrokenSunflowers · 05/01/2026 18:58

You mean like men claiming they are lesbians? You should see BlueSky!

No, I mean violent threats, as I said. I've seen posts on Twitter talking about setting Muslims on fire which aren't removed when reported, just for one example. These types of posts are celebrated by other Twitter users who find it hilarious.

Twitter is in breach of the Online Safety Act and should be blocked in the UK.

RapidOnsetGenderCritic · 05/01/2026 19:56

Christinapple · 05/01/2026 18:43

Violent threats and racist/homophobic abuse are not removed.

I agree. Violent threats against women are apparently fine according to X moderators.

SerendipityJane · 06/01/2026 09:46

If they weren't boycotting X before, they won't now.

His Majesty's Government are on X so clearly endorse it.

Maddy70 · 06/01/2026 09:59

I agree, OP. There needs to be strict legislation around this.

BrokenSunflowers · 06/01/2026 10:36

SerendipityJane · 06/01/2026 09:46

If they weren't boycotting X before, they won't now.

His Majesty's Government are on X so clearly endorse it.

Remember Starmer was heavily involved in removing prison sentences for many paedophiles. And the Labour government are now overseeing a sentencing policy that allows men who are repeatedly caught with thousands of real pictures of child abuse - pictures those men are driving the abuse to create - to walk free from court. If the government were serious about child abuse then they should do more about those men, and should not have needed to be forced into setting up an inquiry into grooming gangs (which they are now desperately trying to neuter). Complaining about one particular AI programme seems like deflection.

Christinapple · 06/01/2026 12:12

Schools and other orgs shouldn't be using the pedo-platform officially either.

It's time to email our MSPs and orgs to inform them Twitter is unsafe for children and not an appropriate means of professional official communication with the public or for their students.

SerendipityJane · 06/01/2026 12:17

Christinapple · 06/01/2026 12:12


I repeat: If anyone is on X now, then this isn't going to cause them to leave.

Is your favourite brand on X? If so, then why is it still your favourite brand?

Christinapple · 06/01/2026 12:31

SerendipityJane · 06/01/2026 12:17

I repeat: If anyone is on X now, then this isn't going to cause them to leave.

Is your favourite brand on X? If so, then why is it still your favourite brand?

It might. This is big news, and other countries are losing their minds and threatening legal action over EM and Twitter. Now that there is revenge porn and CP involved, people will start to worry and pay attention.

I also saw someone on Twitter post revenge AI porn of a group of women Ofcom managers in response to Ofcom's tweet. That was of course archived and reported to Ofcom for their attention.

EM has been acting like a spoiled brat who hasn't had to answer to anyone since the day he was born with a silver spoon, or should I say emeralds, in his mouth. Hopefully this time there will be consequences.

Fun fact: I lost EM a Twitter sponsor by letting them know their ads were alongside vile racism. They replied saying they were unaware of this and would suspend advertising immediately. Small steps help.

SerendipityJane · 06/01/2026 13:56

Christinapple · 06/01/2026 12:31


I'll believe it when I see it.

The problem is X is making so much money for so many people* that any attempt to regulate it will simply result in sustained unrelenting pressure to remove the governments that try in favour of those that don't. Remember how Elon Musk likes to buy elections these days instead of football teams.

*Well, advertisers, really.

BrokenSunflowers · 06/01/2026 14:12

You think BlueSky is better?

SerendipityJane · 06/01/2026 15:40

BrokenSunflowers · 06/01/2026 14:12

You think BlueSky is better?

Well I can live without that too.

The only SM that has really struck me as truly useful of late is WhatsApp groups. Specifically WhatsApp groups that overlay a natural community - for example the one for our little street. No spamming. No spurious political agendas. Just support and advice over recycling.

ErrolTheDragon · 06/01/2026 17:16

SerendipityJane · 06/01/2026 15:40


WhatsApp is definitely useful, I’m not sure it’s really ‘social media’ as such. It’s more like email, a useful means of communicating with specific groups of people.

AppropriateAdult · 06/01/2026 17:19

I think the natural ‘solution’ to this, as some posters have alluded to already, will be an eventual scepticism of the authenticity of all online images, plausible deniability, and a sort of divorcing of people from their images in the public domain. Really I can’t get worked up about the idea of some loser getting his jollies from a digitally altered photograph of me - crack on, Derek, as long as I never have to hear about it. The idea that we can control how people use images of us, or that we have any sort of responsibility to do so, has long been debunked.

SerendipityJane · 06/01/2026 17:35

ErrolTheDragon · 06/01/2026 17:16

WhatsApp is definitely useful, I’m not sure it’s really ‘social media’ as such. It’s more like email, a useful means of communicating with specific groups of people.

It's a little more subtle than that, as it's linked to a device, and thus a person. It can also cover arbitrary groupings in a way email lists can't. (So my neighbour created a group for some work he is having done on his house that includes me, the builder and himself as they want access via our property).

Social media is - by the very nature of things - evolving in the same way that printing evolved. Driven by need and use.

Eventually, assuming we dodge the impending apocalypse, I can see the rise of a platform or service that will exist purely because it is trusted, and that trust will be its USP. It could have been Twitter - who knows? Personally I suspect it has yet to be imagined.