X said: "We take action against illegal content on X, including CSAM, by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.
"Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content."
You might think it's insufficient. I might think it's insufficient. But that is not "refuses to do anything about it".
That is exactly how all responsible social media platforms deal with illegal content.
Grok is a generative AI model. It has not been programmed with some feature to remove clothes. It's a byproduct of its general capabilities. Powerful GenAI models including Gemini (Google) and DALL-E have the same problem. Not to mention open-weight models, which anyone can run and therefore anyone can use to do the same thing (I don't hear any clamour on this thread to ban those... but why not?)
This is a very new technology and we are grappling with how to deal with it, as a society, to mitigate the harms while capturing the benefits. Different societies, and different people in society, are going to come to different provisional positions on how that should be done. As the technology evolves and as we come to terms with it, I'm sure our median position will evolve and we will work out ways to handle this. We are in for a wild ride over the coming years as AI evolves.
These are serious debates and serious harms, as well as major benefits.
Leaping to "X didn't shut it down so anyone who doesn't want X banned supports child abuse" is not taking this seriously. At least, not unless they are also advocating for the banning of Google (Gemini produces AI nudification images, yet Google haven't pulled the product), Meta (WhatsApp is end-to-end encrypted, making it a great tool for sexual abusers as well as many other types of very serious criminal, yet Meta refuses to shut down that feature), Apple (they refuse to build a back-door into their devices for law enforcement, even highly-restricted systems for security services who are investigating the most severe threats), and a host of other household names. For that matter, at this point every cheap maker of digital cameras could implement software to prevent them taking photos that have a high likelihood of being CSAM, yet (afaik) not a single one does. Why not? If you aren't clamouring for a ban on all digital camera-makers, does that mean you....? etc.
I don't like how X has handled this. But the leap to "ban X or you're a paedo" is just stupid.
I would in fact like to know if @Alexandra2001 uses any of the following, just so I can make the inevitable, infantile accusation and we can be all square: