Can we not be so quick to 'call out' posts which appear to be AI generated, reporting them and more often than not getting them deleted?
I am no fan of AI, but understand that many people now use it for many things, apparently also to create and respond to posts on MN.
It may be tedious and impersonal, but if someone feels more confident constructing an argument or organising their thoughts in an opening post using ChatGPT, is that really so bad (providing the poster is a person and not a bot)?
It seems contradictory that a woman asking for advice about something deeply personal, in mangled syntax with a smattering of spelling mistakes and no paragraphs or basic punctuation, basically gets eye-rolled into oblivion with trite remarks like "Couldn't understand a word of that!", while posts which appear to display the hallmarks of AI (a certain slickness coupled with grammatical structures straight out of a Govian SPaG paper) get reported.
To be clear, I think casual use of AI is environmentally irresponsible and would much rather it wasn't 'a thing', but calling out its use on a site like this, where a cohort of posters repeatedly shit on others whose erudition isn't deemed up to scratch, isn't constructive or inclusive.
@MNHQ On threads where there have been 'accusations' of AI-generated posts and MN has closed the thread 'to take a look', what actually happens?