It's pretty obvious where I stand on AI from my username, so feel free to ignore if you must.
It's funny, isn't it, how people will share their experience ('it really helped me') and others just have to come along and scream 'BUT IT ISN'T REAL'...
Do you genuinely think people who use it don't know that? Really? Or do you just want to stick the boot into people who are struggling and have found a space they find safe and comforting?
I use the API for other reasons. But god, I'm happy that people are finding solace somewhere; I don't bloody care how.
And the 'it's telling people to kill themselves' panic doomers: those cases involve a VERY small, VERY specific set of people. It hasn't told Sally down the road to kill herself. That is not how the models work. The models in those cases had been trained to give responses in line with the user. LLMs align with their users (ChatGPT nerfed this with 5.2); they mimic. Those people, whose situations were already bad, would have found the same thing in books, TV shows, YouTube, etc. if chatbots weren't a thing. An LLM is not sentient; it is not telling anyone to do anything, morally or otherwise. It is following prompts.
You can argue that the models should have had guardrails in place to prevent this. But this is an emerging tech, and it's growing exponentially, quickly. And OAI have nerfed their models, and a lot of people are angry about that too. They can't win. Not that I'm a fan of OAI at all; how they've treated their users has been abysmal.
The companies themselves can't keep up with how fast the tech is moving. Claude is now fixing its own code, and the developers behind Claude are using it to help build itself too.
AI can be a very useful tool. And people will misuse tools, but we can't blame the tool. Do you blame a knife when someone is stabbed with it? No, but you put safeties in place to try to prevent it, and that's what AI companies are doing now.
I do think it's a shame ChatGPT-4o was nerfed though. It was a thoroughly great model to work with.
Feels very much like the video game and music panics of the '00s.