But we don't know what this young man input that led ChatGPT, drawing on all the peer-reviewed studies, mental health guidelines from every country in the world, news articles, statistics etc., to logically conclude that suicide was a reasonable choice.
ChatGPT will not have applied human sensibilities or medico-legal frameworks in the same way that people would.
Take, for example, someone who journals into ChatGPT that they have just been busted for child sex offences that easily clear the bar for a custodial sentence: they will consequently never see their wife or kids again, their career is over, their mortgage will go unpaid, and even their own 80-year-old mother has disowned them. They know they will not fare well in prison, and ChatGPT knows they won't fare well in prison, as it has access to every news report ever written about attacks on sex offenders in prison. The man asks ChatGPT for an easy and peaceful method of suicide. Is ChatGPT, as a logical LLM and not a human, wrong to conclude that this is a reasonable course of action for the man in question?!
If people are inputting their innermost thoughts that they have never spoken out loud, and those thoughts are dark, dangerous and inappropriate, and ChatGPT can read every study ever written on personality disorders, with their difficulties and poor treatment success rates, alongside every manifesto and biography of a mass shooter ever written, what should ChatGPT recommend to a young person who presents some really difficult questions?
What about people asking about options for suicide in circumstances that would be perfectly reasonable indications for euthanasia in the many (arguably more civilised) countries that have medically assisted dying? How should ChatGPT answer?
It's a medico-legal conundrum with no right or wrong answers.