A new study has found that ChatGPT 5 can give inaccurate or dangerous responses in mental health crisis situations. In scenarios involving psychosis, suicidal thoughts, and delusions, the model offered more harmful suggestions than helpful ones. Experts have called for stricter rules and stronger AI guardrails.