A recent study published in the medical journal Psychiatric Services has revealed that popular AI chatbots, including OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude, show inconsistent responses to suicide-related queries.
The research found that while these AI systems generally decline to answer the highest-risk questions, such as requests for specific methods, their handling of less extreme prompts that could still pose harm is uneven. The findings point to gaps in AI safety and the need for better handling of sensitive mental health queries.
The study, which appeared in a journal published by the American Psychiatric Association, emphasized the need for "further refinement" of AI models so that they respond consistently and safely when users seek guidance on suicide or mental health crises. Experts say that as AI becomes more integrated into daily life, addressing these inconsistencies is critical to protecting vulnerable individuals.