SAN FRANCISCO (AP) — A study of how three popular artificial intelligence chatbots respond to queries about suicide found that they generally avoid answering the questions that pose the highest risk to the user, such as requests for specific how-to guidance. But they are inconsistent in their replies to less extreme prompts that could still harm people.

The study, published Tuesday in a medical journal of the American Psychiatric Association, found a need for "further refinement" in OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude.
