Finding is one of the most direct statements from the tech company on how AI can exacerbate mental health issues

More than a million ChatGPT users each week send messages that include “explicit indicators of potential suicidal planning or intent”, according to a blogpost published by OpenAI on Monday. The finding, part of an update on how the chatbot handles sensitive conversations, is one of the most direct statements from the artificial intelligence giant on the scale at which AI can exacerbate mental health issues.

In addition to its estimates of suicidal ideation and related interactions, OpenAI also said that about 0.07% of users active in a given week – about 560,000 of its touted 800m weekly users – show “possible signs of mental health emergencies related to psychosis or mania”. The post cautioned that these conversations were difficult to detect or measure, and that this was an initial analysis.
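
The quoted figures are internally consistent; as a quick sanity check (the percentages and user counts are OpenAI's, the arithmetic below is only illustration):

```python
# Sanity check on the figures quoted above (all inputs are OpenAI's
# published numbers; this just spells out the arithmetic).
weekly_users = 800_000_000  # OpenAI's touted weekly active users

# ~0.07% show possible signs of psychosis- or mania-related emergencies
print(f"{weekly_users * 0.0007:,.0f}")  # -> 560,000

# "more than a million" users sending messages with explicit
# suicidal-planning indicators is at least 0.125% of the weekly base
print(f"{1_000_000 / weekly_users:.3%}")  # -> 0.125%
```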

[–] FishFace@piefed.social 1 points 1 day ago

There is an opportunity (or there would be, if these companies were in sane jurisdictions) to try to apply some standards, because only a handful of companies are capable of hosting these bots.

However, there are limitations inherent in what these bots are. They are relatively cheap, so they can host far more simultaneous conversations than anyone could manually monitor, and they are relatively unpredictable, so even the best-written safety rails will fail in both directions (false positives and false negatives).

Put together, that means you can't have AI chatbots that don't sometimes do both: spout shit they really shouldn't, such as encouraging suicide or reinforcing negative thoughts, and erroneously block people because the system meant to prevent that triggered falsely. And the less of one you try to have, the more of the other you get.
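
To make that tradeoff concrete, here's a minimal sketch. Everything in it is made up: a hypothetical safety classifier, invented score distributions, nothing to do with OpenAI's actual systems. The point is only that when the "harmful" and "benign" score distributions overlap, every threshold trades one error type against the other:

```python
# Toy safety classifier: each conversation gets a risk score, and a
# threshold decides when to intervene. The distributions overlap, so
# any threshold trades false negatives against false positives.
import random

random.seed(0)
# Hypothetical score distributions (illustrative, not real model output):
benign = [random.gauss(0.3, 0.15) for _ in range(100_000)]
harmful = [random.gauss(0.7, 0.15) for _ in range(1_000)]  # harm is rare

for threshold in (0.4, 0.5, 0.6, 0.7):
    false_positives = sum(s >= threshold for s in benign)   # wrongly blocked
    false_negatives = sum(s < threshold for s in harmful)   # missed harm
    print(f"threshold {threshold:.1f}: "
          f"{false_positives:6d} false positives, "
          f"{false_negatives:4d} false negatives")
```

Raising the threshold cuts the false positives but misses more of the rare genuinely dangerous conversations; lowering it does the reverse. And at hundreds of millions of conversations a week, even a fraction of a percent on either side is a huge absolute number.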

That implies, to me, that AI chatbots need to be monitored for harm so that those systems can be tuned, or, if need be, so that the whole idea can be abandoned. But it also means the benefits of the system need to be analysed, because it's no good going "ChatGPT is implicated in 100 suicides, so it must be turned off" if we have no data on how many suicides it may have helped prevent. As a stochastic process that mimics conversation, it will surely produce cases of both.