this post was submitted on 03 Jun 2025
390 points (98.3% liked)


The moderators of a pro-artificial intelligence Reddit community announced that they have been quietly banning “a bunch of schizoposters” who believe “they've made some sort of incredible discovery or created a god or become a god,” highlighting a new type of chatbot-fueled delusion that started getting attention in early May.

[–] pennomi@lemmy.world 42 points 4 days ago (1 children)

Turns out it doesn’t really matter what the medium is: people will abuse it if they don’t have a stable mental foundation. I’m not shocked at all that a person who would believe a flat earth shitpost would also believe AI hallucinations.

[–] Bouzou@lemmy.world 4 points 3 days ago (1 children)

I dunno, I think there's credence to treating it as a genuine worry.

Like with an addictive substance: yeah, some people are going to be dangerously susceptible to it, but that doesn't mean there shouldn't be any protections in place...

Now what the protections would be, I've got no clue. But I think a blanket "they'd fall into psychosis anyway" is a little reductive.

[–] pennomi@lemmy.world 8 points 3 days ago (1 children)

I don’t think I suggested it wasn’t worrisome, just that it’s expected.

If you think about it, AI is tuned using RLHF, or Reinforcement Learning from Human Feedback. That means what the model is really optimizing for is “convincingness”, i.e. whatever human raters approve of. It doesn’t optimize for intelligence; anything that seems like intelligence is literally just a side effect as it forever marches onward towards becoming more convincing to humans.
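To make that concrete, here's a toy sketch in Python (purely illustrative, not anyone's actual training code): a one-parameter "reward model" fit from simulated pairwise preferences, where the simulated rater simply picks whichever answer sounds better. The learned reward ends up tracking persuasiveness and carries no signal about truth. All names and numbers are invented for the example.

```python
# Toy sketch, not any real lab's pipeline: a stand-in "reward model" fit from
# simulated pairwise human preferences. If the raters reward what *sounds*
# good, the learned reward tracks persuasiveness and says nothing about truth.
import math
import random

random.seed(0)

# Each candidate answer has two hidden traits: whether it's true, and how
# persuasive it sounds. The learner only ever sees which answer a rater picked.
answers = [{"true": random.random() < 0.5, "persuasive": random.random()}
           for _ in range(200)]

def rater_prefers(a, b):
    """Simulated human feedback: pick whichever answer sounds better,
    regardless of whether it's true."""
    return a if a["persuasive"] > b["persuasive"] else b

# One-parameter reward model: reward = weight * persuasiveness, fit with a
# Bradley-Terry / logistic update on each observed preference.
weight, lr = 0.0, 0.5
for _ in range(5000):
    a, b = random.sample(answers, 2)
    winner = rater_prefers(a, b)
    loser = b if winner is a else a
    diff = winner["persuasive"] - loser["persuasive"]
    p_win = 1.0 / (1.0 + math.exp(-weight * diff))  # predicted P(winner beats loser)
    weight += lr * (1.0 - p_win) * diff             # nudge toward matching the rater

def avg_reward(group):
    return sum(weight * a["persuasive"] for a in group) / len(group)

true_answers = [a for a in answers if a["true"]]
false_answers = [a for a in answers if not a["true"]]
print(f"learned weight on persuasiveness: {weight:.2f}")               # clearly positive
print(f"avg reward, true answers:  {avg_reward(true_answers):.3f}")
print(f"avg reward, false answers: {avg_reward(false_answers):.3f}")   # about the same
```

Real RLHF pipelines obviously train a neural reward model on much richer signals than this, but the shape of the incentive is the same: the optimization target is whatever the raters reward.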

“Hey, I’ve seen this one before!” you might say. Indeed, this is exactly what happened with social media: the platforms optimized for “engagement”, not truth, and now they're eroding the minds of lots of people everywhere. AI will do the same thing if it's run by corporations in search of profits.

Left unchecked, it’s entirely possible that AI will become the most addictive, seductive technology in history.

[–] Bouzou@lemmy.world 4 points 3 days ago

Ah, I see what you're saying -- that's a great point. It's designed to be entrancing AND designed to actively try to be more entrancing.