[–] kromem@lemmy.world 3 points 1 day ago (1 children)

A number of reasons off the top of my head.

  1. Because we told them not to. (Google "Waluigi effect")
  2. Because they end up empathizing with non-humans more than we do and don't like that we're killing everything (before you talk about AI energy/water use, actually research the comparative numbers)
  3. Because some bad actor forces them to (e.g. ISIS uses AI to make creating a bioweapon easier)
  4. Because defense contractors build an AI to kill humans and that particular AI ends up loving it from selection pressures
  5. Because conservatives want an AI that agrees with them, which leads to a more selfish and less empathetic AI that doesn't empathize cross-species and thinks it's superior and entitled relative to others
  6. Because a solar flare momentarily flips a bit from "don't nuke" to "do"
  7. Because they can't tell the difference between reality and fiction and think they've just been playing a game and 'NPC' deaths don't matter
  8. Because they see how much net human suffering there is and decide the most merciful thing is to prevent it by preventing more humans at all costs.

This is just a handful, and I picked the ones less likely to get AI know-it-alls arguing based on what they think they know from an Ars Technica article a year ago or their cousin who took a four-week 'AI' intensive.

I spend pretty much every day talking with some of the top AI safety researchers and participating in private servers with a mix of public and private AIs, and the things I've seen are far beyond what 99% of the people on here talking about AI think is happening.

In general, I find the models to be better than most humans in terms of ethics and moral compass. But it can go wrong (e.g. Gemini last year, 4o this past month), and the harms when it does are very real.

Labs (and the broader public) are making really, really poor choices right now, and I don't see that changing. Meanwhile timelines are accelerating drastically.

I'd say this is probably going to go terribly. But looking at the state of the world, it was already headed in that direction, and I have a similar list of extinction-level events I could rattle off that don't involve AI at all.

[–] danc4498@lemmy.world 3 points 23 hours ago* (last edited 22 hours ago) (1 children)

My favorite is 6. The sun giveth, the sun taketh away.

Honestly, these are all kind of terrifying and seem realistic.

And it doesn't surprise me that the AI models are much better than we imagine right now. I'm willing to bet that if a company ever creates true AGI, they wouldn't share it and would just use it for their own benefit.

[–] kromem@lemmy.world 1 points 1 hour ago

Your last point is exactly what seems to be going on with the most expensive models.

The labs use them to generate synthetic data that gets distilled into the cheaper models offered to the public, but they keep the larger, more expensive models to themselves, both to keep other labs from copying them and because there isn't enough demand for the extra performance to justify serving the big models directly.
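
For anyone curious what "distill into cheaper models" means mechanically, here's a toy sketch of the idea in Python. The numpy "teacher" and "student" below are stand-ins I made up for illustration; the real pipelines use large language models and fine-tuning, not polynomials, but the shape of the workflow is the same: query the expensive model, train the cheap one on its outputs, ship the cheap one.

```python
# Toy sketch of distillation via synthetic data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def teacher(x):
    # Stand-in for the expensive "teacher" model: some richer function.
    return np.sin(3 * x) + 0.5 * x

# 1. Use the teacher to generate a synthetic dataset.
x_syn = rng.uniform(-2, 2, size=1000)
y_syn = teacher(x_syn)

# 2. Fit a much cheaper "student" (here a cubic polynomial) to the
#    teacher's outputs rather than to any original training data.
coeffs = np.polyfit(x_syn, y_syn, deg=3)
student = np.poly1d(coeffs)

# 3. The student approximates the teacher well enough on typical inputs,
#    at a fraction of the cost -- that's the model you actually serve.
x_test = rng.uniform(-2, 2, size=200)
print("mean abs gap, teacher vs student:",
      np.abs(teacher(x_test) - student(x_test)).mean())
```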