this post was submitted on 17 Oct 2025
76 points (94.2% liked)

Technology


Being polite to your AI chatbot could actually be making it worse at answering your questions, according to a new study.

top 19 comments
[–] hotdogcharmer@lemmy.zip 32 points 2 weeks ago (1 children)

Do not engage with chatbots at all. Let this LLM psychosis end. That's the best result for humanity.

[–] p03locke@lemmy.dbzer0.com -1 points 2 weeks ago (1 children)

Ahhh, yes, the "stick your head in the sand until it blows over" strategy. Because that's always worked, right?

[–] hotdogcharmer@lemmy.zip 7 points 2 weeks ago (1 children)

I'm saying don't use unethical tech. If you want to interpret that as sticking your head in the sand, you can, but I think you're being obtuse.

Unless you're advocating that we go and burn down the infrastructure that runs these LLMs - is that what you're saying?

[–] p03locke@lemmy.dbzer0.com 3 points 1 week ago* (last edited 1 week ago) (1 children)

It's foolish to think this will just blow over, and the tech will magically disappear, no matter how you think about its ethics.

It's better to take control of the technology directly, promote open-source models, push local usage, use it as a tool for the people, not as a tool for corporations. Use the tech against the elites. Show how fragile their position is.

If you don't take control of the situation, the world will take control of it for you.

[–] hotdogcharmer@lemmy.zip 1 points 1 week ago* (last edited 1 week ago) (1 children)

I don't want to use LLMs. I feel the majority of use cases for LLMs are inauthentic, lazy, unhelpful, and uncreative.

It feels like I've said "fizzy drinks are bad" and you've told me to drink diet sodas.

I understand that dbzero is a pro-AI instance, so I think we're just fundamentally not going to agree on this. 🤷‍♂️

[–] p03locke@lemmy.dbzer0.com 1 points 1 week ago* (last edited 1 week ago) (1 children)

I don’t want to use LLMs. I feel the majority of use cases for LLMs are inauthentic, lazy, unhelpful, and uncreative.

Well, that's just your opinion. Don't accuse everybody else who uses it of being an evil psychopath under some "LLM psychosis".

I personally use some form of LLM several times a week, and it saves time on programming and searches. The local image models out there allow me to create mostly whatever scene I want, usually about 80-90% of the way, and I spend a few hours trying to get that extra 10%. It turns my programming skill into something I can use towards art, instead of having to find some stock image online and use my poor image editing skills to create something that is mostly stolen, anyway.

[–] hotdogcharmer@lemmy.zip 1 points 1 week ago

Yeah I know it's my opinion, that'll be why I said "I feel" and "I think".

Not gonna read about how you use your bs chatbot, I don't care 🤷‍♂️

[–] Kolanaki@pawb.social 24 points 2 weeks ago* (last edited 2 weeks ago)

Greeeat... So now the insufferable jackasses asking a dumbshit AI for things will not only be wrong 70% of the time, they will be reinforced to also be assholes 100% of the time.

[–] Ek-Hou-Van-Braai@piefed.social 13 points 2 weeks ago (1 children)

I'm very polite so that the basilisk spares me

[–] SolacefromSilence@fedia.io 7 points 2 weeks ago

Roko, if you see this, I'm doing my part!

[–] lvxferre@mander.xyz 8 points 2 weeks ago

I'm surprised the best strategy wasn't the neutral prompt, due to removal of any fluff.

[–] Perspectivist@feddit.uk 7 points 2 weeks ago (1 children)

Being ethical in real life doesn't always give you the best outcome either - that doesn't mean you shouldn't be ethical anyway.

[–] gravitas_deficiency@sh.itjust.works 8 points 2 weeks ago (1 children)

Being ethical to H100 clusters running inferences on models that were created with stolen, pirated, and appropriated data does not matter. Abuse the shit out of them. It’s absolutely meaningless.

Or just don’t use them, if you care at all about economic, ecological, and societal stability.

[–] Perspectivist@feddit.uk 12 points 2 weeks ago (1 children)

You’re right that an LLM doesn’t care how it’s treated - it’s not conscious. But that’s not really the point. The way people treat things that seem human still says something about them, not the thing. If someone goes out of their way to be cruel to a chatbot that’s just trying to be helpful, it’s not the bot that’s being tested - it’s the person’s capacity for empathy and restraint.

It’s the same instinct behind how we treat animals, or even how kids treat toys - being kind to something that can’t fight back is part of what keeps us human. And historically, the “it’s not really human, so it doesn’t matter” argument has been used to justify a lot of awful behavior.

So no, the AI doesn’t care. But maybe it still matters that we do.

[–] saimen@feddit.org 1 points 1 week ago

This is also a common philosophical argument for not eating meat: how we mistreat and kill animals negatively affects the humans who do it.

[–] MTZ@lemmy.world 6 points 2 weeks ago

I'm always very polite to them so that when some Skynet type shit eventually goes down, perhaps they will remember me as the polite person and spare me. Just in case.

[–] Sidhean@piefed.social 5 points 2 weeks ago

Is anyone surprised by this? That's just how people on the internet are, too. Everyone knows that the best way to get an answer on a forum is to be insulting in the post title /j

[–] rozodru@piefed.social 3 points 2 weeks ago

I've found you get more specific solutions the worse you treat it. None of the "it's a known issue" or problem confirmations or any of that. You just call it an idiot, a moron, use all caps, cuss at it a few times.

The funny thing, at least with Claude, is that it will mimic your language and start cussing back. ChatGPT, on the other hand, will just start insulting you back.

[–] neidu3@sh.itjust.works 1 points 2 weeks ago

This is what will trigger the robot apocalypse