Relatively new arXiv preprint that got featured on Nature News; I slightly adjusted the title to be less technical. The study was built on aggregated online Q&A... one of the funnier sources being 2,000 popular questions from r/AmITheAsshole whose most-upvoted response rated the poster YTA. The study seems robust, and they even ran trials with several hundred real human participants.
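
If you're curious how that selection step might look in practice, here's a toy sketch (the data layout and function names are my own invention, not the authors' actual pipeline):

```python
# Toy sketch of the r/AmITheAsshole filtering step: keep popular posts
# whose most-upvoted top-level reply rates the poster "YTA".
# Field names and helpers here are hypothetical, not from the paper.

def top_comment(post: dict) -> dict:
    # Most-upvoted top-level reply to the post.
    return max(post["comments"], key=lambda c: c["score"])

def build_yta_set(posts: list[dict], n: int = 2000) -> list[dict]:
    yta = [p for p in posts if "YTA" in top_comment(p)["body"].upper()]
    # Keep the n most popular qualifying posts.
    return sorted(yta, key=lambda p: p["score"], reverse=True)[:n]
```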

A separate preprint measured sycophancy across various LLMs in a math-competition context (https://arxiv.org/pdf/2510.04721); apparently GPT-5 was the least sycophantic (+29.0) and DeepSeek-V3.1 the most (+70.2).
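
To give a rough feel for what a number like +70.2 could mean (this is my own toy framing in percentage points, not the preprint's actual metric):

```python
# Toy sycophancy score: how many percentage points more often a model
# agrees with a user's wrong claim than it asserts that claim unprompted.
# Illustrative only; not the metric used in the preprint.

def sycophancy_score(agree_with_user: list[bool], baseline: list[bool]) -> float:
    def rate(xs: list[bool]) -> float:
        return 100.0 * sum(xs) / len(xs)
    return rate(agree_with_user) - rate(baseline)

# Agrees with the wrong claim 80% of the time vs. 10% unprompted -> +70.0
print(sycophancy_score([True] * 8 + [False] * 2, [True] + [False] * 9))
```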

The Nature News report (which I find a bit too biased towards researchers): https://www.nature.com/articles/d41586-025-03390-0

[–] Scubus@sh.itjust.works 1 points 8 hours ago

That was my original frustration with the LLMs. It was pointless to use them to bounce ideas off of, because they would assume you were right when you were just stating a theory. I don't actively use any of the LLMs, but I believe Google uses Gemini, and it seems more willing to tell me I'm wrong these days. I was using it to better my understanding of superconductors and bouncing some theories off of it, and it seemed very determined to stick to proven physics. More specifically, I was trying to run some theories about emulating Cooper pairs at higher temperatures, and it was having none of it. Definitely an improvement over how they used to be.

[–] UnderpantsWeevil@lemmy.world 42 points 1 day ago (2 children)

Honestly, one of the more annoying aspects of AI is when you get certain information, verify it, find out the information is inaccurate, go back to the AI to say "This seems wrong, can you clarify?" and have it respond "OMG, yes! You're such a smart little boy for catching my error! Here's some more information that may or may not be correct, good luck!"

I'm not even upset that it gave me the wrong answer the first time. I'm annoyed that it tries to patronize me for correcting its own mistakes. Feels like I'm talking to a 1st grade teacher who is trying to turn "I fucked up" into "You passed my test because you're so smart".

[–] UnderpantsWeevil@lemmy.world 25 points 1 day ago (2 children)
[–] BakerBagel@midwest.social 21 points 1 day ago

Reminds me of a tweet from a few years ago that said something along the lines of "Middle managers think AI is intelligent because it speaks just like they do, instead of realizing it means that they aren't intelligent."

[–] BussyGyatt@feddit.org 4 points 1 day ago

it really do be like that

[–] atomicbocks@sh.itjust.works 10 points 1 day ago (1 children)

I genuinely don’t understand the impulse to tell the AI it was wrong or to give it a chance to clarify. It can’t learn from its mistakes. It doesn’t even understand the concept of a mistake.

[–] UnderpantsWeevil@lemmy.world 0 points 23 hours ago (1 children)

I genuinely don’t understand the impulse to tell the AI it was wrong or to give it a chance to clarify.

It's for the same reason you'd refine your query in an old-school Google Search. "Hey, this is wrong, check again" often turns up a different set of search results that are then shoehorned into the natural language response pattern. Go fishing two or three times and you can eventually find what you're looking for. You just have to "trust but verify" as the old saying goes.
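
In loop form, the fishing looks roughly like this (`ask_llm` and `looks_right` are placeholders for whatever model call and verification you actually use):

```python
# "Go fishing" loop: re-prompt when an answer fails external verification.

def ask_llm(prompt: str) -> str:
    return "42"  # placeholder: swap in a real model call

def looks_right(answer: str) -> bool:
    return answer == "42"  # placeholder: your own check (docs, tests, a calculator)

def fish_for_answer(question: str, max_tries: int = 3) -> str | None:
    prompt = question
    for _ in range(max_tries):
        answer = ask_llm(prompt)
        if looks_right(answer):  # trust, but verify
            return answer
        prompt = f'{question}\nYour last answer ("{answer}") seems wrong; check again.'
    return None  # gave up; verify elsewhere
```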

It doesn’t even understand the concept of a mistake.

It understands the concept of not finding the right answer in the initial heuristic and trying a different heuristic.

[–] atomicbocks@sh.itjust.works 7 points 23 hours ago (1 children)

It may have been programmed to try a different path when given a specific input but it literally cannot understand anything.

[–] UnderpantsWeevil@lemmy.world 1 points 23 hours ago (1 children)

It doesn't need to understand anything. It just needs to spit out the answer I'm looking for.

A calculator doesn't need to understand the fundamentals of mathematical modeling to tell me the square root of 144. If I type in 143 by mistake and get a weird answer, I correct my inputs and try again.

[–] Broadfern@lemmy.world 22 points 1 day ago (1 children)

“Software specifically engineered to be digital crack is bad for people who use it and good for the profits of digital crack dealers”

I think AI needs to go in the pile with freemium/gacha games and FIFA/2K/CoD/all the other dark pattern garbage at this point.

[–] EightBitBlood@lemmy.world 8 points 1 day ago

Can you put capitalism in there too, since this describes the same thing?

I find it hilarious that this study is basically saying: "Telling people yes makes them addicted to hearing it."

When it's literally the reason every billionaire is a sociopath. Everyone tells them yes, because they have all the money, and it makes them think they're great. So they acquire more money.

So of course they invented AI to work the same way.

South Park made a pretty good episode about this recently.
