this post was submitted on 24 Oct 2025
201 points (100.0% liked)
science
you are viewing a single comment's thread
I genuinely don’t understand the impulse to tell the AI it was wrong or to give it a chance to clarify. It can’t learn from its mistakes. It doesn’t even understand the concept of a mistake.
It's for the same reason you'd refine your query in an old-school Google Search. "Hey, this is wrong, check again" often turns up a different set of search results that are then shoehorned into the natural language response pattern. Go fishing two or three times and you can eventually find what you're looking for. You just have to "trust but verify" as the old saying goes.
It understands the concept of not finding the right answer in the initial heuristic and trying a different heuristic.
It may have been programmed to try a different path when given a specific input, but it literally cannot understand anything.
It doesn't need to understand anything. It just needs to spit out the answer I'm looking for.
A calculator doesn't need to understand the fundamentals of mathematical modeling to tell me the square root of 144. If I type in 143 by mistake and get a weird answer, I correct my inputs and try again.
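To put the calculator analogy in code terms (a minimal Python sketch, not anything the calculator actually runs): the tool is a pure input-to-output mapping, so a surprising result means you check your input, not the tool's "understanding."

```python
import math

# sqrt() has no concept of what a square root "means"; it just maps input to output.
print(math.sqrt(144))  # 12.0 -- the answer I wanted
print(math.sqrt(143))  # a "weird" non-integer answer, so I recheck my input

# The fix is on my side: correct the typo and try again.
corrected = 144
print(math.sqrt(corrected))  # 12.0
```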
Calculators also don't misinterpret things 45% of the time.