[–] Strider@lemmy.world 2 points 1 day ago (4 children)

Everything. As humanity learns more, we recognize errors, or wisdom that withstands the test of time.

We could go into the definition of intelligence, but it's just not worth it.

We can just disagree and that's fine.

[–] Perspectivist@feddit.uk 0 points 1 day ago (3 children)

I’ve had this discussion countless times, and more often than not, people argue that an LLM isn’t intelligent because it hallucinates, confidently makes incorrect statements, or fails at basic logic. But that’s not a failure on the LLM’s part - it’s a mismatch between what the system is and what the user expects it to be.

An LLM isn’t an AGI. It’s a narrowly intelligent system, just like a chess engine. It can perform a task that typically requires human intelligence, but it can only do that one task, and its intelligence doesn’t generalize across multiple independent domains. A chess engine plays chess. An LLM generates natural-sounding language. Both are AI systems and both are intelligent - just not generally intelligent.

[–] Strider@lemmy.world 2 points 1 day ago (2 children)

Sorry, no. It's not intelligent at all. It just responds with statistical accuracy. There's no objective debate to be had about it either, because that's simply how neural networks work.

I was hesitant to answer because we're clearly both convinced. So out of respect let's just close by saying we have different opinions.

[–] Perspectivist@feddit.uk 1 points 1 day ago* (last edited 1 day ago) (1 children)

I hear you - you're reacting to how people throw around the word “intelligence” in ways that make these systems sound more capable or sentient than they are. If something just stitches words together without understanding, calling it intelligent seems misleading, especially when people treat its output as facts.

But here’s where I think we’re talking past each other: when I say it’s intelligent, I don’t mean it understands anything. I mean it performs a task that normally requires human cognition - generating coherent, human-like language. That’s what qualifies it as intelligent. Not generally so, like a human, but as a narrow/weak intelligence. The fact that it often says true things is almost accidental - a side effect of having been trained on a lot of correct information, not the result of human-like understanding.

So yes, it just responds with statistical accuracy, but that is intelligent in the technical sense. It’s not understanding. It’s not reasoning. It’s just really good at speaking.
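
To make "statistical" concrete, here's a minimal sketch of next-token sampling: the model maps a prompt to a probability distribution over its vocabulary, and one token is drawn from that distribution. (The Hugging Face transformers library and the public gpt2 checkpoint are just illustrative choices here, not anything specific to this discussion.)

```python
# Toy sketch: an LLM "responds with statistical accuracy" by assigning a
# probability to every possible next token, then sampling one of them.
# Assumes the Hugging Face transformers library and the public "gpt2" checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "A chess engine plays chess. An LLM"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits[0, -1]        # scores for the next token only
probs = torch.softmax(logits, dim=-1)              # scores -> probability distribution

next_id = torch.multinomial(probs, num_samples=1)  # draw one token from that distribution
print(tokenizer.decode(next_id))
```

Run in a loop - append the sampled token and repeat - that's essentially all the system does at inference time, which is exactly why I'd call it narrow intelligence rather than understanding.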

[–] Strider@lemmy.world 2 points 1 day ago

Thank you for the nice answer!

We can definitely agree that it can provide intelligent answers without itself being an intelligence 👍