Emojis are for chatting/texting. I don't even want to see them here.
Perspectivist
Samsung default one. It's funny when I'm at the hardware store buying supplies for work in the morning - a phone rings and there's like 5 builders who all check their phones.
I hear you - you're reacting to how people throw around the word "intelligence" in ways that make these systems sound more capable or sentient than they are. If something just stitches words together without understanding, calling it intelligent seems misleading, especially when people treat its output as fact.
But here's where I think we're talking past each other: when I say it's intelligent, I don't mean it understands anything. I mean it performs a task that normally requires human cognition: generating coherent, human-like language. That's what qualifies it as intelligent - not generally, like a human, but as a narrow/weak intelligence. The fact that it often says true things is almost accidental: a side effect of having been trained on a lot of correct information, not the result of human-like understanding.
So yes, it just responds with statistical accuracy, but that is intelligence in the technical sense. It's not understanding. It's not reasoning. It's just really good at speaking.
Things were different during the pandemic because there was a real risk of vastly overwhelming the healthcare system. Airborne diseases never went away, but the difference is that if you catch one now, you can get treated - which wasn't always the case back then. That's why we urged people to wear masks and get vaccinated: so that we wouldn't all get sick at the same time.
I think a huge issue online is just how incredibly mean people can be to each other - and the fact that they don’t even see themselves as mean. They’ve built a story around how their behavior is justified, so they keep doing it, completely oblivious to the fact that they’re part of the problem.
A linear regression model isn’t an AI system.
The term AI didn’t lose its value - people just realized it doesn’t mean what they thought it meant. When a layperson hears “AI,” they usually think AGI, but while AGI is a type of AI, it’s not synonymous with the term.
I’ve had this discussion countless times, and more often than not, people argue that an LLM isn’t intelligent because it hallucinates, confidently makes incorrect statements, or fails at basic logic. But that’s not a failure on the LLM’s part - it’s a mismatch between what the system is and what the user expects it to be.
An LLM isn’t an AGI. It’s a narrowly intelligent system, just like a chess engine. It can perform a task that typically requires human intelligence, but it can only do that one task, and its intelligence doesn’t generalize across multiple independent domains. A chess engine plays chess. An LLM generates natural-sounding language. Both are AI systems and both are intelligent - just not generally intelligent.
What does history have to do with it? We’re talking about the definition of terms - and a machine learning system like an LLM clearly falls within the category of Artificial Intelligence. It’s an artificial system capable of performing a cognitive task that’s normally done by humans: generating language.
The chess opponent on Atari is AI too. I think the issue is that when most people hear "intelligence," they immediately think of human-level or general intelligence. But an LLM - while intelligent - is only so in a very narrow sense, just like the chess opponent. One’s intelligence is limited to playing chess, and the other’s to generating natural-sounding language.
AI is an extremely broad term, and LLMs fall under it. You may avoid calling them that, but it's the correct term nevertheless.
Buddhists had probably figured out a lot of things about the workings of the human mind long before science did.