this post was submitted on 06 Aug 2023
265 points (92.6% liked)

Technology


Yes! This is a brilliant explanation of why language use is not the same as intelligence, and why LLMs like ChatGPT are not intelligent. At all.

[–] Lemmylefty@lemmy.world 47 points 2 years ago (4 children)

[…] in blog posts and videos and published memoirs, autistic teens and young adults described living for a decade or more without any way to communicate, while people around them assumed they were intellectually deficient.

On a related note… only 5% of hearing parents with a deaf child will learn sign language.

[–] 1stTime4MeInMCU@mander.xyz 26 points 2 years ago (2 children)

That’s awful. I don’t know why sign language isn’t made an official state language that everyone has to learn to some basic level of proficiency.

[–] littlewonder@lemmy.world 11 points 2 years ago (2 children)

Amen! And it would benefit literally everybody. You can communicate across a room or in loud environments. It's so useful!

[–] Ser_Salty@feddit.de 7 points 2 years ago (1 children)

You could talk about blind people without them knowing

[–] 14th_cylon@lemm.ee 1 points 2 years ago

i bet these bastards would somehow learn to interpret the changes in air pressure you'd create when signing... that's how you create a supervillain.

[–] WarmSoda@lemm.ee 4 points 2 years ago (1 children)

But then other people could listen to what we're saying!

[–] superminerJG@lemmy.world 9 points 2 years ago

This gets me wondering: In sign languages, are there different words for "hearing" (i.e. watching someone sign to you) vs "seeing" (i.e. looking at something that isn't signing)?

[–] captainlezbian@lemmy.world 4 points 2 years ago

Because we only recently stopped telling parents not to teach it to hard-of-hearing children.

[–] SuddenDownpour@lemmy.world 7 points 2 years ago

...That's disgusting.

[–] Spzi@lemm.ee 17 points 2 years ago (2 children)

“Something trained only on form” — as all LLMs are, by definition — “is only going to get form; it’s not going to get meaning. It’s not going to get to understanding.”

I had lengthy and intricate conversations with ChatGPT about philosophy and religious concepts. It allowed me to playfully peek into Spinoza's worldview, with a few errors.

I have no problem accepting that it is form, but I cannot deny it conveys meaning as if it understands.

The article is very opinionated and dismissive in that regard. It even goes so far as to predict what future research and engineering cannot achieve; that makes it untrustworthy.

We cannot even pin down what we mean by intelligence and meaning. And while being far too long, the article doesn't even mention emergent capabilities, or quote any of the many contrary scientific views.

Apart from the unnecessarily long anecdotes about autistic and disabled people, did anybody learn anything from this article? I feel it's an uncritical parroting of what people like to think anyways to feel supreme and secure.

[–] kaffiene@lemmy.world 11 points 2 years ago (3 children)

LLMs are definitely not intelligent. If you understand how they work, you'll realise why that is. LLMs reflect the intelligence in the work which they are trained on. No more, no less.

[–] SlopppyEngineer@lemmy.world 8 points 2 years ago

That's especially fun when you ask the same question in two different languages and get different results or even just gibberish in the other, usually non-English language. It clearly has more training data in English than it does for some other languages.

[–] Spzi@lemm.ee 5 points 2 years ago

That very much depends on what you define as "intelligent". We lack a clear definition.

I agree: These early generations of specific AIs are clearly not on the same level as human intelligence.

And still, we can already have more intelligent conversations with them than with most humans.

It's not a fair comparison though. It's as if we'd compare the language region of a toddler with a complete brain of an adult. Let's see what the next few years bring.

I'm not making that point, just mentioning it can be made on an academic level: There's a paper about the surprising emergent capabilities of GPT-4, titled "Sparks of AGI".

[–] SkepticalButOpenMinded@lemmy.ca 3 points 2 years ago

That might seem plausible until you read deeply into the latest cognitive science. Nowadays, the growing consensus is around “predictive coding” theory of cognition, and the idea is that human cognition also works by minimizing prediction error. We have models in our brains that reflect input that we’ve been trained on. I think anyone who understands human cognition and LLMs cannot confidently say that LLMs are or are not intelligent yet.

[–] qyron@lemmy.pt 6 points 2 years ago

I've read a few texts from the same source and they read quite childish.

It felt like reading essays from very young children: there is some degree of coherence, some information is there but it lacks actual advancement on the subject.

[–] Hextic@lemmy.world 17 points 2 years ago

The ability to speak does not make you intelligent.

[–] unreachable@lemmy.my.id 13 points 2 years ago (1 children)
[–] PipedLinkBot@feddit.rocks 4 points 2 years ago

Here is an alternative Piped link(s): https://piped.video/watch?v=TUq6rGdfJSo

Piped is a privacy-respecting open-source alternative frontend to YouTube.


[–] Thorny_Thicket@sopuli.xyz 12 points 2 years ago (2 children)

Ability to speak seems like an obvious sign of some kind of intelligence and complexity, but I don't remember anyone ever arguing that inability to speak means lack of intelligence. We know of plenty of intelligent species that lack the ability to speak a language as complex as humans can, but we don't consider them unintelligent because of that.

"LLMs are not intelligent at all"

Sucks to lose your job to a potentially more competent AI that lacks any intelligence, then.

[–] FlyingSquid@lemmy.world 5 points 2 years ago (1 children)

Did you say "sucks to lose your job to a manufacturing robot that lacks any intelligence" to the countless people in manufacturing jobs left destitute by robotics starting in the 1960s?

[–] DistractedDev@lemm.ee 4 points 2 years ago

I did. Change is hard, but it's hard to argue that we would be better off without those robots. They do a lot of work for us, and we can all have better stuff because of them. In a better world we'd be excited because it means we all have to do less work, but the upper class just keeps finding more stuff for us to do.

[–] NeoNachtwaechter@lemmy.world 2 points 2 years ago

but I don't remember anyone ever arguing that inability to speak means lack of intelligence.

You don't learn heuristics by means of an argument.

[–] Qxzkjp@feddit.uk 10 points 2 years ago (2 children)

Which about five minutes ago was precisely what the term ‘artificial intelligence’ meant, but since tech companies managed to dumb down and rebrand ‘AI’ to mean “anything utilizing a machine-learning algorithm”,

Oh look, the universal signal for "please ignore me, I am a simpleton".

I had no idea that all these years (up until "five minutes ago") I had been playing video games against human-level artificial intelligences 🙄

I'd respond to the rest of it, but there's no point, she doesn't have the first clue what she's talking about. To quote Shakespeare: "it is a tale told by an idiot, full of sound and fury, signifying nothing."

[–] Peanutbjelly@sopuli.xyz 10 points 2 years ago (4 children)

This whole thread is absurd.

Chatgpt has a form of intelligence depending on your definition of intelligence. It may also be considered conscious in a very alien and undeveloped way. It is definitely not sentient.

Kind of like having the stochastic word generating part of a brain and nothing else.

You can still shape it into something capable of intelligent and directed activity.

People are really bad at accepting the level of nuance necessary for this topic.

It is useful and fantastic for what it already is. People are just really bad at understanding what it is.

[–] FaceDeer@kbin.social 9 points 2 years ago (3 children)

A lot of people are deeply invested in the notion that human intelligence is unique and special and impossible to replicate. Either their personal sense of worth is bound up in that notion (see for example many of the artists who get very angry when people call AI generated images "art") or it's simply a threat to their jobs and economic wellbeing. The result is a powerful need to convince themselves that there's a special something that's missing from ChatGPT and its ilk that will "never" be replicated by machines.

It's true that ChatGPT isn't intelligent in the same way that human brains are intelligent. But it is intelligent, in ways that are useful. And "never" is a bad bet to make for the rest of those capabilities.

[–] kaffiene@lemmy.world 1 points 2 years ago (1 children)

ChatGPT is not intelligent. Not in the sense in which we use that word anywhere else, including the animal kingdom. The transformer is an extraordinarily clever and sophisticated algorithm, though.

[–] FaceDeer@kbin.social 5 points 2 years ago

As I said:

It's true that ChatGPT isn't intelligent in the same way that human brains are intelligent.

There isn't just one kind of intelligence.

[–] Renacles@discuss.tchncs.de 9 points 2 years ago

New technology comes out and all people seem interested in is bashing it instead of figuring out how to use it to make our lives easier.

Most of it comes from the way our society is structured to require everyone to have a job or pretty much starve. But if AI is making so many jobs obsolete, shouldn't we be trying to change that instead of pretending AI won't keep advancing?

[–] fidodo@lemm.ee 6 points 2 years ago (2 children)

I view it by building up to the technology.

Is a book sentient? It is capable of providing recorded knowledge in the form of sequence of symbols on a specific subject at a level of proficiency far above the reader's. But no, it's static information that originated from a human.

Is a library sentient? It allows for systematic retrieval of knowledge on a vast amount of subjects far beyond what any human is capable of knowing. But no, it's just a static categorization of documents curated by a human.

Is a search engine sentient? It allows for automatic retrieval of highly relevant knowledge based on a query from a human. But no, it's just token based pattern matching to find similar documents.

So why is an LLM suddenly sentient? It's able to produce highly relevant sequences of words based on recorded knowledge specifically tailored to the sequences of words around it, but it's just a probability engine to find highly relevant token sequences that match the context around it.

The underlying mechanism simply has no concept of a world view or a mental model of the world around it. It's basically a magic book that allows you to retrieve information from any document ever written, in a way tailored to a document you wrote.
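That "probability engine" idea can be sketched with a toy bigram model (the corpus, names, and sampling scheme below are made up for illustration; real LLMs use transformers over learned token embeddings, not raw counts, but the principle of picking likely next tokens is the same):

```python
import random
from collections import Counter, defaultdict

# Tiny made-up corpus for the demo.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word, rng):
    """Sample a successor in proportion to how often it followed
    `word` in the training text; no meaning involved, just frequencies."""
    counts = following[word]
    if not counts:  # dead end: this word never had a successor
        return None
    words, weights = zip(*counts.items())
    return rng.choices(words, weights=weights)[0]

# Generate a short continuation from a seed word.
rng = random.Random(0)
out = ["the"]
for _ in range(5):
    nxt = next_word(out[-1], rng)
    if nxt is None:
        break
    out.append(nxt)
print(" ".join(out))
```

The generated text looks locally plausible only because the statistics of the training text were plausible; the model itself has no representation of cats, mats, or anything else.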

[–] uroybd@lemmy.world 5 points 2 years ago (2 children)

Yes. LLMs generate texts. They don't use language. Using a language requires an understanding of the subject one is going to express. LLMs don't understand.

[–] Spzi@lemm.ee 2 points 2 years ago (1 children)

I guess you're right, but find this a very interesting point nevertheless.

How can we tell? How can we tell that we use and understand language? How would that be different from an arbitrarily sophisticated text generator?

For the sake of the comparison, we should talk about the presumed intelligence of other people, not our ("my") own.

[–] kaffiene@lemmy.world 0 points 2 years ago

This gets to the core of the issue. LLMs are a model of the statistical relationships between words in texts, in a very large number of dimensions. The intelligence they appear to exhibit is that which existed in their source material in the first place. They don't have a model of the world itself.

Consider how Midjourney can produce photorealistic images of people, yet very often gets hands wrong. How is that? It's because when you train on images, you get a statistical representation of what hands look like, without the world model that lets you know hands only have five fingers and how they're arranged. AIs like this are very clever copiers. They are not intelligent.

[–] Spzi@lemm.ee 4 points 2 years ago (3 children)

I can mostly follow, just want to exclude the last paragraph which contains assumptions about a black box.

That being said, how is the human brain different from what you describe?

[–] housepanther@lemmy.goblackcat.com 4 points 2 years ago* (last edited 2 years ago)

Language makes a poor heuristic for intelligence because language is multidimensional and cannot be easily quantified or qualified. Language is expressed in so many varied ways; it even differs between cultures, and its expressions vary as well. Indeed, the article you're referring to is quite good.

[–] SouthFresh@lemmy.ml 4 points 2 years ago

The real indicator is Language + Puns.

[–] Lorenz_These_Curves@lemmy.world 2 points 2 years ago

I'm shocked, I thought it was sentient!

[–] MaxVoltage@lemmy.world 1 points 2 years ago (1 children)

indeed whales can't speak and are very smart 🧠 /s

[–] NeoNachtwaechter@lemmy.world 1 points 2 years ago

Wait until you learn the language of deer!

[–] prototyperspective@lemmy.world 1 points 2 years ago

Here are some correlations of language skills with other intelligence factors or evaluations (e.g. IQ) via a study (I recently integrated the info into the article Neurogenetics – language GWAS).

However, I largely agree – see for example this argument / its sources
