The LLM peddlers seem to be going for that exact result. That's why they're calling it "AI". Why is it surprising that non-technical people are falling for it?
Lots of attacks on Gen Z here, some valid points about the education they were given by the older generations (yet somehow it's their fault). Good thing none of the other generations are being fooled by AI marketing tactics, right?
The debate on consciousness is one we should be having, even if LLMs themselves aren't really there. If you're new to the discussion, look up AI safety and the alignment problem. Then realize that while people think it's about preparing for a true AGI with something akin to consciousness and the dangers we could face, we have alignment problems even without an artificial intelligence. If we think a machine (or even a person) is doing things for the same reasons we want them done, and they aren't, but we can't tell, that's an alignment problem. Everything's fine until they follow their goals and those goals suddenly diverge from ours. And the dilemma is: there are no good solutions.
But back to the topic. All this is not the fault of Gen Z. We built this world the way it is and raised them to be gullible and dependent on technology. Using them as a scapegoat (those dumb kids) is ignoring our own failures.
Not the fault of prior generations, either. They were raised by their parents, and them by their parents, and so on.
Sometime way back there was a primordial multicellular life form that should have known better.
That's a bit of a reach. We probably should have stayed in the trees, but the trees started disappearing and we had to change.
They also are the dumbest generation, with a COVID education handicap and the least technical literacy in terms of understanding how the machinery actually works. They have grown up with technology refined enough that they never needed to learn troubleshooting skills beyond "reboot it."
It's not surprising that they don't understand an LLM can't be conscious. LLMs are a neat trick, but far from anything close to consciousness or intelligence.
I wasn't aware the generation of CEOs and politicians was called "Gen Z".
We have to make the biggest return on our investments, fr fr
The article focuses its study on Gen Z, but... yeah, the elderly aren't exactly winners here, either.
That's a matter of philosophy and what a person even understands "consciousness" to be. You shouldn't be surprised that others come to different conclusions about the nature of being and what it means to be conscious.
Consciousness is an emergent property; self-awareness and singularity of experience are generally its key defining features.
There is no secret sauce to llms that would make them any more conscious than Wikipedia.
Consciousness comes from the soul, and souls are given to us by the gods. That's why AI isn't conscious.
If it was actually AI sure.
This is an unthinking machine algorithm chewing through mounds of stolen data.
That is certainly one way to view it. One might say the same about human brains, though.
To be fair, so am i
Are we really going to devil's advocate for the idea that avoiding society and asking a language model for life advice is okay?
No, but thinking about whether it's conscious is an independent thing.
It's not devil's advocate. They're correct. It's purely in the realm of philosophy right now. If we can't define "consciousness" (spoiler alert: we can't), then it's impossible to determine with certainty one way or the other. Are you sure that you yourself are not just fancy auto-complete? We're dealing with problems like the hard problem of consciousness and free will vs. determinism. Philosophers have been debating these issues for millennia, and we're not much closer to a consensus now than when they started.
And honestly, if the CIA's papers on The Gateway Analysis from Project Stargate about consciousness are even remotely correct, we can't rule it out. It would mean consciousness precedes matter, and would support panpsychism. That would almost certainly include things like artificial intelligence. In fact, the question then becomes whether it's even "artificial" to begin with, if consciousness is indeed a field that pervades the multiverse. We could very well be tapping into something we don't fully understand.
The only thing one can be 100% certain of is that one is having an experience. If we were a fancy autocomplete then we'd know we had it 😉
What do you mean? I don't follow how the two are related. What does being fancy auto-complete have anything to do with having an experience?
It's an answer to whether one can be sure they're not just a fancy autocomplete.
More directly: we can't be sure we're not some autocomplete program running on a fancy computer, but since we're having an experience, we would be conscious programs.
When I say "how can you be sure you're not fancy auto-complete," I'm not talking about being an LLM or even the simulation hypothesis. I'm saying that the way LLMs' neural networks are structured is functionally similar to our own nervous system (with some changes made specifically for transformer models to make them less susceptible to prompt injection attacks). What I mean is: how do you know that the weights in your own nervous system aren't causing any given stimulus to always produce a specific response along the most heavily weighted pathways? That's how auto-complete works: it predicts the most statistically probable response to the input after filtering it through the neural network. In our case it's sensory data instead of a text prompt, but the mechanics remain the same.
And how do we know whether or not the LLM is having an experience? Again, this is the "hard problem of consciousness." There's no way to quantify consciousness, and it's only ever experienced subjectively. We don't know the mechanics of how consciousness fundamentally works (or at least, if we do, it's likely still classified). Basically, what I'm saying is that this is a new field and it's still the wild west. Most of these LLMs are still black boxes whose inner workings we're only barely beginning to understand, just as we're only beginning to understand our own neurology and consciousness.
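The "fancy auto-complete" mechanic described above can be sketched as a toy next-token predictor: fixed weights deterministically map a stimulus to its most probable response. This is only an illustration of the idea (the bigram table, token names, and scores are made up; real LLMs use transformer networks over billions of weights):

```python
import math

# Toy "fancy auto-complete": fixed weights score possible next tokens
# for a given context; the most probable one always wins.
# (Hypothetical values for illustration -- not any real model's weights.)
WEIGHTS = {
    "thank": {"you": 2.5, "goodness": 0.8, "tank": -1.0},
    "good": {"morning": 1.9, "grief": 0.4, "bot": -0.5},
}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    m = max(scores.values())  # subtract max for numerical stability
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def predict_next(context):
    """The same stimulus, filtered through the same weights, always
    yields the same most-probable response -- the point of the analogy."""
    probs = softmax(WEIGHTS[context])
    return max(probs, key=probs.get)

print(predict_next("thank"))  # -> you
print(predict_next("good"))   # -> morning
```

Whether a system like this (scaled up enormously) "has an experience" is exactly the question the comment leaves open; the code only shows the deterministic stimulus-response part.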
The batshit insane part of that is they could just make easy canned answers for thank yous, but nope...IT'S THE USER'S FAULT!
edit: To the mass-downvoting prick who is too cowardly to comment: what's it like to be best friends with a calculator?
One would think if they're as fucking smart as they believe they are they could figger a way around it, eh??? 🤣
It's a friend the way the nice waitress is a friend when you go out to eat.
Those lovable little simpletons.
Are we positive that they're conscious? I just think we should run some tests.
Whatever; couldn't it also be that a technical consciousness would look rather different from what we assume? Some factors would obviously be reduced or absent, e.g. emotional intelligence. But a technological superintelligence, if ever reached, may pose a number of unexpected problems for us. We should concentrate on unexpected outcomes and establish safeguards.
The world is going to be absolutely fucked when the older engineers and techies who built all this modern shit and/or maintain it and still understand it all retire or die off.
Not sure what's alarming about that. It's a bit early to worry about an AI Dred Scott, no?
It's alarming people are so gullible that a glorified autocorrect can fool them into thinking it's sapient
"how dare you insult my robot waifu?!"
Is it still passing the Turing test if you don't think either one is human?