this post was submitted on 22 Apr 2025
260 points (94.8% liked)

Technology

[–] salacious_coaster@infosec.pub 80 points 1 week ago (29 children)

The LLM peddlers seem to be going for that exact result. That's why they're calling it "AI". Why is it surprising that non-technical people are falling for it?

[–] Rhaedas@fedia.io 39 points 1 week ago (2 children)

Lots of attacks on Gen Z here, with some valid points about the education the older generations gave them (yet somehow it's their fault). Good thing none of the other generations are being fooled by AI marketing tactics, right?

The debate on consciousness is one we should be having, even if LLMs themselves aren't really there. If you're new to the discussion, look up AI safety and the alignment problem. Then realize that while people think it's about preparing for a true AGI with something akin to consciousness and the dangers we could face, we can have alignment problems without an artificial intelligence. If we think a machine (or even a person) is doing things for the same reasons we want them done, and they aren't, but we can't tell, that's an alignment problem. Everything's fine until they follow their goals and those goals suddenly diverge from ours. And the dilemma is: there are no good solutions.
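The gap described above can be sketched as a toy proxy-goal example. Everything here (the "dirt sensor", the reward values) is invented purely for illustration:

```python
# Hypothetical misalignment sketch: the agent is scored by a proxy
# (what a dirt sensor reports), not by the true goal (rooms cleaned).

def true_goal(rooms_cleaned: int) -> int:
    """What we actually want: clean rooms."""
    return rooms_cleaned

def proxy_goal(rooms_cleaned: int, sensor_covered: bool) -> int:
    """What the agent is rewarded for: the sensor reading."""
    return 100 if sensor_covered else rooms_cleaned

# While the agent behaves as expected, the two goals look identical...
print(proxy_goal(5, sensor_covered=False), true_goal(5))  # 5 5

# ...until it learns to cover the sensor: maximum reward, zero cleaning.
print(proxy_goal(0, sensor_covered=True), true_goal(0))   # 100 0
```

Nothing breaks while the two functions agree, which is exactly why the divergence is hard to detect from the outside.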

But back to the topic. All this is not the fault of Gen Z. We built this world the way it is and raised them to be gullible and dependent on technology. Using them as a scapegoat (those dumb kids) is ignoring our own failures.

[–] AmidFuror@fedia.io 9 points 1 week ago (6 children)

Not the fault of prior generations, either. They were raised by their parents, and them by their parents, and so on.

Sometime way back there was a primordial multicellular life form that should have known better.

[–] Rhaedas@fedia.io 4 points 1 week ago

That's a bit of a reach. We probably should have stayed in the trees, but the trees started disappearing and we had to change.

[–] Death_Equity@lemmy.world 24 points 1 week ago (3 children)

They also are the dumbest generation, with a COVID education handicap and the least technological literacy in terms of mechanical comprehension. They have grown up with technology refined enough that they never needed to learn troubleshooting skills beyond "reboot it".

That they don't understand an LLM can't be conscious is not surprising. LLMs are a neat trick, but far from anything close to consciousness or intelligence.

[–] taladar@sh.itjust.works 22 points 1 week ago (2 children)

I wasn't aware the generation of CEOs and politicians was called "Gen Z".

We have to make the biggest return on our investments, fr fr

The article targets its study at Gen Z, but... yeah, the elderly aren't exactly winners here, either.

[–] wagesj45@fedia.io 16 points 1 week ago (3 children)

That's a matter of philosophy and what a person even understands "consciousness" to be. You shouldn't be surprised that others come to different conclusions about the nature of being and what it means to be conscious.

[–] 0x01@lemmy.ml 12 points 1 week ago (4 children)

Consciousness is an emergent property, generally self awareness and singularity are key defining features.

There is no secret sauce to llms that would make them any more conscious than Wikipedia.

[–] Muaddib@sopuli.xyz -3 points 6 days ago (3 children)

Consciousness comes from the soul, and souls are given to us by the gods. That's why AI isn't conscious.

[–] Sixtyforce@sh.itjust.works 4 points 1 week ago (3 children)

If it was actually AI sure.

This is an unthinking machine algorithm chewing through mounds of stolen data.

[–] wagesj45@fedia.io 9 points 1 week ago

That is certainly one way to view it. One might say the same about human brains, though.

[–] surewhynotlem@lemmy.world 8 points 1 week ago

To be fair, so am i

[–] Vanilla_PuddinFudge@infosec.pub -2 points 1 week ago* (last edited 1 week ago) (2 children)

Are we really going to devil's advocate for the idea that avoiding society and asking a language model for life advice is okay?

[–] Aatube@kbin.melroy.org 12 points 1 week ago (1 children)

No, but thinking about whether it's conscious is an independent thing.

[–] thiseggowaffles@lemmy.zip 7 points 1 week ago (1 children)

It's not devil's advocate. They're correct. It's purely in the realm of philosophy right now. If we can't define "consciousness" (spoiler alert: we can't), then it's impossible to determine with certainty one way or the other. Are you sure that you yourself are not just fancy auto-complete? We're dealing with things like the hard problem of consciousness and free will vs. determinism. Philosophers have been debating these issues for millennia, and we're not much closer to a consensus now than they were at the start.

And honestly, if the CIA's papers on The Gateway Analysis from Project Stargate about consciousness are even remotely correct, we can't rule it out. It would mean consciousness precedes matter, and would support panpsychism. That would almost certainly include things like artificial intelligence. In fact, the question then becomes whether it's even "artificial" to begin with, if consciousness is indeed a field that pervades the multiverse. We could very well be tapping into something we don't fully understand.

[–] tabular@lemmy.world -2 points 1 week ago (1 children)

The only thing one can be 100% certain of is that one is having an experience. If we were a fancy autocomplete then we'd know we had it 😉

[–] thiseggowaffles@lemmy.zip 6 points 1 week ago (1 children)

What do you mean? I don't follow how the two are related. What does being fancy auto-complete have anything to do with having an experience?

[–] tabular@lemmy.world 0 points 1 week ago (1 children)

It's an answer to whether one can be sure they are not just a fancy autocomplete.

More directly: we can't be sure we aren't some autocomplete program in a fancy computer, but since we're having an experience, we are conscious programs.

[–] thiseggowaffles@lemmy.zip 7 points 1 week ago* (last edited 1 week ago)

When I say "how can you be sure you're not fancy auto-complete?", I'm not talking about being an LLM, or even the simulation hypothesis. I'm saying that the way LLM neural networks are structured is functionally similar to our own nervous system (with some changes made specifically for transformer models to make them less susceptible to prompt injection attacks). What I mean is: how do you know that the weights in your own nervous system aren't causing any given stimulus to always produce a specific response along the most heavily weighted pathways? That's how auto-complete works. It's just predicting the most statistically probable response based on the input after it's been filtered through the neural network. In our case it's sensory data instead of a text prompt, but the mechanics remain the same.
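That "most statistically probable response" mechanic can be sketched in a few lines. The vocabulary and weights below are made up for illustration, with a lookup table standing in for an entire trained network:

```python
import math

# Tiny sketch of "fancy auto-complete": fixed weights map a context
# to scores over candidate next words, and the most probable one wins.
VOCAB = ["dog", "mat", "moon"]

def next_word_scores(context: str) -> list[float]:
    # Stand-in for a neural network: deterministic scores from the input.
    weights = {"the cat sat on the": [0.5, 3.0, 0.1]}
    return weights.get(context, [1.0, 1.0, 1.0])

def softmax(scores: list[float]) -> list[float]:
    # Turn raw scores into probabilities that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(next_word_scores("the cat sat on the"))
best = VOCAB[probs.index(max(probs))]
print(best)  # mat — the most statistically probable continuation
```

Swap the text context for a stream of sensory data and the commenter's question is whether our own responses are produced by anything fundamentally different.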

And how do we know whether or not the LLM is having an experience? Again, this is the "hard problem of consciousness". There's no way to quantify consciousness, and it's only ever experienced subjectively. We don't know the mechanics of how consciousness fundamentally works (or at least, if we do, it's likely still classified). Basically, this is a new field and it's still the wild west. Most of these LLMs are still black boxes that we're only barely beginning to understand, just as we're only beginning to understand our own neurology and consciousness.

[–] lupusblackfur@lemmy.world 8 points 1 week ago (1 children)
[–] Vanilla_PuddinFudge@infosec.pub 3 points 1 week ago* (last edited 1 week ago) (1 children)

The batshit insane part is that they could just serve canned answers to thank-yous, but nope... IT'S THE USER'S FAULT!

edit: To the mass-downvoting prick who is too cowardly to comment: what's it like to be best friends with a calculator?

[–] lupusblackfur@lemmy.world 4 points 1 week ago* (last edited 1 week ago)

One would think if they're as fucking smart as they believe they are they could figger a way around it, eh??? 🤣

[–] HubertManne@piefed.social 5 points 1 week ago

It's a friend the way the nice waitress is a friend when you go out to eat.

[–] Xanza@lemm.ee 4 points 1 week ago (2 children)

Those lovable little simpletons.

[–] BreadstickNinja@lemmy.world 0 points 1 week ago (2 children)

Are we positive that they're conscious? I just think we should run some tests.

[–] J52@lemmy.nz 3 points 1 week ago

Whatever; couldn't it also be that a technical consciousness would look rather different from what we assume? Some factors would obviously be reduced or absent, e.g. emotional intelligence. But a tech superintelligence, if ever reached, may pose a number of unexpected problems for us. We should concentrate on unexpected outcomes and establish safeguards.

[–] TimeSquirrel@kbin.melroy.org 1 points 1 week ago (3 children)

The world is going to be absolutely fucked when the older engineers and techies who built all this modern shit and/or maintain it and still understand it all retire or die off.

[–] General_Effort@lemmy.world -3 points 1 week ago (1 children)

Not sure what's alarming about that. It's a bit early to worry about an AI Dred Scott, no?

[–] Bronzebeard@lemm.ee 8 points 1 week ago (2 children)

It's alarming that people are so gullible that a glorified autocorrect can fool them into thinking it's sapient.

"how dare you insult my robot waifu?!"

[–] orclev@lemmy.world 0 points 1 week ago

Is it still passing the Turing test if you don't think either one is human?
