this post was submitted on 05 Aug 2025
101 points (94.7% liked)
Fuck AI
3642 readers
789 users here now
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
founded 1 year ago
you are viewing a single comment's thread
No boss is going to be persuaded by "cooking the planet". Nor do they care about critical thinking rot.
But the hallucinations? That they'll care about.
Pick something non-work-related that your boss is an expert in. Engage the AI on that subject until it generates a whole bunch of hallucinations. (My favourite is to have an AI hallucinate bands that don't exist, albums that don't exist, songs that don't exist, lyrics that don't exist, etc.: all of which is trivial to verify and prove wrong.)
Here. I just generated this conversation in DeepSeek to show you how easy it is to get an AI to hallucinate. I asked a question about an almost completely non-existent concept ("inukpunk") and got it bloviating a bunch of idiocy before catching it on the fact that the "thriving literary movement" it described doesn't actually exist.
Note: I just asked it a three-word question and it created from whole cloth a breathtaking amount of text on a subject that doesn't exist.
That's because "hallucinating" isn't a bug, it's the core feature of LLMs. That tech bros have figured out how to kludge on a way to get it to sometimes recite accessible data doesn't change the fact that the central purpose of these algorithms is to manufacture text from nothing (well, technically from random noise). The "hallucination" is the failure of the tech bros to hide that function.
It's not an add-on feature. The LLM simply produces whatever output earns the best score it can, and several things feed into that score: relevant facts, yes, but also fluency and a confident tone.
If it has no relevant facts, it maximises the others to get a good score. Hence you get confidently wrong statements, because sounding like it knows what it's talking about scores higher than actually giving correct information.
This process is inherent to machine learning at its current level though. It's like a "fake it until you make it" person, who will never admit they're wrong.
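To make the "best score wins, no matter what" point concrete, here's a deliberately toy sketch (not a real LLM, and every candidate string and score weight is made up) of a generator that rewards fluency and confidence alongside factual matches — with no facts available, the most confident-sounding fabrication still wins:

```python
# Toy illustration only: a scorer that rewards fluency and confidence
# as well as factual matches. All candidates and weights are invented
# for demonstration -- this is not how any real LLM is implemented.

def score(candidate, known_facts):
    s = 0.0
    if candidate["text"] in known_facts:  # factual match (if any)
        s += 1.0
    s += candidate["fluency"]             # sounds grammatical
    s += candidate["confidence"]          # sounds authoritative
    return s

def generate(candidates, known_facts):
    # Always emit the best-scoring candidate. There is no penalty for
    # being wrong and no reward for admitting ignorance, so with zero
    # relevant facts the confident fabrication outscores honesty.
    return max(candidates, key=lambda c: score(c, known_facts))

candidates = [
    {"text": "Inukpunk is a thriving literary movement.",
     "fluency": 0.9, "confidence": 0.9},   # total 1.8 with no facts
    {"text": "I have no information on that.",
     "fluency": 0.5, "confidence": 0.1},   # total 0.6 with no facts
]

# The model "knows" nothing about inukpunk, yet fabrication wins.
best = generate(candidates, known_facts=set())
print(best["text"])  # -> Inukpunk is a thriving literary movement.
```

The design flaw the comment describes lives in the scoring function: "correct" is just one term among several, and "I don't know" has no term at all.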
Ngl, inukpunk sounds pretty badass and I'd be thrilled if this was less hallucination and more inadvertent prescience.
Your point still stands though.
Oh, I really want inukpunk to become a real thing. Specifically I use it mentally to describe Tanya Tagaq's music and attitude, but as a literary form it would kick ass on ice too.