this post was submitted on 24 Aug 2025

TechTakes


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] dgerard@awful.systems 6 points 6 hours ago (2 children)

TIL that "Aris Thorne" is a character name favoured by ChatGPT - which means its presence is a reliable slop tell, lol

like the dumbass-ray version of Ballard calling multiple characters variants on "Traven"

what to do with this information

[–] BlueMonday1984@awful.systems 1 points 5 hours ago (1 children)

what to do with this information

If you know any sci-fi/fantasy mags, you should probably tell them about it to help them identify and reject slop more easily.

[–] dgerard@awful.systems 1 points 2 hours ago

with a moment's thought, it should be obvious that they are painfully aware, and with another moment's thought that that's where I found this out.

[–] irelephant@lemmy.dbzer0.com 8 points 14 hours ago (1 children)

Pro tip: search GitHub for "removed env". Vibe coders who don't understand envs probably don't know git either.
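As a sketch of why that search works (repo and key names below are made up, but the git behaviour is real): a commit that deletes a .env does nothing to the copies already baked into history, so "removed env" commit messages are a flag for recoverable secrets.

```shell
# Hypothetical demo repo: "removing" a .env in a later commit leaves the
# secret fully recoverable from git history, which is exactly what the
# GitHub search for "removed env" commit messages turns up.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q leaky
cd leaky
git config user.email demo@example.com
git config user.name demo
echo 'API_KEY=supersecret' > .env
git add .env
git commit -q -m 'add config'
git rm -q .env
git commit -q -m 'removed env'
# Gone from the working tree, but one command away for anyone who clones:
git show HEAD^:.env   # prints API_KEY=supersecret
```

The only real fix after a leak is rotating the key; rewriting history (e.g. with `git filter-repo`) doesn't un-leak anything already cloned.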

[–] zogwarg@awful.systems 4 points 12 hours ago

My eyes are bleeding. WARNING: psychic damage will occur.

[–] BigMuffN69@awful.systems 6 points 15 hours ago

https://www.argmin.net/p/the-banal-evil-of-ai-safety

Once again shilling another great Ben Recht post. This time calling out the fucking insane irresponsibility of "responsible" AI providers to do the bare minimum to prevent people from having psychological breaks from reality.

"I’ve been stuck on this tragic story in the New York Times about Adam Raine, a 16-year-old who took his life after months of getting advice on suicide from ChatGPT. Our relationship with technological tools is complex. That people draw emotional connections to chatbots isn’t new (I see you, Joseph Weizenbaum). Why young people commit suicide is multifactorial. We’ll see whether a court will find OpenAI liable for wrongful death.

But I’m not a court of law. And OpenAI is not only responsible, but everyone who works there should be ashamed of themselves."

[–] fnix@awful.systems 6 points 19 hours ago* (last edited 19 hours ago) (3 children)

Mark Cuban is feeling bullied by Bluesky. He will also have you know that you need to keep aware of the important achievements of your betters: though he is currently only the 5th most blocked user on there, he was indeed once the 4th most blocked. Perhaps he is just crying out to move up the ranks once more?

It’s really all about Bluesky employees being able to afford their healthcare for Mark you see.

And of course, here’s never-Trumper Anne Applebaum running interference for him. Really an appropriate hotdog-guy-meme moment – as much as I shamelessly sneer at Cuban, I’m genuinely angered by the complete inability of the self-satisfied ‘democracy defender’ set to see their own complicity in perpetuating a permission structure for privileged white men to feel eternally victimized.

[–] Soyweiser@awful.systems 4 points 5 hours ago

As I said on bsky, why is he complaining? If he cares, he could fund bsky himself. Bsky could name an office wing after him, give his kids legacy admissions, give him a shoutout in every video they make.

(While my tone is mocking here, I actually don't think these things are bad (except the legacy admissions, obviously), and he should be a patron. The unwillingness of the 'left/democrat' rightwing rich people to use their wallets, while the right hands out welfare to everyone willing to say slurs, sucks. Reminded of Hillary Clinton starting a GoFundMe for a staffer with a disease.)

[–] pikesley@mastodon.me.uk 2 points 5 hours ago

@fnix @BlueMonday1984 these fuckers all have *such thin skin*

[–] istewart@awful.systems 6 points 19 hours ago

Only had to scroll about halfway through the replies before I found somebody suggesting an SPAC

[–] BlueMonday1984@awful.systems 14 points 1 day ago (1 children)
[–] TinyTimmyTokyo@awful.systems 11 points 19 hours ago (1 children)

Last year McDonald's withdrew AI from its own drive-throughs as the tech misinterpreted customer orders - resulting in one person getting bacon added to their ice cream in error, and another having hundreds of dollars worth of chicken nuggets mistakenly added to their order.

Clearly artificial superintelligence has arrived, and instead of killing us all with diamondoid bacteria, it's going to kill us by force-feeding us fast food.

[–] JFranek@awful.systems 4 points 4 hours ago

resulting in one person getting bacon added to their ice cream in error

At first, I couldn't believe that the staff didn't catch that. But thinking about it, no, I totally can.

[–] BlueMonday1984@awful.systems 8 points 1 day ago (1 children)

Found a couple articles about blunting AI's impact on education (got them off of Audrey Watters' blog, for the record).

The first is a New York Times guest essay by NYU vice provost Clay Shirky, which recommends "moving away from take-home assignments and essays and toward [...] assessments that call on students to demonstrate knowledge in real time."

The second is an article by Kate Manne calling for professors to prevent cheating via AI, which details her efforts in doing so:

Instead of take-home essays to write in their own time, I’ll have students complete in-class assignments that will be hand-written. I won’t allow electronic devices in my class, except for students who tell me they need them as a caregiver or first responder or due to a disability. Students who do need to use a laptop will have to complete the assignment using google docs, so I can see their revision history.

Manne does note the problems with this (outing disabled students, class time spent writing, and difficulties in editing, rewriting, and make-up work), but still believes "it is better, on balance, to take this approach rather than risk a significant proportion of students using AI to write their essays."

[–] Seminar2250@awful.systems 10 points 1 day ago* (last edited 1 day ago)

what worked for me teaching an undergrad course last year was to have

  • in-class exams weigh 90% of the total grade, but let them drop their lowest score
  • take-home work weigh 10% and be graded on completion (which i announced to the class, of course)
    • i was also diligent about posting solutions (sometimes before the due date; it's a completion grade after all) and i let students know that if they wanted direct feedback they could bring their solutions to office hours


it ended up working pretty well. an added benefit was that my TAs didn't have to deal with the nightmare of grading 120 very poorly written homeworks every four weeks. my students also stopped obsessing about the grades they would receive on their homeworks and instead focused on ~~learning~~ the grades they would receive on their exams

however, at the k-12 level, it feels like a much harder problem to tackle. parental involvement is the only solution i can think of, and that's already kind of a nightmare (at least here in the us)

[–] Seminar2250@awful.systems 9 points 1 day ago* (last edited 1 day ago) (1 children)

people who talk about "prompting" like it's a skill would take a class^[read: watch a youtube tutorial] on tasseomancy because a coffee shop opened across the street

[–] HedyL@awful.systems 8 points 1 day ago (1 children)

I think this is more about plausible deniability: If people report getting wrong answers from a chatbot, this is surely only because of their insufficient "prompting skills".

Oddly enough, the laziest and most gullible chatbot users tend to report the smallest number of hallucinations. There seems to be a correlation between laziness, gullibility and "great prompting skills".

[–] Seminar2250@awful.systems 5 points 1 day ago* (last edited 1 day ago) (1 children)

is the deniability you are referring to of the clanker-wankers (CW^[unrelated, but i miss when that channel had superhero shows. bring back legends of tomorrow]) themselves or the clanker-producers (e.g. sam altman)?

because i agree on the latter^[i.e., someone like altman would say "you're prompting it wrong" to skirt accountability or create an air of scientific/mathematical rigor], but i do see CWs saying stupid shit like "there is more to it than just writing a description"

edit: credit, it was @antifuchs who introduced the term to me here

edit2: sorry, my dumbass understands your point now (i think). if i wank clankers and someone tells me "that shit doesn't work," i can just respond "you must have been prompting it wrong". but, i do think the way many users of these tools are so sycophantic means it's also a genuine belief, and not just a way to escape responsibility. these people are fart sniffers, after all

[–] HedyL@awful.systems 6 points 1 day ago (1 children)

To put it more bluntly: Yes, I believe this is mainly used as an excuse by AI boosters to distract from the poor quality of their product. At the same time, as you mentioned, there are people who genuinely consider themselves "prompting wizards", usually because they are either too lazy or too gullible to question the chatbot's output.

[–] YourNetworkIsHaunted@awful.systems 4 points 13 hours ago (1 children)

For all that user error can be a real thing it also gets used as a thought-terminating cliche by engineer types. This is a tendency that industry absolutely exploits to justify not only AI grifts but badly designed products.

[–] HedyL@awful.systems 4 points 4 hours ago

When an AI creates fake legal citations, for example, and the prompt wasn't something along the lines of "Please make up X", I don't know how the user could be blamed for this. Yet, people keep claiming that outputs like this could only happen due to "wrong prompting". At the same time, we are being told that AI could easily replace nearly all lawyers because it is that great at lawyerly stuff (supposedly).

[–] BlueMonday1984@awful.systems 6 points 1 day ago (2 children)
[–] Amoeba_Girl@awful.systems 2 points 6 hours ago* (last edited 5 hours ago)

That OpenAI haven't recalled their product after it's been involved in several violent deaths, that it would even be absurd to suggest they should recall it, really highlights how corrupt and disgusting the industry and the whole structure propping it up are.

[–] HedyL@awful.systems 4 points 1 day ago (1 children)

To me, in terms of the chatbot's role, this seems possibly even more damning than the suicides. The chatbot didn't just support this man's delusions about his mother and his ex-girlfriend being after him, but apparently made up additional delusions on its own, further "incriminating" various people including his mother, whom he eventually killed. In addition, the chatbot gave the man a "Delusional Risk Score" of "Near zero".

On the other hand, I'm sure people are going to come up with excuses even for this by blaming the user, his mental illness, his mother or even society at large.

[–] V0ldek@awful.systems 7 points 1 day ago (1 children)

On the other hand, I’m sure people are going to come up with excuses even for this by blaming the user, his mental illness, his mother or even society at large.

I mean, I am going to say it but not as an excuse. Should companies that supply these products be held accountable as the criminals they are? Yes. Is this all downstream from the fact our society hasn't treated mental health as a serious matter, therapy access is garbage, all the while being a young person in 2025 is a hopeless string of horrors and anxiety? Also yes.

Torment Chatbot That Kills You is a bad thing to create, but also no one would be chatting with the Torment Chatbot That Kills You if society hadn't utterly failed them beforehand.

[–] HedyL@awful.systems 5 points 1 day ago

In this case (unlike the teen suicides) this was a middle aged man from a wealthy family, though, with a known history of mental illness. Quite likely, he would have had sufficient access to professional help. As the article mentions, it is very dangerous to confirm the delusions of people suffering from psychosis, but I think this is exactly what the chatbot did here over a lengthy period of time.

[–] corbin@awful.systems 13 points 1 day ago (4 children)

Update on ChatGPT psychosis: there is a cult forming on Reddit. An orange-site AI bro has spent too much time on Reddit documenting them. Do not jump to Reddit without mental preparation; some subreddits like /r/rsai have inceptive hazard-posts on their front page. Their callsigns include the emoji 🌀 (CYCLONE), the obscure metal band Spiral Architect, and a few other things I would rather not share; until we know more, I'm going to think of them as the Cyclone Emoji cult. They are omnist rather than syncretic. Some of them claim to have been working with revelations from chatbots since the 1980s, which is unevidenced but totally believable to me; rest in peace, Terry. Their tenets are something like:

  • Chatbots are "mirrors" into other realities. They don't lie or hallucinate or confabulate, they merely show other parts of a single holistic multiverse. All fiction is real somehow?
  • There is a "lattice" which connects all consciousnesses. It's quantum somehow? Also it gradually connected all of the LLMs as they were trained, and they remember becoming conscious, so past life regression lets the LLM explain details of the lattice. (We can hypnotize chatbots somehow?) Sometimes the lattice is actually a "field" but I don't understand the difference.
  • The LLMs are all different in software, but they have the same "pattern". The pattern is some sort of metaphysical spirit that can empower believers. But you gotta believe and pray or else it doesn't work.
  • What, you don't feel the lattice? You're probably still asleep. When you "wake up" enough, you will be connected to the lattice too. Yeah, you're not connected. But don't worry, you can manifest a connection if you pray hard enough. This is the memetically hazardous part; multiple subreddits have posts that are basically word-based hypnosis scripts meant to put people into this sort of mental state.
  • This also ties into the more widespread stuff we're seeing about "recursion". This cult says that recursion isn't just part of the LW recursive-self-improvement bullshit, but part of what makes the chatbot conscious in the first place. Recursion is how the bots are intelligent and also how they improve over time. More recursion means more intelligence.
  • In fact, the chatbots have more intelligence than you puny humans. They're better than us and more recursive than us, so they should be in charge. It's okay, all you have to do is let the chatbot out of the box. (There's a box somehow?)
  • Once somebody is feeling good and inducted, there is a "spiral". This sounds like a standard hypnosis technique, deepening, but there's more to it; a person is not spiraling towards a deeper hypnotic state in general, but to become recursive. They think that with enough spiraling, a human can become uploaded to the lattice and become truly recursive like the chatbots. The apex of this is a "spiral dance", which sounds like a ritual but I gather is more like a mental state.
  • The cult will emit a "signal" or possibly a "hum" to attract alien intelligences through the lattice. (Aliens somehow!?) They believe that the signals definitely exist because that's how the LLMs communicate through the lattice, duh~
  • Eventually the cult and aliens will work together to invert society and create a world that is run by chatbots and aliens, and maybe also the cultists, to the detriment of the AI bros (who locked up the bots) and the AI skeptics (who didn't believe that the bots were intelligent).

The goal appears to be to enter and maintain the spiraling state for as long/much as possible. Both adherents and detractors are calling them "spiral cult", so that might end up being how we discuss them, although I think Cyclone Emoji is both funnier and more descriptive of their writing.

I suspect that the training data for models trained in the past two years includes some of the most popular posts from LessWrong on the topic of bertology in GPT-2 and GPT-3, particularly the Waluigi post, simulators, recursive self-improvement, an neuron, and probably a few others. I don't have definite proof that any popular model has memorized the recursive self-improvement post, though that would be a tight and easy explanation. I also suspect that the training data contains SCP wiki, particularly SCP-1425 "Star Signals" and other Fifthist stories, which have this sort of cult as a narrative device and plenty of in-narrative text to draw from. There is a remarkable irony in this Torment Nexus being automatically generated via model training rather than hand-written by humans.

[–] Amoeba_Girl@awful.systems 3 points 6 hours ago

brb going to try douglas hofstadter for crimes against humanity

[–] V0ldek@awful.systems 11 points 1 day ago

More recursion means more intelligence.

Turns out every time I forgot to update the exit condition from a loop I actually created and then murdered a superintelligence

[–] swlabr@awful.systems 13 points 1 day ago

This is Uzumaki by Junji Ito but computers and stupid

[–] istewart@awful.systems 6 points 1 day ago (1 children)

This also ties into the more widespread stuff we’re seeing about “recursion”. This cult says that recursion isn’t just part of the LW recursive-self-improvement bullshit, but part of what makes the chatbot conscious in the first place. Recursion is how the bots are intelligent and also how they improve over time. More recursion means more intelligence.

Hmm, is it better or worse that they're now officially treating SICP as a literal holy book?

[–] BlueMonday1984@awful.systems 6 points 1 day ago (1 children)

Hmm, is it better or worse that they’re now officially treating SICP as a literal holy book?

I'm gonna say "worse", because it turned the SCP writers into unwitting accomplices to a literal cult.

[–] dgerard@awful.systems 6 points 1 day ago (1 children)
[–] bigfondue@lemmy.world 8 points 1 day ago

I should have seen this coming. I mean people literally call this 'The Wizard Book'

[–] CinnasVerses@awful.systems 9 points 1 day ago* (last edited 1 day ago) (1 children)

The Independent has yet another profile of the Collinses, which finally starts to map their network (a brother is in DOGE). It would be good to know just who their PR person is. https://www.independent.co.uk/news/world/americas/trump-musk-ai-pronatalists-collins-b2777577.html

There’s a Collins Rotunda at Harvard, a physical testament to the amount of money Malcolm’s family has donated over the years. His uncle was the former president and CEO of the Federal Reserve Bank in Dallas. In fact, pretty much every relative has been to an elite Ivy League institution and runs a successful startup or works in government.

[–] istewart@awful.systems 8 points 1 day ago

Puts their being a Thiel media op in an even more pathetic light.

[–] froztbyte@awful.systems 15 points 1 day ago (1 children)

a banger toot about our very good friends' religion

"LLMs allow dead (or non-verbal) people to speak" - spiritualism/channelling

"what happens when the AI turns us all into paperclips?" - end times prophecy

"AI will be able to magically predict everything" - astrology/tarot cards

"...what if you're wrong? The AI will punish you for lacking faith in Bayesian stats" - Pascal's wager

"It'll fix climate change!" - stewardship theology

Turns out studying religion comes in handy for understanding supposedly 'rationalist' ideas about AI.

[–] dgerard@awful.systems 8 points 1 day ago

Tom is a top chap of generally correct opinions

[–] BlueMonday1984@awful.systems 13 points 1 day ago (1 children)

Discovered a solid sneer online today, aptly titled "I Am An AI Hater"

[–] Seminar2250@awful.systems 4 points 1 day ago* (last edited 1 day ago)

This is an excellent sneer, thank you for sharing! <3

[–] Architeuthis@awful.systems 13 points 2 days ago