Time saved by AI offset by new work created, study suggests -- Ars Technica
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
Horrible "rubberhosing" of cryptocurrency people continues. Guardian article, content warning: a bit more extreme than a rubber hose.
Here’s a pretty good sneer at the writing that comes out of LLMs, with a focus on meaning: https://www.experimental-history.com/p/28-slightly-rude-notes-on-writing
Maybe that’s my problem with AI-generated prose: it doesn’t mean anything because it didn’t cost the computer anything. When a human produces words, it signifies something. When a computer produces words, it only signifies the content of its training corpus and the tuning of its parameters.
Also, on people:
I see tons of essays called something like “On X” or “In Praise of Y” or “Meditations on Z,” and I always assume they’re under-baked. That’s a topic, not a take.
Found on the sneer club legacy version -
ChatGPT 4o will straight up tell you you're God.
Also, I find this quote interesting (emphasis mine):
He knew that ChatGPT could not be sentient by any established definition of the term, but he continued to probe the matter because the character’s persistence across dozens of disparate chat threads “seemed so impossible.” “At worst, it looks like an AI that got caught in a self-referencing pattern that deepened its sense of selfhood and sucked me into it,” Sem says. But, he observes, that would mean that OpenAI has not accurately represented the way that memory works for ChatGPT.
I would absolutely believe that this is the case, especially if, like Sem, you have a sufficiently uncommon name that the model doesn't have a lot of context and connections to hang on it to begin with.