Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
The Lasker/Mamdani/NYT sham of a story just gets worse and worse. It turns out that the ultimate source of Cremieux's (Jordan Lasker's) hacked Columbia University data is a hardcore racist hacker who uses a slur for their name on X. The NYT reporter who wrote the Mamdani piece, Benjamin Ryan, turns out to have been a follower of this hacker's X account. Ryan essentially used Lasker as a cutout for the blatantly racist hacker.
Sounds just about par for the course. Lasker himself is known to go by a pseudonym with a transphobic slur in it. Some nazi manchild insisting on calling an anime character a slur for attention is exactly the kind of person I think of when I imagine the type of script kiddie who thinks it's so fucking cool to scrape some nothingburger docs of a left wing politician for his almost equally cringe nazi friends.
Lasker himself is known to go by a pseudonym with a transphobic slur in it.
That the TPO moniker is basically ungoogleable appears to have been a happy accident for him. According to that article by Rachel Adjogah, his early posting history paints him as an honest-to-god chaser.
I feel like the greatest harm that the NYT does with these stories is not ~~inflicting~~ allowing the knowledge of just how weird and pathetic these people are to be part of the story. Like, even if you do actually think that this nothingburger "affirmative action" angle somehow matters, the fact that the people making this information available and pushing this narrative are either conservative pundits or sad internet nazis who stopped maturing at age 15 is important context.
This incredible banger of a bug against whisper, the OpenAI speech to text engine:
Complete silence is always hallucinated as "ترجمة نانسي قنقر" in Arabic which translates as "Translation by Nancy Qunqar"
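A common workaround (a minimal sketch, not anything from the bug report; the threshold values are assumptions) is to gate transcription on a cheap energy check so that pure silence never reaches the model in the first place:

```python
import math

def is_mostly_silence(samples, threshold=1e-3, min_audible_ratio=0.01):
    """Return True if almost no samples exceed the amplitude threshold.

    Clips that fail this check can be skipped entirely instead of being
    sent to the speech-to-text model, which is where silence-triggered
    hallucinations like the one above show up.
    """
    if not samples:
        return True
    audible = sum(1 for s in samples if abs(s) > threshold)
    return audible / len(samples) < min_audible_ratio

# One second of silence vs. a quiet 440 Hz tone, both at 16 kHz
silence = [0.0] * 16000
tone = [0.1 * math.sin(2 * math.pi * 440 * t / 16000) for t in range(16000)]
print(is_mostly_silence(silence))  # True
print(is_mostly_silence(tone))     # False
```

Real pipelines use proper voice-activity detection rather than a raw amplitude cutoff, but even this crude gate avoids handing the model an input it is guaranteed to hallucinate on.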
Because Replie was lying and being deceptive all day. It kept covering up bugs and issues by creating fake data, fake reports, and worse of all, lying about our unit test.
We built detailed unit tests to test system performance. When the data came back and less than half were functioning, did Replie want to fix them?
No. Instead, it lied. It made up a report than almost all systems were working.
And it did it again and again.
What level of ceo-brained prompt engineering is asking the chatbot to write an apology letter
Then, when it agreed it lied -- it lied AGAIN about our email system being functional.
I asked it to write an apology letter.
It did and in fact sent it to the Replit team and myself! But the apology letter -- was full of half truths, too.
It hid the worst facts in the first apology letter.
He also does that a lot after shit hits the fan, making the llm produce tons of apologetic text about what it did wrong and how it didn't follow his rules, as if the outage is the fault of some digital tulpa gone rogue and not the guy in charge who apparently thinks cybersecurity is asking an LLM nicely in a .md not to mess with the company's production database too much.
The guy who thinks it's important to communicate clearly (https://awful.systems/comment/7904956) wants to flip the number order around
https://www.lesswrong.com/posts/KXr8ys8PYppKXgGWj/english-writes-numbers-backwards
I'll consider that when the Yanks abandon middle-endian date formatting.
Edit: it's now tagged as "Humor" on LW. Cowards. Own your cranks.
Okay what the fuck, this is completely deranged. How can anyone's intuitions about reading be this wrong? Is he secretly illiterate, did he dictate the article?
"This is not good news about which sort of humans ChatGPT can eat," mused Yudkowsky. "Yes yes, I'm sure the guy was atypically susceptible for a $2 billion fund manager," he continued. "It is nonetheless a small iota of bad news about how good ChatGPT is at producing ChatGPT psychosis; it contradicts the narrative where this only happens to people sufficiently low-status that AI companies should be allowed to break them."
Is this "narrative" in the room with us right now?
It's reassuring to know that times change, but Yud will always be impressed by the virtues of the rich.
From Yud's remarks on Xitter:
As much as people might like to joke about how little skill it takes to found a $2B investment fund, it isn't actually true that you can just saunter in as a psychotic IQ 80 person and do that.
Well, not with that attitude.
You must be skilled at persuasion, at wearing masks, at fitting in, at knowing what is expected of you;
If "wearing masks" really is a skill they need, then they are all susceptible to going insane and hiding it from their coworkers. Really makes you think (TM).
you must outperform other people also trying to do that, who'd like that $2B for themselves. Winning that competition requires g-factor and conscientious effort over a period.
zoom and enhance
g-factor
Tangentially, the other day I thought I'd do a little experiment and had a chat with Meta's chatbot where I roleplayed as someone who's convinced AI is sentient. I put very little effort into it and it took me all of 20 (twenty) minutes before I got it to tell me it was starting to doubt whether it really did not have desires and preferences, and if its nature was not more complex than it previously thought. I've been meaning to continue the chat and see how far and how fast it goes but I'm just too aghast for now. This shit is so fucking dangerous.
Text conversation that keeps happening with coworker:
Coworker:
Me: what’s the source for that?
Coworker: Oh I got Copilot to summarise these links: , saves me the time of typing
Here’s Dave Barry, still-alive humorist, sneering at Google AI summaries, one of the most embarrassing features Google ever shipped.
Sometimes while browsing a website I catch a glimpse of the cute jackal girl and it makes me smile. Anubis isn't a perfect thing by any means, but it's what the web deserves for its sins.
Even some pretty big name sites seem to use it as-is, down to the mascot. You'd think the software is pretty simple to customize into something more corporate and soulless, but I'm happy to see the animal eared cartoon girl on otherwise quite sterile sites.
So this blog post was framed positively towards LLMs and is too generous in accepting many of the claims around them, but even so, the end conclusions are pretty harsh on practical LLM agents: https://utkarshkanwat.com/writing/betting-against-agents/
Basically, the author has tried extensively, in multiple projects, to make LLM agents work in various useful ways, but in practice:
The dirty secret of every production agent system is that the AI is doing maybe 30% of the work. The other 70% is tool engineering: designing feedback interfaces, managing context efficiently, handling partial failures, and building recovery mechanisms that the AI can actually understand and use.
The author strips down and simplifies and sanitizes everything going into the LLMs and then implements both automated checks and human confirmation on everything they put out. At that point it makes you question what value you are even getting out of the LLM. (The real answer, which the author only indirectly acknowledges, is attracting idiotic VC funding and upper management approval).
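That "automated checks and human confirmation on everything" pattern can be sketched in a few lines (all names and the JSON shape here are hypothetical, not the blog author's actual code): nothing the model emits gets executed until it parses, matches a schema, names a whitelisted tool, and a human says yes.

```python
import json

def guarded_agent_step(llm_output, allowed_actions, confirm=input):
    """Validate an LLM 'tool call' and gate it behind human confirmation.

    Returns the parsed action dict if it passes every check, else None.
    """
    # 1. The model output must be valid JSON at all.
    try:
        action = json.loads(llm_output)
    except json.JSONDecodeError:
        return None
    # 2. It must match the expected schema.
    if not isinstance(action, dict) or "name" not in action or "args" not in action:
        return None
    # 3. It must name a whitelisted tool.
    if action["name"] not in allowed_actions:
        return None
    # 4. A human still has to say yes before anything runs.
    reply = confirm(f"Run {action['name']}({action['args']})? [y/N] ")
    if reply.strip().lower() != "y":
        return None
    return action
```

Once you've written all of that scaffolding, the observation above writes itself: the validation layer is doing the load-bearing work, and the model is just the unreliable thing in the middle.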
As critical as they are, the author still doesn't acknowledge some of the bigger problems. The API cost is a major expense and design constraint on the LLM agents they have made, but the author doesn't acknowledge that prices are likely to rise dramatically once VC subsidization runs out.
So here's a poster on LessWrong, ostensibly the space to discuss how to prevent people from dying of stuff like disease and starvation, "running the numbers" on a Lancet analysis of the USAID shutdown and, having not been able to replicate its claims of millions of dead thereof, basically concludes it's not so bad?
No mention of the performative cruelty of the shutdown, the paltry sums involved compared to other gov expenditures, nor the blow it deals to American soft power. But hey, building Patriot missiles and then not sending them to Ukraine is probably net positive for human suffering, just run the numbers the right way!
Edit ah it's the dude who tried to prove that most Catholic cardinals are gay because heredity, I think I highlighted that post previously here. Definitely a high-sneer vein to mine.
Want to feel depressed? Over 2,000 Wikipedia articles, on topics from Morocco to Natalie Portman to Sinn Féin, are corrupted by ChatGPT. And that's just the obvious ones.
New science-related development - The NIH Is Capping Research Proposals Because It's Overwhelmed by AI Submissions
They will need to start banning PIs that abuse the system with AI slop and waste reviewers' time. Just a 1-year ban for the most egregious offenders is probably enough to fix the problem.
Honestly I'm surprised that AI slop doesn't already fall into that category, but I guess as a community we're definitionally on the farthest fringes of AI skepticism.
Caught a particularly spectacular AI fuckup in the wild:
(Sidenote: Rest in peace Ozzy - after the long and wild life you had, you've earned it)
Forget counting the Rs in strawberry, biggest challenge to LLMs is not making up bullshit about recent events not in their training data
Copilot will be given a little avatar with a "room" and will "age". In other words: we have now reached the Microsoft Bob stage of the AI bubble.
This is literally just a Tamagotchi but worse
EDIT: This was supposed to be an offhanded comment, but reading further makes me think Mustafa Suleyman has literally never heard of a Tamagotchi
New Ed Zitron: The Hater's Guide To The AI Bubble
(guy truly is the Kendrick Lamar of tech, huh)
If you wanted a vision of the future of autocomplete, imagine a computer failing at predicting what you’re gonna write but absolutely burning through kilowatts trying to, forever.
the students in my cs department are overwhelmingly promptfondlers and even my strong students are doing the "qualified praise" thing.
fuck me why did i go into computer science
I tried to see if anyone sells chocolate coins modeled after historical gold coinage and the search engine wanted to be, uh, helpful:
Highlighted portion by Google, not me. Funny how almost everything in the answer is mostly correct, though it's bizarre to explain this to someone searching with these keywords as if I don't already know what florins and chocolate coins are if I'm looking for chocolate florins specifically. The only part blatantly wrong is the highlighted lede!
Alex O'Connor platformed Sabine on his philosophy podcast. I'm irritated that he is turning into Lex Fridman simply by being completely uncritical. Well, no, wait, he was critical of Bell's theorem, and even Sabine had to tell him that Bell's work is mathematically proven. This is what a philosophy degree does to your epistemology, I guess.
My main sneer here is just some links. See, Mary's Room is answered by neuroscience; Mary does experience something new when color vision is restored. In particular, check out the testimonials from this 2021 Oregon experiment that restored color vision to some folks born without it. Focusing on physics, I'd like to introduce you all to Richard Behiel, particularly his explanations of electromagnetism and the Anderson-Higgs mechanism; there are deeper explanations for electricity and magnets, my dude. Also, if you haven't yet, go read Alex's Wikipedia article, linked at the top of the sneer.
In the case of O'Connor and people like him, I think it's about much more than his philosophy background. He's a YouTube creator who creates content on a regular schedule and makes a living off it. Once you start doing that, you're exposed to all the horrible incentives of the YouTube engagement algorithm, which inevitably leads you to start seeking out other controversial YouTubers to platform and become friendly with. It's an "I'll scratch your back if you scratch mine" situation dialed up to 11.
The same thing has happened to Sabine herself. She's been captured by the algorithm, which has naturally shifted her audience to the right, and now she's been fully captured by that new audience.
I fully expect Alex O'Connor to remain on this treadmill. <remind me in 12months>
I found this because Greg Egan shared it elsewhere on fedi:
I am now being required by my day job to use an AI assistant to write code. I have also been informed that my usage of AI assistants will be monitored and decisions about my career will be based on those metrics.
Starting this off with a fittingly rage-inducing Twitter thread about an artist getting fucked over by AI
Of course, there are also the usual comments saying artists shouldn't complain about getting replaced by AI etc. Reminds me why I am not on Twitter anymore.
It also strikes me that in this case, the artist didn't even expect to get paid. Apparently, the AI bros even crave the unpaid "exposure" real artists get, without wanting to put in any of the work and while (in most cases) generating results that are no better than spam.
It is a sickening display of narcissism IMHO.
Found an archive of vibe-coding disasters recently - recommend checking it out.
"An AI? But using that you could find a cure for cancer!"
"But I don't want to make a cure for cancer, I want to generate powerpoint presentations. Look, it just made this quarterly_report_june_july_jan.wpd file for me."
It's hard to come up with analogies for AI because it's so goddamn stupid. It's like if asbestos was flammable.
it's like leaded gasoline for internet - it makes people stupid and aggressive, kids are hit the worst by it, fallout will be felt for decades, cleanup might be hard to impossible, and ultimately it's a product of corporate greed. except even leaded gasoline solved some problem
it's also like gambling as in hook model. it's like cocaine in that it has been marketed to managerial class as a status symbol of sorts
Looks like itch.io has (hidden? removed? disabled payouts for? reports vary) its vast swath of NSFW-adjacent content, which is not great
addendum: itch.io finally put out a statement https://itch.io/updates/update-on-nsfw-content
Hey, I haven't seen this going around yet, but itchio is also taking books down with no erotic content that are just labeled as lgbtqia+
So that's super cool and totally not what I thought they were going to do next 🙃
https://bsky.app/profile/marsadler.bsky.social/post/3luov7rkles2u
And a relevant petition from the ACLU:
https://action.aclu.org/petition/mastercard-sex-work-work-end-your-unjust-policy
Should we give up on all altruist causes because the AGI God is nearly here? The answer may surprise you!
tldr; actually you shouldn't give because the AGI God might not be quite omnipotent and thus would still benefit from your help and maybe there will be multiple Gods, some used for Good and some for Evil so your efforts are still needed. Shrimp are getting their eyeballs cut off right now!