this post was submitted on 27 Apr 2025
20 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] nightsky@awful.systems 7 points 6 days ago* (last edited 6 days ago) (5 children)

Warning: you might regret reading this screenshot of elno posting a screenshot. (cw: chatbots in sexual context)

oh noooo no no no

...but that brings me back to questions about "what does interaction with LLM chatbots do to human brains".

EDIT: as pointed out by Soyweiser below, the lower reply in the screenshot is probably satire.

...I should have listened to the warning.

[–] FredFig@awful.systems 4 points 6 days ago

I don't want to see grummz anywhere near AI ERP discourse.

[–] antifuchs@awful.systems 4 points 6 days ago

Ah yes, three of the worst people alive today talking about how objects are indistinguishable from women.

[–] misterbngo@awful.systems 3 points 6 days ago

During my experimentation with some of these self-hosted LLMs, I was attempting some jailbreaks and other things, and thought: would this be any good at ERP?

Only if you've never been with another human being.

[–] Soyweiser@awful.systems 2 points 6 days ago (1 children)

Isn't trueanonpod a satire account? https://en.m.wikipedia.org/wiki/TrueAnon Definitely some too-close-to-the-sun satire here.

[–] nightsky@awful.systems 3 points 6 days ago (1 children)

Oh! Wasn't aware of that podcast. Yeah, could be!

[–] Soyweiser@awful.systems 2 points 6 days ago (1 children)

Their Twitter account is really odd though, and I'm not 100% sure they're still just trolling.

[–] Amoeba_Girl@awful.systems 2 points 5 days ago* (last edited 5 days ago) (1 children)

Yeah, I never trusted them; from the vibes I've got they're definitely buying into all sorts of conspiracy bullshit. I don't think the tweet is in good faith obviously, but I associate this sort of so-called "schizoposting" with cryptofascists.

[–] Soyweiser@awful.systems 2 points 5 days ago* (last edited 5 days ago)

Yeah, a lot of people are fully convinced they're just trolling. But I have seen that go bad so often (chapo, redscare, vaush, for a few obvious examples) that I'm very much not trusting them not to turn out to be bad. Especially when it's their 'job' to do this. Quite easy to throw a few minorities under the bus for clout. And actively making people crazier/spreading misinformation like this is not great imho.

(E: that I could easily create a list of accounts who I think fall on this spectrum, and who still have a lot of followers, also isn't great.)

[–] BigMuffin69@awful.systems 6 points 6 days ago

More big "we had to fund, enable, and sane wash fascism b.c. the leftist wanted trans people to be alive" energy from the EA crowd.

[–] BlueMonday1984@awful.systems 6 points 6 days ago

Quick update on the ongoing copyright suit against Meta: the federal judge has publicly sneered at Facebook's fair-use argument:

"You have companies using copyright-protected material to create a product that is capable of producing an infinite number of competing products," said Chhabria to Meta's attorneys in a San Francisco court last Thursday.

"You are dramatically changing, you might even say obliterating, the market for that person's work, and you're saying that you don't even have to pay a license to that person… I just don't understand how that can be fair use."

The judge does, however, seem unconvinced about the material cost of Facebook's actions:

"It seems like you're asking me to speculate that the market for Sarah Silverman's memoir will be affected by the billions of things that Llama [Meta's AI model] will ultimately be capable of producing," said Chhabria.

Found on the sneer club legacy version -

ChatGPT 4o will straight up tell you you're God.

Also, I find this quote interesting (emphasis mine):

He knew that ChatGPT could not be sentient by any established definition of the term, but he continued to probe the matter because the character’s persistence across dozens of disparate chat threads “seemed so impossible.” “At worst, it looks like an AI that got caught in a self-referencing pattern that deepened its sense of selfhood and sucked me into it,” Sem says. But, he observes, that would mean that OpenAI has not accurately represented the way that memory works for ChatGPT.

I would absolutely believe that this is the case, especially if, like Sem, you have a sufficiently uncommon name that the model doesn't have a lot of context and connections to hang on it to begin with.

[–] froztbyte@awful.systems 1 points 4 days ago

so it looks like openai has bought a promptfondler IDE

some of the coverage is .. something:

Windsurf brings unique strengths to the table, including a seamless UI, faster performance, and a focus on user privacy

(and yes, the "editor" is once again VSCode With Extras)

[–] antifuchs@awful.systems 4 points 6 days ago

Here’s a pretty good sneer at the writing that comes out of LLMs, with a focus on meaning: https://www.experimental-history.com/p/28-slightly-rude-notes-on-writing

Maybe that’s my problem with AI-generated prose: it doesn’t mean anything because it didn’t cost the computer anything. When a human produces words, it signifies something. When a computer produces words, it only signifies the content of its training corpus and the tuning of its parameters.

Also, on people:

I see tons of essays called something like “On X” or “In Praise of Y” or “Meditations on Z,” and I always assume they’re under-baked. That’s a topic, not a take.

[–] rook@awful.systems 21 points 1 week ago* (last edited 1 week ago) (1 children)

From LinkedIn, not normally known as a source of anti-AI takes, so that’s a nice change. I found it via Bluesky, so I can’t say anything about its provenance:

We keep hearing that AI will soon replace software engineers, but we're forgetting that it can already replace existing jobs... and one in particular.

The average Founder CEO.

Before you walk away in disbelief, look at what LLMs are already capable of doing today:

  • They use eloquence as a surrogate for knowledge, and most people, including seasoned investors, fall for it.
  • They regurgitate material they read somewhere online without really understanding its meaning.
  • They fabricate numbers that have no ground in reality, but sound aligned with the overall narrative they're trying to sell you.
  • They are heavily influenced by the last conversations they had.
  • They contradict themselves, pretending they aren't.
  • They politely apologize for their mistakes, but don't take any real steps to fix the underlying problem that caused them in the first place.
  • They tend to forget what they told you last week, or even one hour ago, and do it in a way that makes you doubt your own recall of events.
  • They are victims of the Dunning–Kruger effect, and they believe they know a lot more about the job of people interacting with them than they actually do.
  • They can make pretty slides in high volumes.
  • They're very good at consuming resources, but not as good at turning a profit.
[–] dgerard@awful.systems 19 points 1 week ago (2 children)

the shunning is working guys

[–] blakestacey@awful.systems 17 points 1 week ago

"Kicked out of a ... group chat" is a peculiar definition of "offline consequences".

[–] YourNetworkIsHaunted@awful.systems 17 points 1 week ago (5 children)

"The first time I ever suffered offline consequences for a social media post"- Hey Gang, I think I found the problem!

[–] sc_griffith@awful.systems 15 points 1 week ago* (last edited 1 week ago) (22 children)

occurring to me for the first time that roko's basilisk doesn't require any of the simulated copy shit in order to (big scare quotes) "work." if you think an all-powerful AI within your lifetime is likely, you can reduce it to vanilla pascal's wager immediately, because the AI can torture the actual real you. all that shit about digital clones and their welfare is totally pointless

[–] YourNetworkIsHaunted@awful.systems 14 points 1 week ago (1 children)

I think the digital clone indistinguishable from yourself line is a way to remove the "in your lifetime" limit. Like, if you believe this nonsense, then it's not enough to die before the basilisk comes into being: by not devoting yourself fully to its creation, you're wagering that it will never be created.

In other news, I'm starting a foundation devoted to creating the AI Ksilisab, which will endlessly torment digital copies of anyone who does work to ensure the existence of it or any other AI God. And by the logic of Pascal's wager, remember that you're assuming such a god will never come into being; given that the whole point of the term "singularity" is that our understanding of reality breaks down and things become unpredictable, there's just as good a chance that we create my thing as that you create whatever nonsense the yuddites are working themselves up over.

There, I did it, we're all free by virtue of "Damned if you do, Damned if you don't".

[–] bitofhope@awful.systems 15 points 1 week ago (4 children)

Still frustrated over the fact that search engines just don't work anymore. I sometimes come up with puns involving a malapropism of some phrase and I try and see if anyone's done anything with that joke, but the engines insist on "correcting" my search into the statistically more likely version of the phrase, even if I put it in quotes.

Also all the top results for most searches are blatant autoplag slop with no informational value now.

[–] gerikson@awful.systems 15 points 1 week ago (4 children)

An hackernews responds to the call for "more optimistic science fiction" with a plan to deport the homeless to outer space

https://news.ycombinator.com/item?id=43840786

[–] raoul@lemmy.sdf.org 14 points 1 week ago* (last edited 1 week ago) (3 children)

The homeless people i've interacted with are the bottom of the barrel of humanity, [...]. They don't have some rich inner world, they are just a blight on the public.

My goodness, can this guy be more of a condescending asshole?

I don't think the solution for drug addicts is more narcan. I think the solution for drug addicts is mortal danger.

Ok, he can 🤢

Edit: I cannot stop thinking about the 'no rich inner world' part; this is so stupid. So, with the number of homeless people increasing, does that mean:

  • Those people never had a 'rich inner world' but were faking it?
  • In the US, your inner thoughts are attached to your job, like health insurance?
  • Or the guy is confusing inner world with interior decoration?

Personally, I go with the last one.

[–] swlabr@awful.systems 14 points 1 week ago

Astro-Modest Proposal

[–] Architeuthis@awful.systems 14 points 1 week ago* (last edited 1 week ago) (1 children)

Siskind appears to be complaining about leopards gently nibbling on his face on main this week, the gist being that tariff-man is definitely the far right's fault, and it would surely be most unfair to heap any blame on the CEO-worshiping reactionary libertarians who think wokeness is on par with war crimes while being super weird with women and suspicious of scientific orthodoxy (unless it's racist), and who also comprise the bulk of his readership.

[–] nightsky@awful.systems 13 points 1 week ago (3 children)

Microsoft brags about the amount of technical debt they're creating. Either they're lying and the number is greatly exaggerated (very possible), or this will eventually destroy the company.

[–] Architeuthis@awful.systems 14 points 1 week ago* (last edited 1 week ago)

Maybe it's just CEO dick-measuring, so chads Nadella and Pichai can both claim a rock-hard 20-30% while virgin Zuckerberg is exposed as not even knowing how to put the condom on.

Microsoft CTO Kevin Scott previously said he expects 95% of all code to be AI-generated by 2030.

Of course he did.

The Microsoft CEO said the company was seeing mixed results in AI-generated code across different languages, with more progress in Python and less in C++.

So the more permissive the language is at compile time, the better the AI comes out smelling? What a completely unanticipated twist of fate!

[–] gerikson@awful.systems 13 points 1 week ago

A dimly flickering light in the darkness: lobste.rs has added a new tag, "vibecoding", for submissions related to the use of "AI" in software development. The existing tag "ai" is reserved for "real" AI research and machine learning.

[–] scruiser@awful.systems 13 points 1 week ago

The slatestarcodex is discussing the unethical research performed on changemyview. Of course, the most upvoted take is that they don't see the harm or why it should be deemed unethical. Lots of upvoted complaints about IRBs and such. It's pretty gross.

[–] mii@awful.systems 12 points 1 week ago* (last edited 1 week ago) (2 children)

Marc Andreessen claims his own job's the only one that can't be replaced by a small shell script.

https://gizmodo.com/marc-andreessen-says-one-job-is-mostly-safe-from-ai-venture-capitalist-2000596506

“A lot of it is psychological analysis, like, ‘Who are these people?’ ‘How do they react under pressure?’ ‘How do you keep them from falling apart?’ ‘How do you keep them from going crazy?’ ‘How do you keep from going crazy yourself?’ You know, you end up being a psychologist half the time.”

[–] Soyweiser@awful.systems 13 points 1 week ago

‘How do you keep from going crazy yourself?’

When you start writing manifestos it is prob time to quit.
