[–] lvxferre@mander.xyz 17 points 1 day ago (1 children)

I'm not. You can't lose trust in something if you never trusted it to begin with.

I. Talent churn reveals short AGI timelines are a wish, not a belief

Trying to build AGI out of LLMs and the like is like trying to build a house by randomly throwing bricks. No cement, no foundation, just bricks. You might get some interesting arrangement of bricks, sure. But you won't get a house.

And yes, of course they're bullshitting with all this "AGI IS COMING!" talk. Odds are the people in charge of those companies know the above. But lying for your own benefit, when you know the truth, is called "marketing".

II. The focus on addictive products shows their moral compass is off

"They", who? Chatbots are amoral, period. Babbling about their moral alignment is like saying your hammer or chainsaw is morally bad or good. It's a tool dammit, treat it as such.

And when it comes to the businesses, their moral alignment is a simple "money good, anything standing between us and the money bad".

III. The economic engine keeping the industry alive is unsustainable

Pretty much.

Do I worry that the AI industry is a quasi-monopoly? No, I don’t understand what that means.

A quasi-monopoly, in a nutshell, is when a single entity or a group of entities has unreasonably large control over a certain industry or market, even if it isn't an "ackshyual" monopoly yet.
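If you want a number for "unreasonably large control", one common yardstick is the Herfindahl-Hirschman Index (HHI): square each firm's market share in percent and add them up. The figures below are hypothetical, purely for illustration, not real AI market shares:

```python
# Herfindahl-Hirschman Index: sum of squared market shares (in percent).
# A pure monopoly (one firm at 100%) scores 10,000; US antitrust
# guidelines have treated roughly 2,500+ as "highly concentrated".
# These shares are made up for illustration only.
shares = {"Firm A": 40, "Firm B": 30, "Firm C": 20, "Firm D": 10}

hhi = sum(s ** 2 for s in shares.values())
print(hhi)  # 1600 + 900 + 400 + 100 = 3000 -> highly concentrated
```

A quasi-monopoly is basically a market scoring deep in that red zone without any single firm literally holding 100%.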

A funny trait of the fake free-market capitalist that O’Reilly warns us about is that their values are always very elevated and pure, but only hold until the next funding round.

That's capitalism. "I luuuv freerum!" until it gets in the way of the money.

IV. They don’t know how to solve the hard problems of LLMs

Large language models (LLMs) still hallucinate. Over time, instead of treating this problem as the pain point it is, the industry has shifted to “in a way, hallucinations are a feature, you know?”

Or rather, they shifted the bullshit. They already knew it was an unsolvable problem...

...because hallucinations are simply part of an LLM doing what it's supposed to do. It doesn't understand what it's outputting; it doesn't know whether glue is a valid thing to add to a pizza, or whether humans should eat rocks. It's simply generating text based on the corpus fed into it, plus some weighting.
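To make that concrete, here's a deliberately tiny sketch: a toy bigram sampler, nothing like a real transformer, with a made-up three-sentence corpus. Everything in it (the corpus strings, the function names) is invented for illustration. It produces fluent-looking continuations purely from co-occurrence counts, with no notion of whether the output is true:

```python
import random
from collections import defaultdict

# Toy "language model": bigram counts over a tiny made-up corpus.
# It only illustrates "text from corpus statistics, plus some weighting".
corpus = (
    "add glue to the pizza sauce for extra tackiness . "
    "add cheese to the pizza sauce for extra flavor . "
    "humans should eat rocks daily , geologists say ."
).split()

# The "weighting" is just how often each word follows another.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, n: int = 10) -> str:
    """Sample a continuation word by word. There is no notion of truth
    here, only of what tended to follow what in the corpus."""
    out = [start]
    for _ in range(n):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("add"))  # may cheerfully recommend glue on the pizza
```

Scale the corpus up to trillions of tokens and swap the counts for a transformer, and the machinery gets far fancier, but the objective stays the same: likelihood, not truth. "Glue on pizza" only needs to be plausible as text.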

V. Their public messaging is chaotic and borders on manipulative

O rly.

Stopped reading here. It's stating the obvious, and still missing the point.

[–] Fandangalo@lemmy.world 2 points 1 hour ago

The moral compass bit is hilarious. Large swaths of tech haven't had morals in decades when it comes to business, if ever. See Google's "don't be evil" canary quietly disappearing. People have fully drunk the "greed is good" Kool-Aid for years.