This post was submitted on 22 Jun 2025
777 points (94.4% liked)


> We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
>
> Then retrain on that.
>
> Far too much garbage in any foundation model trained on uncorrected data.

Source.

More Context

Source.

Source.

[–] brucethemoose@lemmy.world 61 points 1 day ago* (last edited 1 day ago)

I elaborated below, but basically Musk has no idea WTF he’s talking about.

If I had his “f you” money, I’d at least try a diffusion or bitnet model (and open the weights for others to improve on), and probably 100 other papers I consider low hanging fruit, before this absolutely dumb boomer take.

He’s such an idiot know-it-all. It’s so painful whenever he ventures into a field you sorta know.

But he might just be shouting nonsense on Twitter while X employees actually do something different. Because if they take his orders verbatim they’re going to get crap models, even with all the stupid brute force they have.

[–] FireWire400@lemmy.world 27 points 1 day ago* (last edited 1 day ago)

How high on ketamine is he?

> 3.5 (maybe we should call it 4)

I think calling it 3.5 might already be too optimistic

[–] Antaeus@lemmy.world 28 points 1 day ago (2 children)

Elon should seriously see a medical professional.

[–] D_C@lemm.ee 10 points 1 day ago (1 children)

I'll have you know he's seeing a medical professional at least once a day. Sometimes multiple times!!!!

(On an absolutely and completely unrelated note, ketamine dealers are medical professionals, yeah?)

[–] JackbyDev@programming.dev 37 points 1 day ago (2 children)

Training an AI model on AI output? Isn't that like the one big no-no?

[–] breecher@sh.itjust.works 24 points 1 day ago (1 children)

We have seen from his many other comments about this that he just wants a propaganda bot that regurgitates all of the right wing talking points. So that will definitely be easier to achieve if he does it that way.

[–] Schadrach@lemmy.sdf.org 7 points 1 day ago

> that he just wants a propaganda bot that regurgitates all of the right wing talking points.

Then he has utterly failed with Grok. One of my new favorite pastimes is watching right wingers get angry that Grok won't support their most obviously counterfactual bullshit and then proceed to try to argue it into saying something they can declare a win from.

[–] Deflated0ne@lemmy.world 52 points 1 day ago

Dude is gonna spend Manhattan Project-level money making another stupid fucking shitbot. Trained on regurgitated AI slop.

Glorious.

[–] MehBlah@lemmy.world 9 points 1 day ago

I'm just seeing how it bakes in the lies.

[–] leftthegroup@lemmings.world 6 points 1 day ago

Instead, he should improve humanity by dying. So much easier and everyone would be so much happier.

[–] TeddE@lemmy.world 9 points 1 day ago

Unironically Orwellian

remember when grok called e*on and t**mp a nazi? good times

[–] Lumidaub@feddit.org 85 points 2 days ago* (last edited 2 days ago) (2 children)

> adding missing information

Did you mean: hallucinate on purpose?

Wasn't he going to lay off the ketamine for a while?

Edit: ... I hadn't seen the More Context and now I need a fucking beer or twenty fffffffffu-

[–] Carmakazi@lemmy.world 41 points 2 days ago (1 children)

He means rewrite every narrative to his liking, like the benevolent god-sage he thinks he is.

[–] Saleh@feddit.org 18 points 1 day ago

We have never been at war with Eurasia. We have always been at war with Eastasia.

[–] kamen@lemmy.world 12 points 1 day ago

Don't feed the trolls.

[–] StonerCowboy@lemm.ee 8 points 1 day ago
[–] squaresinger@lemmy.world 16 points 1 day ago

First error to correct:

> We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing ~~information~~ errors and deleting ~~errors~~ information.

[–] oliver@lemmy.1984.network 23 points 1 day ago

So "deleting errors" means rewriting history, further fuckin' up facts, and definitely sowing hatred and misinformation. Just call it like it is: techbro's new reality. 🖕🏻

[–] bieren@lemmy.zip 9 points 1 day ago

Not sure if it has been said already. Fuck Musk.

iamverysmart

> advanced reasoning

If it's so advanced, it should be able to reason out that all human knowledge is standing on the shoulders of others, and how errors have prompted us to explore other areas and learn things we never would have otherwise.

[–] Flukas88@feddit.it 12 points 1 day ago

Just when you think he can't be more of a wanker with an amoeba brain... he surprises you.

[–] Elgenzay@lemmy.ml 22 points 1 day ago (5 children)

Aren't you not supposed to train LLMs on LLM-generated content?

Also, he should call it Grok 5: so powerful that it skips over 4. That would be very characteristic of him.

[–] Voroxpete@sh.itjust.works 19 points 1 day ago* (last edited 1 day ago) (1 children)

There are, as I understand it, ways that you can train on AI generated material without inviting model collapse, but that's more to do with distilling the output of a model. What Musk is describing is absolutely wholesale confabulation being fed back into the next generation of their model, which would be very bad. It's also a total pipe dream. Getting an AI to rewrite something like the total training data set to your exact requirements, and verifying that it had done so satisfactorily would be an absolutely monumental undertaking. The compute time alone would be staggering and the human labour (to check the output) many times higher than that.

But the whiny little piss baby is mad that his own AI keeps fact checking him, and his engineers have already explained that coding it to lie doesn't really work because the training data tends to outweigh the initial prompt, so this is the best theory he can come up with for how he can "fix" his AI expressing reality's well known liberal bias.
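
For what it's worth, here is a minimal sketch of the distinction drawn in the comment above: knowledge distillation trains a student model against the teacher's *soft* output distribution, rather than sampling text from the teacher and re-ingesting it as if it were fresh ground truth. Everything in the snippet (toy PyTorch models, sizes, the temperature value, the random token batch) is an illustrative assumption, not anything xAI has described.

```python
import torch
import torch.nn.functional as F

vocab, d_teacher, d_student = 1000, 256, 64

# Toy "teacher" and "student" models (embedding + linear head);
# purely illustrative stand-ins for real language models.
teacher = torch.nn.Sequential(torch.nn.Embedding(vocab, d_teacher),
                              torch.nn.Linear(d_teacher, vocab))
student = torch.nn.Sequential(torch.nn.Embedding(vocab, d_student),
                              torch.nn.Linear(d_student, vocab))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # softening temperature (assumed value)

tokens = torch.randint(0, vocab, (32,))  # stand-in for a batch of corpus tokens

with torch.no_grad():
    teacher_logits = teacher(tokens)  # teacher's full output distribution

# Distillation: the student is trained to match the teacher's soft
# distribution (keeping its uncertainty), instead of sampling hard tokens
# from the teacher and treating those samples as new training text --
# the latter is the feedback loop that risks model collapse.
student_logits = student(tokens)
loss = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                F.softmax(teacher_logits / T, dim=-1),
                reduction="batchmean") * (T * T)

opt.zero_grad()
loss.backward()
opt.step()
```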
