this post was submitted on 07 May 2025
734 points (100.0% liked)

TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] deedan06_@lemmy.dbzer0.com 1 points 2 days ago (2 children)

AI bro here. The reason their shit ain't selling is that it's useless for any actual AI application. AI runs on GPUs; even an "AI CPU" will be much slower than what an Nvidia GPU can do. Of course no one buys it. Nvidia's GPUs still sell very well, and not just because of the gamers.

[–] self@awful.systems 22 points 2 days ago

ah yes the only way to make LLMs, a technology built on plagiarism with no known use case, “useful for any actual ai application” is to throw a shitload of money at nvidia. weird how that works!

[–] cubism_pitta@lemmy.world -5 points 2 days ago* (last edited 2 days ago) (1 children)

A lot of these systems are silly because they don't have a lot of RAM, and things don't begin to get interesting with LLMs until you can run 70B models and above

The Mac Studio has seemed like an affordable way to run 200B+ models, mainly due to its unified memory architecture (compare speccing 512GB of unified RAM in a Mac Studio to building a machine with enough GPU VRAM to get there)

If you look around, the industry in general is starting to move toward that sort of design now

https://frame.work/desktop

The Framework Desktop, for instance, can be configured with 128GB of RAM (~$2k) and should be good for handling 70B models while maintaining something that looks like efficiency.
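
(Back-of-envelope sketch of why those numbers line up; the 4-bit quantization and ~20% overhead factor here are my own ballpark assumptions, not vendor specs:)

```python
# Rough estimate of the RAM needed just to hold an LLM's weights locally.
# Assumes 4-bit quantized weights (common for local inference) plus a
# ~20% ballpark overhead for KV cache and runtime buffers.

def model_ram_gb(params_billions: float, bits: int = 4, overhead: float = 1.2) -> float:
    bytes_per_param = bits / 8
    return params_billions * bytes_per_param * overhead  # billions of bytes ~= GB

for size in (70, 200, 405):
    print(f"{size}B @ 4-bit: ~{model_ram_gb(size):.0f} GB")

# 70B  -> ~42 GB   (fits in a 128GB Framework Desktop with headroom)
# 200B -> ~120 GB  (tight on 128GB; comfortable on a 512GB Mac Studio)
# 405B -> ~243 GB  (firmly 512GB-unified-memory territory)
```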

You will not train or fine-tune models with these setups (I think you would still want the raw power GPUs offer for that), but the main sticking point in running local models has been VRAM and how much it costs to get it from AMD / Nvidia.

That said, I only care about all of this because I mess around with a lot of RAG things. I am not a typical consumer.

[–] self@awful.systems 12 points 2 days ago (1 children)

ah yes the only way to make LLMs, a technology built on plagiarism with no known use case, “not silly” is to throw a shitload of money at Apple or framework or whichever vendor decided to sell pickaxes this time for more RAM. yes, very interesting, thank you, fuck off