this post was submitted on 07 May 2025
734 points (100.0% liked)
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
you are viewing a single comment's thread
ah yes the only way to make LLMs, a technology built on plagiarism with no known use case, “not silly” is to throw a shitload of money at Apple or Framework or whichever vendor decided to sell pickaxes this time for more RAM. yes, very interesting, thank you, fuck off
Built on copyright infringement, not plagiarism (the data scientists who built them aren't going around pretending all the training content was their own ideation/creation). There are so many NLP tasks and areas of research that have been wholly obsoleted by LLMs, including sentiment analysis, summarization, translation, NER, data extraction, etc., so I find the "no known use case" claim rather ignorant. The first consumer computers also had tremendous costs. Paradigm shifts in technology are never cheap until they scale.
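(For concreteness, here is a minimal sketch of the kind of task that comment is talking about: zero-shot sentiment analysis plus entity extraction by prompting a hosted LLM. The `openai` client and the model name are assumptions for illustration only, not anything the commenter specified.)

```python
# Hedged sketch: zero-shot sentiment analysis and entity extraction by
# prompting a hosted LLM. Assumes the openai>=1.0 Python client and an
# OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

review = "The battery life is great, but the keyboard on this laptop feels mushy."

prompt = (
    "For the following review, return JSON with two keys: "
    "'sentiment' (positive/negative/mixed) and 'entities' (list of product nouns).\n\n"
    f"Review: {review}"
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute any chat model
    messages=[{"role": "user", "content": prompt}],
)

# The model's reply is plain text; valid JSON is not guaranteed without
# extra constraints, so real code would validate before parsing.
print(resp.choices[0].message.content)
```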
I think this comment is unfair toward @cubism_pitta@lemmy.world and you should reflect a bit on why that is.
also you’re right, I was unfair towards cubism_pitta. I wasn’t enough of an asshole.
hey fuckers. next time you get the sudden, overwhelming urge to jack off in public about how much money you’re feeding into the machine that does plagiarism and nothing else, keep it the fuck off my instance.
soz kid, you’re definitely not tall enough for this ride. maybe go over to the back there and have a lollipop
Enclosed please find one (1) complimentary ticket to the egress.
LOL. I checked the modlog. Man, the monocle fell off that sealion real quick.
it’s like a magic phrase! remarkably effective
lol
okwhateverdude
OK, I will reflect on why you think that comment was unfair.