this post was submitted on 03 Aug 2025
420 points (86.5% liked)

Fuck AI

3635 readers
1381 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago

Source (Bluesky)

[–] vivalapivo@lemmy.today 17 points 1 day ago

First of all, intellectual property rights do not protect the author. I'm the author of a few papers and a book and I do not have intellectual property rights on any of these - like most of the authors I had to give them to the publishing house.

Secondly, your personal carbon footprint is bullshit.

Thirdly, everyone in the picture is an asshole.

[–] Hadriscus@jlai.lu 1 points 21 hours ago

Honestly I have nothing to add

[–] burgerpocalyse@lemmy.world 3 points 1 day ago

this post is no man's land

[–] khaleer@sopuli.xyz 9 points 1 day ago* (last edited 1 day ago)

I would not want to get close to a bike repaired by someone who is using AI to do it. Like what the fuck xd I am not surprised he is unable to make his code work then xddd

[–] anarchiddy@lemmy.dbzer0.com 11 points 1 day ago

I sure am glad that we learned our lesson from the marketing campaigns in the 90's that pushed consumers to recycle their plastic single-use products to deflect attention away from the harm caused by their ubiquitous use in manufacturing.

Fuck those AI users for screwing over small creators and burning down the planet though. I see no problem with this framing.

[–] BradleyUffner@lemmy.world 6 points 1 day ago (7 children)

The only real exception I can think of would be an AI trained ENTIRELY on your own personally created material. No sources from other people AT ALL. Used purely for personal use, not used by or available to the public.

[–] pjwestin@lemmy.world 2 points 1 day ago

I think the public domain would be fair game as well, and the fact that AI companies don't limit themselves to those works really gives away the game. An LLM that can write in the style of Shakespeare or Dickens is impressive, but people will pay for an LLM that will write their White Lotus fan fiction for them.

[–] ZMoney@lemmy.world 5 points 1 day ago (3 children)

So I'll be honest. I use GPT to write Python scripts for my research. I'm not a coder and I don't want to be one, but I do need to model data sometimes and I find it incredibly useful that I can tell it something in English and it can write modeling scripts in Python. It's also a great way to learn some coding basics. So please tell me why this is bad and what I should do instead.
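(For context, the kind of data-modeling script being described might look like the minimal sketch below: fitting a polynomial model to measured points with NumPy. The data and coefficients here are purely illustrative, not from the commenter's research.)

```python
import numpy as np

# Illustrative data: 50 samples from a known quadratic, y = 3x^2 + 2x + 1
x = np.linspace(0, 10, 50)
y = 3.0 * x**2 + 2.0 * x + 1.0

# Fit a degree-2 polynomial; coefficients are returned highest power first
coeffs = np.polyfit(x, y, deg=2)
print(coeffs)  # recovers approximately [3.0, 2.0, 1.0]
```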

[–] Tartas1995@discuss.tchncs.de 2 points 1 day ago

I think sometimes it is good to replace words to reevaluate a situation.

Would "I don't want to be one" be a good argument for using ai image generation?

[–] person420@lemmynsfw.com 6 points 1 day ago

Didn't you read the post? You're bad and should feel bad.

[–] DegenerateSupreme@lemmy.zip 3 points 1 day ago

I'd say the main ethical concern at this time, regardless of harmless use cases, is the abysmal environmental impact necessary to power centralized, commercial AI models. Refer to situations like the one in Texas. A person's use of models like ChatGPT, however small, contributes to the demand for this architecture that requires incomprehensible amounts of water, while much of the world does not have enough. In classic fashion, the U.S. government is years behind on accepting what's wrong, allowing these companies to ruin communities behind a veil of hyped-up marketing about "innovation" and beating China at another dick-measuring contest.

The other concern is that ChatGPT's ability to write your Python code for data modeling is built on the hard work of programmers who will not see a cent for their contribution to the model's training. As the adage goes, "AI allows wealth to access talent, while preventing talent from accessing wealth." But since a ridiculous amount of data goes into these models, it's an amorphous ethical issue that's understandably difficult for us to contend with, because our brains struggle to comprehend so many levels of abstraction. How harmed is each individual programmer or artist? That approach ends up being meaningless, so you have to regard it more as a class-action lawsuit, where tens of thousands have been deprived as a whole.

By my measure, this AI bubble will collapse like a dying star in the next year, because the companies have no path to profitability. I hope that shifts AI development away from these environmentally destructive practices, and eventually we'll see legislation requiring model training to be ethically sourced (Adobe is already getting ahead of the curve on this).

As for what you can do instead, people have been running local Deepseek R1 models since earlier this year, so you could follow a guide to set one up.

[–] gmtom@lemmy.world 44 points 2 days ago (16 children)

I work at a company that uses AI to detect respiratory illnesses in X-rays and MRI scans weeks or months before a human doctor could.

This work has already saved thousands of people's lives.

But good to know you anti-AI people have your 1 dimensional, 0 nuance take on the subject and are now doing moral purity tests on it and dick measuring to see who has the loudest, most extreme hatred for AI.

[–] starman2112@sh.itjust.works 27 points 2 days ago* (last edited 2 days ago) (15 children)

Nobody has a problem with this, it's generative AI that's demonic

[–] HalfSalesman@lemmy.world 11 points 1 day ago* (last edited 1 day ago) (3 children)

Generative AI uses the same technology. It learns when trained on a large data set.

[–] kartoffelsaft@programming.dev 111 points 2 days ago (12 children)

I believe AI is going to be a net negative to society for the foreseeable future. AI art is a blight on artistry as a concept, and LLMs are shunting us further into a search-engine-overfit, post-truth world.

But also:

Reading the OOP has made me a little angry. You can see the echo chamber forming right before your eyes. Either you see things the way OOP does with no nuance, or you stop following them and are left following AI hype-bros who'll accept you instead. It's disgustingly twitter-brained. It's a bullshit purity test that only serves your comfort over actually trying to convince anyone of anything.

Consider someone who has had some small but valued usage of AI (as a reverse dictionary, for example), but generally considers things like energy usage and intellectual property rights to be serious issues we have to face for AI to truly be a net good. What does that person hear when they read this post? "That time you used ChatGPT to recall the word 'verisimilar' makes you an evil person." is what they hear. And at that moment you've cut that person off from ever actually considering your opinion ever again. Even if you're right that's not healthy.

[–] kopasz7@sh.itjust.works 84 points 2 days ago (42 children)

My issues with gen AI are fundamentally twofold:

  1. Who owns and controls it (billionaires and entrenched corporations)

  2. How it is shoehorned into everything (decision making processes, human-to-human communication, my coffee machine)

I cannot wait until the check is finally due and the AI bubble pops, toppling these digital snake-oil sellers' house of cards.
