this post was submitted on 04 Aug 2025
53 points (83.5% liked)

Fuck AI

3626 readers

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago

I wonder why leftists are generally hostile toward AI. I am not saying this is wrong or right, I just would like someone to list/summarize the reasons.

top 40 comments
[–] SoftestSapphic@lemmy.world 8 points 6 hours ago* (last edited 6 hours ago)

It's bad for the environment: in just a few years its energy demand has exploded, with AI data centers consuming an ever-growing share of global electricity.

It's bad for society, automating labor without guaranteeing human needs is really really fucked up and basically kills unlucky people for no good reason.

It's bad for productivity: it's confidently wrong just as often as it is right, the quality of the work is always subpar, and it always requires a real person to babysit it.

It's bad for human development. We created a machine we can ask anything so we never have to think, but the machine is dumber than anyone using it so it just makes us all brain dead.

It's plateaued and not getting better. The tech cannot get better than it is now unless we create a totally different algorithmic approach and start from scratch again.

It's an artificial hype bubble that distracts us from real solutions to real problems in the world.

[–] nognom@lemmy.ml 2 points 4 hours ago

https://lemmy.world/post/33939597 https://austinpost.com/business/2025/07/17/elon-musk-austin-data-center-planned That's two in TX, and TX was already running out of water and has been in a drought for decades.

[–] Jumi@lemmy.world 10 points 7 hours ago (1 children)

It's too energy hungry, it steals art and is giving artists a rough time. It's the pinnacle of dehumanising hypercapitalism.

[–] Landless2029@lemmy.world 3 points 6 hours ago* (last edited 1 hour ago)

On top of that, every fucking company is trying to find ways to replace people with some BS AI solution, causing layoffs or just less hiring.

Just look at Audible replacing human voice actors with AI voices. The backlash there was visceral. Then there's all the companies using AI for customer support. Although it might do a better job than some humans there, it still adds a few more steps before you can talk to a human with a pulse.

[–] dukeofdummies@lemmy.world 1 points 5 hours ago

Lots of reasons.

It's yet another thing that is only going to benefit corporations. Because they get unsleeping workers that don't get sick or talk back or strike. They get to charge us for the benefits.

It's built entirely off of everyone else's work and content.

The servers that house the AI are draining watersheds and power grids, and in the case of Elon Musk and Tennessee, literally poisoning people.

It sounds way too much like a cult. Promising the sun and moon on essentially a chatbot.

It looks way too much like a bubble; our stock market is currently going up largely because of Nvidia.

AI taking over is literally one of the main plot points of Sci fi.

It's infuriatingly blameless. If the AI therapist says "try meth," you can basically only sue the massive faceless organization that built it, spinning your wheels for possibly nothing.

Driving people manic by feeding into delusions

It's a godsend to scammers, trolls, propaganda, sexual harassment, people in college just for the paper degree at the end, teachers who can't be bothered to write, upper and middle management who can't be bothered to write. Lots of just terrible people.

And for every new protein and material discovered because of it (some of the most useful and least destructive uses of it), the results are immediately gobbled up by patents for a corporation that now has even more control of the universe and everything useful in it. And no human can decide to just open it to the world like vaccines.

[–] Duamerthrax@lemmy.world 15 points 10 hours ago

It doesn't solve any problem I care about. In fact, it only worsens the ones I do, like climate change and wealth inequality.

[–] DarkFuture@lemmy.world 3 points 7 hours ago

I mean I see the benefits, but I also know many humans and most Americans are profoundly stupid, and AI is just going to exacerbate that.

With that in mind, I think it would be best for AI simply not to exist.

[–] vane@lemmy.world 8 points 10 hours ago

Because a small number of people benefit from collective creation. Billionaire-owned companies profit from stealing and then selling the stolen goods back to people, telling them it's a thinking computer.

What if I went to your house, took pictures of everything including your face, and sold those pictures to a porn company? Would you be OK with that?

[–] RecallMadness@lemmy.nz 4 points 9 hours ago

Not mentioned so far:

  • the cost of creating a market-competitive AI is high, restricting who can enter the market and effectively funneling control of what the magic answer box says to a select few.

I will not be surprised when they collectively stop mentioning some facts, then slowly start to deny them, then start to gaslight you into thinking you made it up.

[–] morphballganon@mtgzone.com 18 points 13 hours ago

Modern LLMs, incorrectly labeled as "AI," are just the modern version of spell-check.

You know how often people create totally embarrassing mistakes and blame spell-check?

"AI" is another one of those.

And it also requires tons of water that could be going to people's homes.

[–] queermunist@lemmy.ml 24 points 14 hours ago

I'm against the massive, wasteful data centers that are destroying all climate targets and driving up water/electricity prices in communities. Their current trajectory is putting us on a collision course with civilization collapse.

If the slop could be generated without these negative externalities I don't know if I'd be against it. China has actually made huge strides in reducing the power and water footprint of training and usage, so there's maybe some hope that the slop machines won't destroy the world. I'm not optimistic, though.

This seems like a dead-end technology.

[–] alexc@lemmy.world 25 points 14 hours ago

I see two reasons. Most people who are “left leaning” value both critical thinking and social fairness. AI subverts both of those traits. First, by definition it bypasses the “figure it out” stage of learning. Second, it ignores long-established laws like copyright to train its models, and its implementation sees people lose their jobs.

More formally, it’s probably one of the purest forms of capitalism. It’s essentially a slave laborer, with no rights or ability to complain, that further concentrates wealth with the wealthy.

[–] Lexam@lemmy.world 56 points 18 hours ago

AI removes critical thinking for you.

[–] baggachipz@sh.itjust.works 41 points 17 hours ago (1 children)

Yes, I’m left-leaning, and I dislike what’s currently called “ai” for a lot of the left-leaning (rational) reasons already listed. But I’m a programmer by trade, and the real reason I hate it is that it’s bullshit and a huge scam vehicle. It makes NFTs look like a carnival game. This is the most insane bubble I’ve seen in my 48 years on the planet. It’s worse than the subprime mortgage, “dot bomb”, and crypto scams combined.

It is, at best, a quasi-useful tool for writing code (though the time it has saved me is mostly offset by the time it’s been wrong and fucked up what I was doing). And this scam will eventually (probably soon) collapse and destroy our economy, and all the normies will be like “how could anybody have known!?” I can see the train coming, and CEOs, politicians, average people, and the entire press insist on partying on the tracks.

[–] IAmNorRealTakeYourMeds@lemmy.world 14 points 16 hours ago (1 children)

When Copilot came out, it was nothing more than an extremely fancy autocomplete.

That was peak: I'd still write the logic and algorithms and the important bits, it just saved time by quickly writing the line when it got it right. It all went downhill from there.

[–] Landless2029@lemmy.world 3 points 5 hours ago (1 children)

I prefer using LLMs for tech-debt stuff like starting a README and doing comments.

I do the real brain work and the end product looks nicer.

Pseudocode (baseline comments), real code, dev/test, LLM to add more words after. Smack it when it touches my code.
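A minimal sketch of that workflow, assuming Python (the `moving_average` function is a hypothetical example, not something from this thread): the baseline comments come first, the human writes and tests the real code, and only the wordy documentation pass is left to the LLM.

```python
# Workflow sketch: 1) pseudocode as comments, 2) human writes the real code,
# 3) dev/test by hand, 4) only then let the LLM expand comments/README text.

def moving_average(values, window):
    """Simple moving average of `values` over a fixed-size window."""
    # validate: the window must fit inside the data
    if window < 1 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    # slide the window one step at a time and average each slice
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# dev/test step: verify against a hand-computed case before any LLM pass
assert moving_average([1, 2, 3, 4], 2) == [1.5, 2.5, 3.5]
```

The LLM never touches the logic; at most it fattens up the docstring afterwards.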

[–] IAmNorRealTakeYourMeds@lemmy.world 3 points 5 hours ago (1 children)

LLMs can be a great assistant. it's like having an intern doing the tedious work while you get to just approve and manage it.

but letting it run the show is like letting the intern manage the whole development unsupervised.

[–] Landless2029@lemmy.world 3 points 5 hours ago (1 children)

Like the AI that was given access to prod, deleted a database and lied about it. The company also didn't have a backup.

Source: Replit's CEO apologizes after its AI agent wiped a company's code base in a test run and lied about it

Those are the fuckups that become public. There will be a lot more major fuckups that never do.

[–] TomMasz@piefed.social 16 points 15 hours ago
  • Runs roughshod over copyright
  • Enables mass layoffs
  • Depresses salaries
  • Ruins everything
  • Degrades critical thinking
[–] 9tr6gyp3@lemmy.world 119 points 20 hours ago (1 children)

It steals from the copyright holders in order to make corporate AI money without giving back to the creators.

It uses insane amounts of water and energy to function, with demand not being throttled by these companies.

It gives misleading, misquoted, misinformed, and sometimes just flat out wrong information, but abuses its very confidence-inspiring language skills to pass it off as the correct answer. You HAVE to double check all its work.

And if you think about it, it doesn't actually want to lick a lollipop, even if it says it does. It's not sentient. I repeat, it's not alive. The current design is a tool at best.

[–] 33550336@lemmy.world 16 points 20 hours ago

Thank you, for the sake of completeness, I'd add something like this: https://time.com/6247678/openai-chatgpt-kenya-workers/

[–] 20cello@lemmy.world 68 points 20 hours ago

Because they're obviously a tool for the rich to get more control over our lives

[–] lIlIllIlIIIllIlIlII@lemmy.zip 1 points 8 hours ago

For me the issue with LLMs is people not knowing what they are and using them wrong.

[–] matelt@feddit.uk 23 points 17 hours ago

Personally I think the environmental impact and the sycophantic responses that take away the need for one to exercise their brain are my 2 biggest gripes.

It was a fun novelty at first. I remember my first question to ChatGPT was 'how to make hamster ice cream', and I was genuinely surprised that it gave me some frozen fruit recipe along with a plea not to harm hamsters by turning them into ice cream.

Then it got out of hand very quickly, it got added onto absolutely everything, despite the hallucinations and false facts. The intellectual property issue is also of concern.

[–] HakFoo@lemmy.sdf.org 11 points 15 hours ago

It's being shoved at us.

Most new tech starts with a narrow legit use case or an enthusiast culture and gradually works toward a breakout moment where everyone wants it. Think of cars in 1900 vs 1925, or home computers in 1976 vs 1999. Also note plenty of new tech fails to go mainstream no matter how much effort went into it. 3D TVs, turbine locomotives, non-photovoltaic solar: they tried but didn't really make it.

Capital has decided AI will be the next thing and they want it now, so they refuse to let the process run. They can't wait for a product that solves real faults with the current designs (inefficiency, hallucinations) or does something people actually want (nobody asked for extra fingers) before stuffing it in everything.

[–] BillDaCatt@lemmy.world 47 points 20 hours ago* (last edited 20 hours ago) (1 children)

Can't speak for anyone else, but here are a few reasons I avoid AI:

  • AI server farms consume a stupid amount of energy. Computers need energy, I get it, but AI's need for energy is ridiculous.

  • Most implementations of AI seem to happen with little to no input from the people who will interact with it, and often despite their objections.

  • The push for implementing AI seems to be based on the idea that companies might be able to replace some of their workforce, compounded with the fear of being left behind if they don't do it now.

  • The primary goal of any AI system seems to be collecting information about end users and creating a detailed profile. This information can then be bought and sold without the consent of the person being profiled.

  • Right now, these systems are really bad at what they do. I am happy to wait until most of those bugs are worked out.

To be clear, I absolutely want a robot assistant, but I do not want someone else to be in control of what it can or cannot do. If I am using it and giving it my trust, there cannot be any third parties trying to monetize that trust.

[–] 33550336@lemmy.world 11 points 20 hours ago

Well, I personally also avoid using AI. I just don't trust the results, and I think using it makes you mentally lazy (besides the other bad things).

[–] ninjabard@lemmy.world 33 points 20 hours ago* (last edited 20 hours ago) (1 children)

It's generative and LLM AI that is the issue.

It makes garbage facsimiles of human work, and the only thing CEOs can see is spending less money so they can hoard more of it. It also puts pressure on resource usage, like water and electricity, whether for cooling the massive data centers or simply from the power draw needed to compute whatever prompt.

The other main issue is that it is theft, plain and simple. Artists, actors, voice actors, musicians, creators, etc. are at risk of having their jobs stolen by a greedy company that only wants to pay for a thing once or not at all. You can get hired once to read or be photographed/videoed, and then that data can be used to train a digital replacement without your consent. That was one of the driving forces behind the last big actors' union protests.

For me, it's also the lack of critical thinking skills using things like ChatGPT fosters. The thought that one doesn't have to put any effort into writing an email, an essay, or even researching something when you can simply type in a prompt and it spits out mainly incorrect information. Even simple information. I had an AI summary tell me that 440Hz was a higher pitch than 446Hz. I wasn't even searching for that information. So, it wasted energy and my time giving demonstrably wrong data I had no need for.
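The Hz mix-up is easy to check: pitch rises with frequency, so a one-line comparison settles it (a trivial sketch of my own, not anything the AI summary produced).

```python
# Pitch rises with frequency: more cycles per second means a higher pitch.
a4 = 440.0       # standard concert pitch A4, in Hz
detuned = 446.0  # the other frequency from the summary, in Hz

# 446 Hz is the higher pitch, contrary to what the AI summary claimed
assert detuned > a4
```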

[–] 33550336@lemmy.world 5 points 20 hours ago

Thank you. Well, personally I do not use ChatGPT and this is one of the reasons why I asked humans this question :)

[–] paulbg@programming.dev 21 points 20 hours ago

bc those who own AI are against left-leaners' principles.

[–] Krauerking@lemy.lol 4 points 14 hours ago

I like society, i like people, and i like them getting to be craftspeople of all kinds, and relying on their expertise to make my own life more interesting by proxy while offering my own skills to the pot.

[–] brucethemoose@lemmy.world 5 points 16 hours ago

Counterpoint: are right leaners “pro AI”?

I feel like there’s a big distinction between tech bros and, say, MAGA diehards.

…And again, this is a very artificial polarization; talk to any ML researcher or tinkerer and they will hate the guts of Sam Altman or Elon Musk.

[–] ZDL@lazysoci.al 11 points 20 hours ago (1 children)

Whose definition of "leftist"?

The Communist Party of China is throwing a whole lot of money at AI in its various forms integrating it into all kinds of services. They're not against it. They're against the willy-nilly shoving of it into everything without any thought given to its negatives. Is that leftist enough for you?

Or do you mean the faux-left of the USA (which is, at its most extreme end, a moderate centre-left in sane parts of the world)? If that's the "leftists" you mean, I'd guess it's largely based on (focusing here on LLMbeciles and other popular degenerative AI forms):

  1. The environmental disaster (in terms of energy used to train and operate, as well as the water costs) that the rapacious capitalists cause with their grossly inefficient LLMbecile implementations.
  2. Most artists, being educated more than the average, tend to lean left and the main applications of degenerative AI is aimed straight at them.
  3. The mass theft of human culture from around the world to feed the machines, only to have them churn out shit writing, shit pictures, shit music, shit videos, etc.
  4. The very obvious fact that the technology cannot ever actually be profitable; it's clearly a pump and dump stock scheme that's sucking money away from actually productive elements of society to line the pockets of grifting billionaires.
  5. The general pairing of LLMbeciles and other degenerative AI forms with techbrodude "consent, what's that?" bullshit. It gets crammed into places nobody wants it, there's no plausible way to turn it off (Microsoft...), there's no way to be sure it has been turned off when you try (techbrodudes are prone to being lying sacks of shit).

In general, "leftists" don't like degenerative AI because they're on the whole better-educated than "rightists" and thus know shit like the points I brought up ... among dozens of others. And they're smart enough to know that if crony-capitalists are pushing it as the greatest thing since sliced bread it's probably bad for humanity.

Or do you genuinely believe people like Musk, or Thiel, or Bezos, or Zuckerberg, or ... are interested in humanity?

[–] 33550336@lemmy.world 1 points 20 hours ago (1 children)

Thank you for the explanation. By leftists I meant the ones I meet most frequently, that is, the Lemmy community :)

[–] ZDL@lazysoci.al -5 points 17 hours ago (1 children)

See to me, Lemmy at its most left barely qualifies as left wing from what I see.

I've seen a lot of people cosplaying as leftists here, but prick an American leftist and they typically bleed a fascist.

[–] HappyFrog@lemmy.blahaj.zone 2 points 12 hours ago (1 children)

What would you consider 'left' then?

[–] ZDL@lazysoci.al 1 points 6 hours ago

People actually doing something instead of raging on social media to "raise awareness". People actually engaging the supposed beneficiaries of their (usually middle-class white) largesse. People less interested in labelling and finger-pointing and more interested in getting out there, getting their hands dirty, and helping.

[–] fodor@lemmy.zip 6 points 20 hours ago

Is your premise even correct? I don't have any data indicating that leftists are anti-AI. Do you?

Also, I'm not sure what you mean by anti-AI. Pointing out that it is snake oil is a factual claim. In other words, if AI is a label that's mostly devoid of meaning, then attacking it is actually attacking the sleazy opportunistic salespeople, and not necessarily the underlying tech (when it is clearly defined, which is rare). Of course one could oppose both.

Which is to say, many leftists are anti-billionaire, and many billionaires are riding the AI bubble. But you already know that, so probably that's not what you're asking.