this post was submitted on 29 Jul 2025
291 points (86.3% liked)
Asklemmy
Right now, anti-AI rhetoric echoes the same unprincipled arguments the Luddites made when attacking machinery. They correctly identified a technology linked to their proletarianization, and thus a major source of their new misery, but the technology was not at fault. Capitalism was.
What generative AI is doing is making art less artisanal. Independent artists are under attack and are being proletarianized. However, that does not mean AI itself is bad. Copyright, for example, is also bad, yet artists depend on it. The same backlash now aimed at AI was once aimed at the camera for making things like portraits and still lifes more accessible, but today we consider photography just another tool.
The real problems with AI are its massive energy consumption, its over-application in areas where it actively harms production and usefulness, and its application under capitalism where artists are being punished while corporations are flourishing.
In this case, there's no profit to be had. People do not need to hire artists to make a banner for a niche online community. Hell, it could even have been made using green energy. These are not the instances that make AI harmful in capitalist society.
Correctly analyzing how technologies are used, how they can serve our interests versus the interests of capital, and which use-cases are legitimate versus illegitimate is how we can succeed and learn from our predecessors' mistakes. Correctly identifying something linked to deteriorating conditions while misanalyzing how the two are related leads to incorrect conclusions, as when the Luddites initially attacked machinery rather than organizing against the capitalists.
Hand-created art as a medium of human expression will not go away; AI can't replace that. What AI can do is make it easier to create images that don't need to serve that purpose, like niche online forum banners or quick visual illustrations of a concept. Not all images need to be created in artisanal fashion, just as we don't need to hand-draw scenes from real life when a photo would do. Neither photos nor AI can replace art. Not to mention, there is an art to photography as well: each human use of any medium to express the human experience can be artisanal.
The Luddites weren't simply "attacking machinery," though; they were attacking the specific machinery owned by the specific people who were exploiting them and changing their relations of production.
And due to the scale of these projects and the amount of existing work required to build them, there are no non-exploitative GenAI systems.
That hasn't been true for years now.
AI training techniques have rapidly improved to the point where they allow people to train completely new diffusion models from scratch with a few thousand images on consumer hardware.
In addition, thanks to these training advancements, some commercial providers have trained larger models on artwork specifically licensed for training generative models; Adobe Firefly, for example.
It isn't the case, and hasn't been for years, that you can simply say that any generative work is built on """stolen""" work.
Unless you know what model the person used, it's just ignorance to accuse them of using "exploitative" generative AI.
You are probably confusing fine-tuning with training. You can fine-tune an existing model to produce output more in line with sample images, essentially embedding a default "style" into everything it produces afterwards (e.g. LoRAs). That can be done with such a small image set, but it still requires the full base model, which was likely trained on billions of images.
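To make the distinction concrete, here is a minimal numerical sketch of the LoRA idea: the full pretrained weights stay frozen, and fine-tuning only trains a small low-rank delta on top of them. This is an illustration of the technique, not any library's actual API; the dimensions and variable names are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, rank = 64, 64, 4          # rank is much smaller than d_out, d_in

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weights (never updated)
B = np.zeros((d_out, rank))            # trainable LoRA factor, initialized to zero
A = rng.normal(size=(rank, d_in))      # trainable LoRA factor

def adapted_forward(x, scale=1.0):
    """Forward pass: the low-rank delta B @ A is added to the frozen W."""
    return (W + scale * (B @ A)) @ x

x = rng.normal(size=d_in)

# Before any fine-tuning, B is all zeros, so the adapter is a no-op:
assert np.allclose(adapted_forward(x), W @ x)

# The adapter is a tiny fraction of the full weight matrix:
full_params = W.size                   # 64 * 64 = 4096
lora_params = B.size + A.size          # 64*4 + 4*64 = 512
print(lora_params / full_params)       # 0.125
```

The point the comment makes falls out of the parameter counts: only `B` and `A` are trained against the small image set, while `W`, the part that took billions of images to produce, is carried along unchanged.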