this post was submitted on 29 Jul 2025
288 points (86.2% liked)

Asklemmy

49785 readers
523 users here now

A loosely moderated place to ask open-ended questions

If your post meets the following criteria, it's welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not regarding usage of or support for Lemmy: for context, see the list of support communities and tools for finding communities below
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion

Looking for support?

Looking for a community?

Icon by @Double_A@discuss.tchncs.de

founded 6 years ago

In my opinion, AI just feels like the logical next step for capitalist exploitation and destruction of culture. Generative AI is (in most cases) just a fancy way for corporations to steal art at a scale that hasn't been possible before. And then they use AI to fill the internet with slop and misinformation, and actual artists are getting fired from their jobs because the company replaces them with an AI that was trained on their original art. Because of these reasons and some others, it just feels wrong to me to be using AI in such a manner, when this community should be about inclusion and kindness. Wouldn't it be much cooler if we commissioned an actual artist for the banner or found a nice existing artwork (where the licence fits, of course)? I would love to hear your thoughts!

[–] sanguinepar@lemmy.world 62 points 1 week ago (2 children)

That doesn't change that real artists who made real art will have had their work used without permission or payment to help generate the banner. I'm with OP.

[–] jsomae@lemmy.ml 9 points 1 week ago (2 children)

If I drew something myself, those artists would also not be paid. I can understand a deontological argument against using AI trained on people's art, but for me, the utilitarian argument is much stronger -- don't use AI if it puts an artist out of work.

[–] BennyTheExplorer@lemmy.world 33 points 1 week ago (3 children)

It's not about anyone getting paid, it's about affording basic respect and empathy to people and their work. Using AI sends a certain message of "I don't care about your consent or opinion towards me using your art", and I don't think that this is a good thing for anyone.

[–] RaivoKulli@sopuli.xyz 2 points 6 days ago (1 children)

I mean how many of us are pirating stuff

[–] Evotech@lemmy.world -1 points 6 days ago* (last edited 6 days ago) (1 children)

Thank you, you can’t both love piracy (which Lemmy overwhelmingly does) and hate AI

[–] dil@lemmy.zip 1 points 5 days ago* (last edited 5 days ago) (1 children)

plenty of examples where piracy harms no one: devs get paid no matter what, people working on and making shows like South Park have 5-year deals, many devs get fired right after a game gets released and don't benefit if it does well. Indie games I never pirate; I use the 2-hour Steam refund window instead to see if I want them

ai on the other hand lol, actively takes away jobs

[–] Evotech@lemmy.world 1 points 5 days ago (2 children)

There would be no job designing a lemmy banner

[–] dil@lemmy.zip 1 points 5 days ago

I'm glad I don't think like you, that'd be a confusing time

[–] dil@lemmy.zip 1 points 5 days ago

It's sad that you think that is what I was arguing

[–] jsomae@lemmy.ml 6 points 1 week ago (1 children)

Well yeah, I don't care about IP rights. Nothing has been materially stolen, and if AI improves, then the result could some day in theory be indistinguishable from a human who was merely "inspired" by an existing piece of art. At the end of the day, the artist is not harmed by AI plagiarism; the artist is harmed by AI taking what could have been their job.

[–] sanguinepar@lemmy.world 5 points 6 days ago (1 children)
[–] jsomae@lemmy.ml 5 points 6 days ago (1 children)
[–] patatas@sh.itjust.works -5 points 6 days ago (1 children)

By systems positing human creativity as a computational exercise

[–] jsomae@lemmy.ml 3 points 6 days ago (1 children)

the human brain follows the laws of physics; it therefore follows that human creativity is already computational.

[–] patatas@sh.itjust.works -3 points 6 days ago (2 children)

Three problems with this:

  1. If computation means "anything that happens in the universe" then the term 'computation' is redundant and meaningless.
  2. We do not know or understand all of the physical laws of the universe, or if those laws indeed hold universally.
  3. Our consciousness does not operate at the level of atomic physics; see Daniel Dennett's 'compatibilism' defense of free will vs Robert Sapolsky's determinism. If we're vulgar materialists, then it follows that there is no free will, and thus no reason to advocate for societal change.
[–] jsomae@lemmy.ml 4 points 6 days ago* (last edited 6 days ago) (1 children)
  1. Your argument shouldn't rest on the desire for the word 'computation' to be less redundant. (I don't really think there's a meaningful difference between computation and physics; we just generally use the term computation to refer to physical processes which result in useful information.) But why don't we define computation as "anything that can be done on a conventional computer (with sufficient time and memory)" -- i.e. Turing-computable.
  2. It is not relevant that we may not know all the physical laws of the universe; what matters only is whether there are laws or not. A scientist cannot cause free will to disappear from the universe simply by learning new facts about the laws of physics. (I would argue that if this were apparently true, then there was no free will to begin with.)
  3. My understanding of compatibilism is that free will and determinism are compatible; in other words, the laws of physics can give rise to free will (consciousness, as you put it). I think there are some additional twists in compatibilism I don't entirely understand, but that's the gist as far as I have seen. In any case, compatibilism seems to me to be compatible with the idea that one can simulate a human brain; since the simulation and the original would produce the same result, then if one has free will, the other must have free will too. (Simulating it multiple times will always result in the same thing, which therefore means that it's the same conscious experience -- the same free will -- each time, and not different instances of free will. In other words, consciousness is fungible with respect to simulation.) Simulation = computation, so therefore human creativity is computable.

Please note that I'm not arguing that current AIs actually are on the level of human creativity, just that there's no law against that eventually being possible.

[–] patatas@sh.itjust.works 0 points 6 days ago (1 children)

The fact that we do not know or understand all the laws of physics (and again, if these are even indeed universal!) means that we cannot be certain about equating computation and physics - assuming we define computation as deterministic, as you seem to be doing here.

Can you 'simulate' a human brain? Sure, easy, all you have to do is just build a human brain out of DNA and proteins and lipids and water and hormones etc, and put it in an exact replica of a human body built from that same stuff.

We have no evidence that consciousness can be separated from the material body that gives rise to it!

And even if we try to abstract that away and say "let's just model the entire physical brain & body digitally": that brain & body is not an island; it's constantly interacting with the entirety of the rest of the physical world.

So, you want to 'simulate' a brain with ones and zeroes? You'll need to simulate the entire universe too. That's likely to be difficult, unless you have an extra universe worth of material to build that computational device with.

[–] jsomae@lemmy.ml 1 points 6 days ago* (last edited 6 days ago) (1 children)

Okay, I agree that the universe may not be Turing-computable, since we don't know the laws of physics. Indeed, it almost certainly isn't, since Turing machines are discrete and the universe is continuous -- there are integrals, for instance, that have no closed-form, but are physically present in our universe. However, I have no particularly good reason to believe that infinite precision is actually necessary in order to accurately simulate the human brain, since we can get arbitrarily close to an exact simulation of, say, Newtonian physics, or quantum physics minus gravity, using existing computers -- by "arbitrarily close," I mean that for any desired threshold of error, there exists some discretization constant for which the simulation will remain within that error threshold.
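The "arbitrarily close" claim above is just first-order numerical convergence. As a minimal sketch (a hypothetical example, not anything from the thread), explicit Euler integration of Newtonian free fall shows the simulation error shrinking with the discretization constant:

```python
# Hypothetical illustration: explicit (first-order) Euler integration of
# Newtonian free fall, compared against the exact closed-form solution.
# For any error threshold there is a step size dt small enough to meet it.

def simulate_fall(t_end: float, dt: float, g: float = 9.81) -> float:
    """Position of a body falling from rest after t_end seconds, via Euler steps."""
    pos, vel = 0.0, 0.0
    for _ in range(round(t_end / dt)):
        pos += vel * dt  # update position with the *current* velocity
        vel += g * dt    # then update velocity
    return pos

exact = 0.5 * 9.81 * 2.0 ** 2  # closed form: g * t^2 / 2 after 2 seconds
errors = [abs(simulate_fall(2.0, dt) - exact) for dt in (0.1, 0.01, 0.001)]
# For this first-order method the global error is ~g * t * dt / 2,
# so each tenfold refinement of dt shrinks the error by roughly tenfold.
print(errors)
```

Each tenfold refinement of `dt` cuts the error by about a factor of ten, so for any desired error threshold there is a step size that stays within it.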

Sure, maybe there are more laws of the universe we don't know and those turn out to be necessary for the human brain to work. But it seems quite unlikely, as we already have a working reductionist model of the brain -- it seems like we understand how all the component parts, like neurons and such, work, and we can even model how complex assemblages of neurons can compute interesting things. Like we've trained actual rat neurons to play Doom for some ungodly reason, and they obey according to how our models predict. Yeah, maybe there's some critical missing law of physics, but the current model we have seems sufficient so far as we can tell in order to model the brain.

constantly interacting with the rest of the physical world

I feel like the rest of the world shouldn't actually matter for the purposes of free will. I mean, yes, obviously our free will responds to the environment. But if the environment disappeared, our free will shouldn't disappear along with it. In other words, the free will should be either entirely located in the mind, or if you're not a compatibilist/materialist, it's located in the mind plus some other metaphysical component. So, I don't agree that it requires simulating the whole universe in order to simulate a free will (though I do agree that you can't simulate an actual mind in the real world unless you can simulate all its inputs, e.g. placing the mind in some kind of completely walled-off sensory deprivation environment that has within-epsilon-of-zero interaction with the outside world. Obviously, it's not very practical, but for a thought experiment about free will I don't think this detail really matters.)

[–] patatas@sh.itjust.works 0 points 6 days ago (1 children)

So would you agree that people should be locked up for crimes that a sufficiently advanced AI system predicts they will commit?

Or would you agree that these systems cannot calculate human behaviour?

[–] jsomae@lemmy.ml 1 points 6 days ago* (last edited 6 days ago)

Hahaha, I didn't expect that.

I saw Minority Report, and I think it has a plot hole. If you can see the future then you can change it, meaning that if there is any way to relay information from the oracle to the person who would commit the crime, then that could change whether or not the person will commit the crime.

[–] sukhmel@programming.dev 3 points 6 days ago (1 children)

If we're vulgar materialists, then it follows that there is no free will, and thus no reason to advocate for societal change.

No free will doesn't imply no change. Lifeless systems evolve over time; take rock formation as an example: it was all cosmic dust at some point. So no, even if we do accept that there is no free will, that shouldn't mean perfect stasis

[–] patatas@sh.itjust.works -1 points 6 days ago

I never said that no change would occur. I said there was no reason to advocate for it if there is no free will.

[–] GaMEChld@lemmy.world -2 points 1 week ago

If I saw the artwork myself and it inspired my artwork, would it be any different? Everything is based on everything.

[–] sanguinepar@lemmy.world 9 points 1 week ago (1 children)

Yeah, but if you drew it yourself then they wouldn't expect to be paid. Unless you plagiarised them to the degree that would trigger a copyright claim, they would (at worst) just see it as a job that they could have had, but didn't. Nothing of theirs was directly used, and at least something original of theirs was created. Whereas AI images are wholly based on other work and include no original ideas at all.

[–] jsomae@lemmy.ml 9 points 1 week ago (1 children)

You're posting on lemmy.ml; we don't care much for intellectual property rights here. What we care about is that the working class not be deprived of their ability to make a living.

[–] sanguinepar@lemmy.world 2 points 1 week ago (1 children)

Agree with that. I don't think the two are mutually exclusive though?

[–] jsomae@lemmy.ml 3 points 6 days ago

I agree that they are not mutually exclusive, which is why I usually side against AI. On this particular occasion however, there's a palpable difference, since no artist is materially harmed.

[–] rumba@lemmy.zip 2 points 1 week ago (1 children)

Real artists use uncited reference art all the time. That person that drew a picture of Catherine the Great for a video game certainly didn't list the artist of the source art they were looking at when they drew it. No royalties went to that source artist. People stopped buying reference art books for the most part when Google image search became a thing.

Ah hell, a lot of professional graphic artists right now use AI for inspiration.

This isn't to say that the problem isn't real and that a lot of artists stand to lose their livelihood over it, but nobody's paying someone to draw a banner for this forum. The best you're going to get is some artist doing it out of the goodness of their heart when they could be spending their time and effort on a paying job.

[–] sanguinepar@lemmy.world 15 points 6 days ago* (last edited 6 days ago) (2 children)

Real artists may be influenced, but they still put something of themselves into what they make. AI only borrows from others, it creates nothing.

I realise no-one is paying someone to make a banner for this forum, it would need to be someone choosing to do it because they want there to be a banner. But the real artists whose work was used by the AI to make the banner had no choice in the matter, let alone any chance of recompense.

[–] FauxLiving@lemmy.world -3 points 6 days ago (2 children)

AI only borrows from others, it creates nothing.

This isn't an argument, it's pseudophilosophical nonsense.

But the real artists whose work was used by the AI to make the banner had no choice in the matter, let alone any chance of recompense.

In order to make such a statement you must:

  1. Know what model was used and;
  2. Know that it was trained on unlicensed work.

So, what model did the OP use?

I mean, unless you're just ignorantly suggesting that all diffusion models are trained on unlicensed work. Something that is demonstrably untrue: https://helpx.adobe.com/firefly/get-set-up/learn-the-basics/adobe-firefly-faq.html

Your arguments haven't been true since the earliest days of diffusion models. AI training techniques are at the point where anybody with a few thousand images, a graphics card and a free weekend can train a high-quality diffusion model.

It's simply ignorance to suggest that any generated image is using other artists' work.

[–] BennyTheExplorer@lemmy.world 10 points 6 days ago (1 children)

Nope, you can't train a good diffusion model from scratch with just a few thousand images; that is just delusion (I am open to examples though). Adobe Firefly is a black box, so we can't verify their claims; obviously they wouldn't admit it if they broke copyright to train their models. We do however have strong evidence that Google, OpenAI and Stability AI used tons of images which they had no licence to use. Also, I still doubt that all of the people who sold on Adobe Stock either knew what their photos were going to be used for or explicitly wanted that; more likely they just had to accept it to be able to sell their work.

Great counterargument to my first argument by the way πŸ‘

[–] Bytemite@lemmy.world 1 points 6 days ago

Firefly was found to use suspect training data too, though... It's the best of them in that it actually makes an effort to ethically source its training data, but almost no one uses it because Adobe's professional suite is expensive as hell.

https://martech.org/legal-risks-loom-for-firefly-users-after-adobes-ai-image-tool-training-exposed/

[–] rumba@lemmy.zip -5 points 6 days ago (3 children)

So what's the solution for this board? Should they just put up a black image? Should they start a crowdfunding campaign to pay an artist?

If it really bothers an artist enough, they could make a banner for the board and ask the mods to swap out the AI one. But they'll have to make something that more people like than the AI.

[–] petrol_sniff_king@lemmy.blahaj.zone 9 points 6 days ago (1 children)

Considering AI is really unlikeable, I don't think that'll be too hard.

[–] rumba@lemmy.zip 1 points 6 days ago

Proof is when it happens.

[–] patatas@sh.itjust.works 9 points 6 days ago

The banner could be anything or nothing at all, and as long as it isn't AI generated, I would like it better

[–] supersquirrel@sopuli.xyz 5 points 6 days ago (1 children)

But, they’ll have to make something that more people like than the AI.

No, it does not have to be better than the AI image to be preferable.

[–] rumba@lemmy.zip -1 points 6 days ago

Okay, we have your vote; now think about the other people who are also here. It needs to be preferable to the majority, not just you.