ClamDrinker

joined 2 years ago
[–] ClamDrinker@lemmy.world 6 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

Mailroom aside, if a delivery guy is fine crossing a city of 20-30k people horizontally in traffic, I don't really see why this is such a bad thing when you break it down.

I count 35 floors, so you can cut it down to ~850 people on each floor after an elevator ride, and a building like this will probably have at least 4 elevator areas sectioning the building almost equally.

So you're down to ~210 people once you've entered the right section of the building. That's like a big street or a small neighborhood (and how far you have to walk should scale closely with that). With this many people in one area you can very easily batch deliveries. And a delivery place will probably settle quite close to such a hub of people anyway.
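A quick back-of-the-envelope check of those numbers (just a sketch, assuming the upper 30k figure and an even split across floors and elevator sections):

```python
# Rough delivery-density estimate for the tower described above.
# Assumptions: 30,000 residents (upper bound), 35 floors,
# 4 elevator sections splitting each floor roughly equally.
residents = 30_000
floors = 35
elevator_sections = 4

per_floor = residents / floors                # ~857 people per floor
per_section = per_floor / elevator_sections   # ~214 people per section

print(f"{per_floor:.0f} per floor, {per_section:.0f} per section")
```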

[–] ClamDrinker@lemmy.world 1 points 1 month ago

Not completely true. It just needs to be data that is organic enough. Good AI-generated material is fine for reinforcement, since it is still material (some) humans would be fine seeing. So it's more like: it needs to be human-approved.

[–] ClamDrinker@lemmy.world 2 points 1 month ago

There's really no good way: if you act normally they train on you, and if you act badly they train on you as an example of what to avoid.

My recommendation: make it really hard for them to guess which you are, so you hopefully end up in the wrong pile. Use slang they have a hard time pinning down, talk about controversial topics, avoid posting to places that are easily scraped, and build spaces free from bot access. Use anonymity to make yourself hard to index. Anything you post publicly can sadly be scraped, but you can make it near unusable for AI models.

[–] ClamDrinker@lemmy.world 1 points 2 months ago

You are probably confusing fine-tuning with training. You can fine-tune an existing model to produce output more in line with sample images, essentially embedding a default "style" into everything it produces afterwards (e.g. LoRAs). That can be done with such a small set of images, but it still requires the full model, which was trained on likely billions of images.
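For illustration, a minimal sketch of what that looks like in practice, assuming the Hugging Face diffusers library and a hypothetical LoRA file; the point is that the LoRA only adjusts a full pretrained base model, it never replaces it:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the full base model, which was trained on billions of images.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Apply a LoRA fine-tune: a small set of extra weights (often a few MB,
# trained on a handful of sample images) that nudges the base model
# toward a particular style. "my_style_lora" is a hypothetical file.
pipe.load_lora_weights("./my_style_lora.safetensors")

# Every generation now carries the embedded default style.
image = pipe("a portrait").images[0]
image.save("out.png")
```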

[–] ClamDrinker@lemmy.world 6 points 3 months ago (1 children)

There are always some people who mess up the form. But they also monitor the signing rate, and saw periods of higher-than-normal signing in the middle of the night in the EU, indicating someone might have run a bot to sign with invalid information. The EU only validates the signatures once the petition is closed, so they need a safe margin where, even with a significant number of invalid signatures, they still make it. Afaik 1.2 mil is about what they would expect to be safe for a petition of this size, and 1.4 mil is basically more than enough to compensate for any bad actors.
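As a rough illustration of that margin (a sketch, assuming the EU citizens' initiative threshold of 1 million valid signatures):

```python
threshold = 1_000_000   # signatures that must survive validation
collected = 1_400_000

# Fraction of collected signatures that could be invalidated
# while the petition still clears the threshold:
max_invalid = 1 - threshold / collected
print(f"{max_invalid:.0%}")  # ~29% could be invalid and it still passes
```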

[–] ClamDrinker@lemmy.world 9 points 3 months ago (3 children)

Ross explained it in his last video: there are reasons to be skeptical and unsure whether it's truly secured until at least 1.4 mil signatures. And more signatures are never bad. So both need more attention. If it reaches people in the EU, it will also reach those in the UK.

[–] ClamDrinker@lemmy.world 4 points 3 months ago* (last edited 3 months ago)

This so very much. I've been saying it since 2020. People who think the big corporations (even the ones that use AI) haven't been playing both sides of this issue from the very beginning just aren't paying attention.

It's in their interest to have those positive about AI defend them by association, by energizing those negative about AI into an "us vs. them" mentality, and the other way around as well. It's classic divide and conquer.

Because if people refuse to talk to each other about it in good faith, refuse to treat each other with respect, and refuse to learn where the other side is coming from or why they hold their opinions, you can keep them fighting amongst themselves instead of banding together and demanding realistic, fair policies regarding AI. This is why bad faith arguments and positions must be shot down both on the side you agree with and on the one you disagree with.

[–] ClamDrinker@lemmy.world 1 points 3 months ago

A court will decide such cases. Most AI models aren't trained for the purpose of whitewashing content, even if some people imply that's all they do. But if you actually trained a model for that explicit purpose, you would most likely not get away with it if someone dragged you in front of a court.

It's a similar defense to the one some file hosting websites used against charges of hosting and distributing copyrighted content (e.g. MEGA). But in such cases it was very clear what their real goals were (especially in court), and at the same time it did not kill all file sharing websites, because not all of them were built with the intention of distributing illegal material under the guise of legitimate operation.

[–] ClamDrinker@lemmy.world 4 points 4 months ago* (last edited 4 months ago) (1 children)

Just a small correction in case you didn't know: your answer shows as 432*1 because Lemmy formats text wrapped in * as italic, so it thinks you want to italicize the 3. You meant to write 4*3*2*1, which has to be entered as 4\*3\*2\*1. The \ is an escape character that tells Lemmy not to treat the * as a formatting character.

[–] ClamDrinker@lemmy.world 4 points 4 months ago (1 children)

Can I add a 4.? The integrated video downloader actually downloads videos, in whatever format you want, with no internet connection required to watch them. This to me is still the biggest scam 'feature' of Youtube Premium. You can 'download' videos, but not as e.g. an mp4; you get an encrypted file only playable inside the Youtube app, and only if you've connected to the internet in the last couple of days can you play it.

That's not downloading, that's just jacking my disk space to avoid buffering the video from Youtube's servers. That's not a feature, that's me paying for Youtube's benefit.

I cancelled and haven't paid for Premium in years because of it. When someone scams me out of features I paid for, I don't reward that shit until they either stop lying in their feature list, or actually start delivering.

[–] ClamDrinker@lemmy.world 4 points 5 months ago* (last edited 5 months ago)

It really depends. There are some good uses, but it requires careful consideration and an understanding of what the technology can actually provide. And if there isn't anything for your use case, it's just not what you should use.

Most if not all of the bigger companies that push it don't really try to use it for those purposes, but instead treat it as the next big thing that nobody quite understands, building mostly on hype. Smaller companies and open source initiatives, however, do try to make the good uses more accessible and less objectionable.

There are plenty of cases where people do nifty things with positive outcomes: researchers using it for pattern recognition, scambait chatbots, creative projects that make use of the characteristics that set AI apart from human creations, etc.

I like to keep an open mind as to what people come up with, rather than dismissing something outright just because AI is involved. Although hailing something as an AI product is a red flag for me if that's all that's advertised.

[–] ClamDrinker@lemmy.world 8 points 5 months ago* (last edited 5 months ago)

It also very much depends on your country, food authority, and retailer. Some food authorities have stricter categories for very perishable foods where, unless they have gone very bad, you can't see they're no longer suitable for consumption, e.g. meat and vegetables. And while the producer has an incentive to encourage waste, the retailer has an incentive to reduce it, as you typically can't sell items to consumers once they're past their date (again, depending on your location). If a retailer has to throw out an item unreasonably often, that has consequences for the deals made between the retailer and the producer, which pushes the producer not to be too inaccurate either.
