this post was submitted on 05 Aug 2025
812 points (99.5% liked)

Technology

[–] L3s@lemmy.world 0 points 3 weeks ago

Too many people being rude to each other, locking it. Let's be better.

[–] 9point6@lemmy.world 114 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

Oh for fuck's sake...

I'd not considered this was happening (people submitting AI wiki articles)

[–] AbidanYre@lemmy.world 62 points 4 weeks ago (2 children)

Isn't Wikipedia where AI gets like half of its information from anyway?

[–] Skua@kbin.earth 67 points 4 weeks ago (1 children)

Reddit seems to be a substantial source, if the many bits of questionable advice that Google famously offered are any indication

[–] Tollana1234567@lemmy.today 15 points 4 weeks ago

Reddit allows Google to scrape it for its AI, because Google allows them to use their reCAPTCHA v3 for their moderation and banning purposes.

[–] 9point6@lemmy.world 6 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

Do you think these people surreptitiously submitting articles written by AI are gonna be capable of validating what they're submitting is even true? Particularly if the (presumably effective) Wikipedia defense for this is detecting made up citations?

This kind of thing makes something valuable to everyone, like Wikipedia, ultimately a less valuable resource, and should be resisted and rejected by anyone with their head screwed on

[–] AbidanYre@lemmy.world 6 points 4 weeks ago

Oh, I think this is a good move by Wikipedia. I just hate to imagine the disaster that ouroboros of AI citing AI generated Wikipedia articles would come up with.

[–] unit327@lemmy.zip 67 points 4 weeks ago (45 children)

I downloaded the entirety of wikipedia as of 2024 to use as a reference for "truth" in the post-slop world. Maybe I should grab the 2022 version as well just in case...
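(For anyone wanting to do the same: Wikimedia publishes dated XML dumps at dumps.wikimedia.org. A rough sketch, where the specific date below is illustrative and the actual download line is left commented out because the file is tens of gigabytes:)

```shell
# Sketch: build the URL for a dated English Wikipedia dump.
# Date is an example; available dates are listed at
# https://dumps.wikimedia.org/enwiki/
DUMP_DATE=20240101
DUMP_URL="https://dumps.wikimedia.org/enwiki/${DUMP_DATE}/enwiki-${DUMP_DATE}-pages-articles-multistream.xml.bz2"
echo "$DUMP_URL"
# wget --continue "$DUMP_URL"   # uncomment to actually download (~20+ GB compressed)
```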

[–] AmidFuror@fedia.io 60 points 4 weeks ago (2 children)

The headline reflects a sensible move by Wikipedia to protect content quality. AI-generated articles often include errors or fake citations, so giving admins the authority to quickly delete such content helps maintain accuracy and credibility. While there's some risk of overreach, the policy targets misuse, not responsible AI-assisted editing, and aligns with Wikipedia’s existing standards for removing low-quality material.

[–] Endmaker@ani.social 55 points 4 weeks ago (3 children)

Did you generate this comment with an LLM for irony?

[–] antonim@lemmy.dbzer0.com 40 points 4 weeks ago (3 children)

Ha, fair question! But no irony here—I actually wrote it myself. That said, it's kind of funny how quickly we've reached the point where any well-written, balanced take sounds like it could be AI-generated. Maybe that's part of the problem we're trying to solve!

[–] Skua@kbin.earth 62 points 4 weeks ago (3 children)

But no irony here—I actually wrote it myself.

I see that em dash. I know what you're doing.

[–] antonim@lemmy.dbzer0.com 20 points 4 weeks ago (1 children)

It really is crazy how predictable it is.

[–] RisingSwell@lemmy.dbzer0.com 22 points 4 weeks ago (1 children)

Even saying "fair question" set off alarms. At this point, saying anything good about a response at the start is an immediate red flag.

[–] Mac@mander.xyz 11 points 4 weeks ago (1 children)

I've started to drop using em dashes because AI ruined them--bastards.

[–] AbidanYre@lemmy.world 17 points 4 weeks ago

Username does not check out.

[–] DeathByBigSad@sh.itjust.works 9 points 4 weeks ago

It always feels weird when people write an essay as if it's their final quarter project for high school. Too neat, thoughts too organized, too much flowery prose.

[–] TheTechnician27@lemmy.world 53 points 4 weeks ago* (last edited 4 weeks ago) (8 children)

If anyone has specific questions about this, let me know, and I can probably answer them. Hopefully I can be to Lemmy and Wikimedia what Unidan was to Reddit and ecology before he crashed out over jackdaws and got exposed for vote fraud.

[–] AwesomeLowlander@sh.itjust.works 22 points 4 weeks ago (2 children)

Well now I want to know about jackdaws and voter fraud

[–] HK65@sopuli.xyz 8 points 4 weeks ago (1 children)

Is there a danger that unscrupulous actors will try and build out a Wikipedia edit history with this and try to mass skew articles with propaganda using their "trusted" accounts?

Or what might be the goal here? Is it just stupid and bored people?

[–] TheTechnician27@lemmy.world 22 points 4 weeks ago* (last edited 4 weeks ago) (2 children)

So Wikipedia has three methods for deleting an article:

  • Proposed deletion (PROD): An editor tags an article explaining why they think it should be uncontroversially deleted. After seven days, an administrator will take a look and decide if they agree. Proposed deletion of an article can only be done once, the tag can be removed by anyone passing by who disagrees with it, and an article deleted via PROD can be recreated at any time.
  • Articles for deletion (AfD): A discussion is held to delete an article. Pretty much always, this is about the subject's notability. After the discussion (a week by default), a closer (almost always an administrator, especially for contentious discussions) will evaluate the merits of the arguments made and see if a consensus has been reached to e.g. delete, keep, redirect, or merge. Articles deleted via discussion cannot be recreated until they've satisfied the concerns of said discussion, else they can be summarily re-deleted.
  • Speedy deletion: An article is so fundamentally flawed that it should be summarily deleted at best or needs to be deleted as soon as possible at worst. The nominating editor will choose one or more of the criteria for speedy deletion (CSD), and an administrator will delete the article if they agree. Like a PROD, articles deleted this way can be recreated at any time.
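(The three routes above differ mainly in whether a discussion is required, how long the default wait is, and whether the article can simply be recreated afterwards. A toy summary as data, purely illustrative and not a representation of any actual Wikipedia tooling:)

```python
from dataclasses import dataclass

# Each route from the comment above, reduced to its three distinguishing
# properties. Field values paraphrase the comment's description.
@dataclass(frozen=True)
class DeletionRoute:
    name: str
    requires_discussion: bool
    default_wait_days: int        # 0 = can be actioned immediately
    recreatable_at_will: bool

ROUTES = [
    DeletionRoute("PROD", requires_discussion=False, default_wait_days=7, recreatable_at_will=True),
    DeletionRoute("AfD",  requires_discussion=True,  default_wait_days=7, recreatable_at_will=False),
    DeletionRoute("CSD",  requires_discussion=False, default_wait_days=0, recreatable_at_will=True),
]

# CSD is the only route with no built-in waiting period, which is why the
# new LLM criterion was added there rather than to PROD or AfD.
fastest = min(ROUTES, key=lambda r: r.default_wait_days)
print(fastest.name)
```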

This new criterion has nothing to do with preempting the kind of trust building you described. The editor who made it will not be treated any differently than without this criterion. It's there so editors don't have to deal with the bullshit asymmetry principle and comb through everything to make sure it's verifiable. Sometimes editors will make these LLM-generated articles because they think they're helping but don't know how to do it themselves, sometimes it's for some bizarre agenda (e.g. there's a sockpuppet editor who's been occasionally popping up trying to push articles generated by an LLM about the Afghan–Mughal Wars), but whatever the reason, it just does nothing but waste other editors' time and can be effectively considered unverified. All this criterion does is expedite the process of purging their bullshit.

I'd argue meticulously building trust to push an agenda isn't a prevalent problem on Wikipedia, but that's a very different discussion.

[–] baltakatei@sopuli.xyz 7 points 4 weeks ago (1 children)

How frequently are images generated/modified by diffusion models uploaded to Wikimedia Commons? I can wrap my head around evaluating cited sources for notability, but I don't know where to start determining the repute of photographs. So many images Wikipedia articles use are taken by seemingly random people not associated with any organization.

[–] TheTechnician27@lemmy.world 8 points 4 weeks ago (1 children)

So far, I haven't seen all that many, and the ones that exist are very obvious, like a very glossy crab at the beach wearing a Santa Claus hat. I definitely have yet to see one that's undisclosed, let alone actively disguising itself. I also have yet to see someone try using an AI-generated image on Wikipedia. The process of disclaiming generative AI usage is trivialized in the upload process with an obvious checkbox, so the only incentive not to is straight-up lying.

I can't say how much this will be an issue in the future or what good steps are to finding and eliminating it should it become one.

[–] snf@lemmy.world 48 points 4 weeks ago

You know, I think I'm overdue for a donation to Wikipedia. They honestly might end up being the last bastion of sanity

[–] logicbomb@lemmy.world 23 points 4 weeks ago

They call the rule "LLM-generated without human review". The specific criteria are mistakes that LLMs frequently make.
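(A rough sketch of what screening for such mistakes could look like, not Wikipedia's actual tooling: the tell phrases below are examples of chatbot boilerplate sometimes left in pasted LLM output, chosen for illustration.)

```python
import re

# Hypothetical phrase list: chatbot boilerplate that signals unreviewed
# LLM output was pasted in verbatim.
LLM_TELLS = [
    r"as an AI language model",
    r"as of my last knowledge update",
    r"I hope this helps",
]

def flags_llm_boilerplate(text: str) -> list[str]:
    """Return the tell phrases found in `text`, case-insensitively."""
    return [p for p in LLM_TELLS if re.search(p, text, re.IGNORECASE)]

sample = "As of my last knowledge update, the war ended in 1556. I hope this helps!"
print(flags_llm_boilerplate(sample))
```

A real reviewer would of course also check the citations themselves, since fabricated references are the harder-to-spot failure mode.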

[–] cupcakezealot@piefed.blahaj.zone 16 points 4 weeks ago

common wikipedia w

[–] pdxfed@lemmy.world 16 points 4 weeks ago (3 children)

It's a step. Why wouldn't they default to not accepting any AI-generated content, and maybe have a manual approval process? It would both protect the content and discourage LLM use in the cases where LLMs suck.

[–] OsrsNeedsF2P@lemmy.ml 15 points 4 weeks ago

Why wouldn’t they default to not accepting any AI generated content

If you can accurately detect what content is AI generated, you'll have a company worth billions overnight
