If anyone has specific questions about this, let me know, and I can probably answer them. Hopefully I can be to Lemmy and Wikimedia what Unidan was to Reddit and ecology before he crashed out over jackdaws and got exposed for vote fraud.
Well now I want to know about jackdaws and voter fraud
https://en.wikipedia.org/wiki/Unidan
what about the jackdaws thing?
https://www.reddit.com/r/SubredditDrama/comments/2c31hk/unidan_gets_mad_about_crows_and_jackdaws_in_an/
unzips
Is there a danger that unscrupulous actors will try and build out a Wikipedia edit history with this and try to mass skew articles with propaganda using their "trusted" accounts?
Or what might be the goal here? Is it just stupid and bored people?
So Wikipedia has three methods for deleting an article:
- Deletion discussion (AfD): editors debate the article for about a week, and an uninvolved editor assesses the consensus.
- Proposed deletion (PROD): anyone can tag an article, and it's deleted after seven days if nobody objects.
- Speedy deletion (CSD): an administrator can delete an article on sight if it meets one of a list of narrowly defined criteria.
This new criterion (a speedy deletion criterion) has nothing to do with preempting the kind of trust building you described; the editor who made the article won't be treated any differently than they would be without it. It exists so editors don't have to deal with the bullshit asymmetry principle and comb through everything to make sure it's verifiable. Sometimes editors make these LLM-generated articles because they think they're helping but don't know how to write one themselves; sometimes it's for some bizarre agenda (e.g. there's a sockpuppet editor who occasionally pops up trying to push LLM-generated articles about the Afghan–Mughal Wars). Whatever the reason, the result does nothing but waste other editors' time and can effectively be considered unverified. All this criterion does is expedite the process of purging that bullshit. (The deletions themselves are all public; see the sketch after this comment.)
I'd argue meticulously building trust to push an agenda isn't a prevalent problem on Wikipedia, but that's a very different discussion.
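For anyone who wants to watch this in action: deletion log entries are public, and the MediaWiki Action API exposes them. Below is a minimal Python sketch that pulls recent deletions from English Wikipedia. Filtering on the string "G15" is my assumption about how admins cite the new criterion in their log summaries; it's conventional shorthand, not a guarantee.

```python
import requests

API = "https://en.wikipedia.org/w/api.php"
# Wikimedia asks API clients to send a descriptive User-Agent.
HEADERS = {"User-Agent": "deletion-log-demo/0.1 (example script)"}

def recent_deletions(limit=50):
    """Fetch the most recent entries from the deletion log."""
    params = {
        "action": "query",
        "list": "logevents",
        "letype": "delete",                       # deletion log only
        "leprop": "title|comment|timestamp|user",
        "lelimit": limit,
        "format": "json",
    }
    resp = requests.get(API, params=params, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["query"]["logevents"]

# Print entries whose summary cites the LLM criterion. "G15" is an
# assumption about the summary wording, not part of the API.
for entry in recent_deletions():
    comment = entry.get("comment", "")
    if "G15" in comment:
        print(entry["timestamp"], entry.get("title", "[hidden]"), "-", comment)
```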
Thank you for your answer; I'm glad to hear Wikipedia is safe, then. Everything happening nowadays makes me assume the worst.
Do you think your problem is similar to open-source developers fighting AI pull requests? There, it was theorized that some people have their models submit code changes, abusing the maintainers' time and effort to get training data.
Is it possible that this is an effort to steal work from Wikipedia editors to get you to train their AI models?
I can't definitively say "no", but I've seen no evidence of this at all.
How frequently are images generated/modified by diffusion models uploaded to Wikimedia Commons? I can wrap my head around evaluating cited sources for notability, but I don't know where to start determining the repute of photographs. So many images Wikipedia articles use are taken by seemingly random people not associated with any organization.
So far, I haven't seen all that many, and the ones I have seen are very obvious, like a very glossy crab at the beach wearing a Santa Claus hat. I have yet to see one that's undisclosed, let alone one actively disguising itself, and I also have yet to see someone try to use an AI-generated image on Wikipedia itself. Disclosing generative AI usage is made trivial in the upload process by an obvious checkbox, so the only way an image goes undisclosed is if the uploader straight-up lies.
I can't say how much of an issue this will be in the future, or what good steps for finding and eliminating it would be should it become one.
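If you want to browse what disclosed AI media looks like on Commons, the public API can list category members. A small sketch follows; the category name "Category:AI-generated images" is my assumption about where disclosed uploads get sorted, and Commons' category tree shifts over time.

```python
import requests

API = "https://commons.wikimedia.org/w/api.php"
HEADERS = {"User-Agent": "commons-category-demo/0.1 (example script)"}

# List files sitting in Commons' AI-media disclosure category.
params = {
    "action": "query",
    "list": "categorymembers",
    "cmtitle": "Category:AI-generated images",  # assumed category name
    "cmtype": "file",
    "cmlimit": 20,
    "format": "json",
}
resp = requests.get(API, params=params, headers=HEADERS, timeout=30)
resp.raise_for_status()
for member in resp.json()["query"]["categorymembers"]:
    print(member["title"])
```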
How would you know if an image is AI generated? That was easy to do in the past, but have you seen what they are capable of now?
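Nobody answered this in the thread, but one weak, easily defeated signal is generator metadata: some tools embed their settings in the file itself. Here's a sketch using Pillow; the key names and the file name are assumptions, and a clean result proves nothing, since metadata is trivially stripped.

```python
from PIL import Image  # pip install Pillow

def generator_hints(path):
    """Return metadata keys that common generators are known to leave behind."""
    # PNG text chunks land in .info; Stable Diffusion web UIs write a
    # "parameters" chunk, and ComfyUI writes "prompt"/"workflow" chunks.
    info = Image.open(path).info
    suspicious = {"parameters", "prompt", "workflow", "software"}
    return {k: v for k, v in info.items() if k.lower() in suspicious}

# "upload.png" is a hypothetical file name for illustration.
hints = generator_hints("upload.png")
print(hints or "no generator metadata found (proves nothing)")
```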
How do I get started on contributing to new articles (written by a human) for my language? I always wanted to help out but never found an easy way to do so.
I'm going to write this from the perspective of the English Wikipedia, but most specifics should have some analog in other Wikipedias. By "contribute to new articles", do you mean create new articles, contribute to articles which are new that you come across, or contribute to articles which you haven't before (thus "new to you")? Asking because the first one has a very different – much more complicated – answer from the other two.
Both. How do I get started creating a new article, and how do I contribute to them, or other articles?
The short answer is that I really, really suggest you try other things before trying to create your first article. This isn't just me; every experienced editor will tell you that creating a new article is one of the hardest things any editor can do, let alone a newer one. It's why the task center lists it as appropriate for "advanced editors". Finding an existing article which interests you and then polishing and expanding it is almost always more rewarding, more useful, easier, and less stressful than creating one from scratch. And if creating articles sounds appealing, expanding existing stub articles is great experience for that (see the sketch after this comment for one way to find stubs).
The long answer is "you can", but it's really hard:
- The subject needs to meet Wikipedia's notability guidelines, which in practice means finding multiple reliable sources that are independent of the subject and cover it in depth.
- Brand-new accounts can't publish directly to the main article space; drafts go through the Articles for Creation review queue, which can take weeks or months.
- New articles are patrolled, and anything that doesn't demonstrate notability up front is likely to be nominated for deletion.
- You're learning sourcing, neutral tone, formatting, and style conventions all at once, instead of picking them up gradually inside an existing article.
Some of these apply to normal editing too, but working within an article others have worked on and might be willing to help with is vastly easier than building one from scratch. If you want specific help in picking out, say, an article to try editing and are on the English Wikipedia, I have no problem acting like bowling bumpers if you're afraid your edits won't meet standards.
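As referenced above, here's one way to surface stubs to expand programmatically, via the public API. A minimal sketch; "Category:Physics stubs" is just an example topic category, not a recommendation.

```python
import requests

API = "https://en.wikipedia.org/w/api.php"
HEADERS = {"User-Agent": "stub-finder-demo/0.1 (example script)"}

def stubs_in(category, limit=10):
    """List article-space pages tagged with a given stub category."""
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": category,
        "cmnamespace": 0,      # article space only, no talk/template pages
        "cmlimit": limit,
        "format": "json",
    }
    resp = requests.get(API, params=params, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return [m["title"] for m in resp.json()["query"]["categorymembers"]]

# Swap in a stub category for a topic you actually care about.
for title in stubs_in("Category:Physics stubs"):
    print(title)
```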
Unidan was a legend, he will be missed.