this post was submitted on 13 Nov 2025
569 points (96.0% liked)
Technology
but what are the criteria? just because you think you have a handle on it doesn’t mean everyone else does, or even shares your conclusion. and there’s no metric here i can measure in order to, for example, block it from my platform.
The criteria are whatever you put in the “no ai” policy on the site. That could be anything from ‘you can’t post videos wholly generated from a prompt’ to ‘you can’t post anything that uses any form of neural net in the production chain’, or something in between. You can specify which types are and are not included and blanket ban/allow everything else. It can definitely be defined in the user agreement; the part that’s actually hard would be detection/enforcement.
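To make the point concrete, such a policy can be stated as a simple threshold over levels of AI involvement. This is only a sketch; the level names and the chosen threshold are hypothetical, not anything the site has actually defined:

```python
from enum import IntEnum

# Hypothetical ladder of AI involvement in a piece of content.
class AIInvolvement(IntEnum):
    NONE = 0                # no neural net anywhere in the production chain
    ASSISTIVE_TOOL = 1      # e.g. AI denoising, upscaling, autocomplete
    PARTIAL_GENERATION = 2  # some elements generated from prompts
    FULLY_GENERATED = 3     # content wholly generated from a prompt

# The "no ai" policy reduces to a cutoff: anything at or above this
# level is banned, everything below is allowed.
POLICY_THRESHOLD = AIInvolvement.FULLY_GENERATED

def is_allowed(involvement: AIInvolvement) -> bool:
    """Given an honest declaration of involvement, the rule is objective."""
    return involvement < POLICY_THRESHOLD

print(is_allowed(AIInvolvement.ASSISTIVE_TOOL))   # True
print(is_allowed(AIInvolvement.FULLY_GENERATED))  # False
```

The definition itself is mechanical once the threshold is picked; everything contentious lives in deciding where the cutoff sits and in verifying what a poster declares.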
my point is that it’s hard to program someone’s subjective point of view, even if written in some form of legalese, into a detection system, especially when those same detection systems can be used to great effect to train generators to bypass them. any such detection system would likely be an “AI” in the same way the ones they ban are, and would be similarly prone to mistakes and to reflecting the values of the company (read: Jack Dorsey) rather than enforcing any objective ethical boundary.
In every single comment I’ve said that detecting them would be the hard part. I’ve been talking about defining the type of content that is allowed/banned, not the part where they actually have to filter it.
i guess the point that’s being missed is that when i say “hard” i mean practically impossible
Yeah, I’m basically ignoring the part about implementing it as a separate issue from defining it, which is the part I’m saying is objective. Given a definition of what type of content they want to ban, you should be able to figure out whether something you’re going to post is allowed or not; that’s why I’m saying it’s not subjective. Whether it can be detected if you post it anyway would probably have to be based on reports, human reviewers, and strict account bans if caught, with the burden of proof on the accused to prove it isn’t AI for it to have any chance of working at all. This would get abused, and be super obnoxious (and expensive), but it would probably work to a point.
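The report-driven enforcement flow described above can be sketched in a few lines. All names, the report threshold, and the verdict strings here are made up for illustration; this is a toy model of the workflow, not any real moderation system:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    post_id: int
    reports: int = 0
    # Set by a human reviewer after the poster responds:
    # "not_ai" if the poster proved it isn't AI, "ai" otherwise.
    human_verdict: Optional[str] = None

# Hypothetical number of user reports needed to trigger human review.
REPORT_THRESHOLD = 3

def moderate(post: Post) -> str:
    # Detection is report-driven: below the threshold, nothing happens.
    if post.reports < REPORT_THRESHOLD:
        return "no_action"
    # Burden of proof is on the accused: only an explicit "not_ai"
    # verdict from a reviewer clears the post; otherwise, ban.
    if post.human_verdict == "not_ai":
        return "cleared"
    return "ban_account"

print(moderate(Post(1, reports=1)))                             # no_action
print(moderate(Post(2, reports=5, human_verdict="not_ai")))     # cleared
print(moderate(Post(3, reports=5)))                             # ban_account
```

The sketch makes the trade-off visible: every path past the threshold requires human review, which is exactly where the cost and the abuse potential (mass false reporting) come from.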