[–] henfredemars@lemdro.id 238 points 3 days ago (5 children)

Enforcing that ban is going to be difficult.

[–] scytale@piefed.zip 149 points 3 days ago (4 children)

They’re gonna use AI to detect the use of AI.

[–] henfredemars@lemdro.id 87 points 3 days ago (3 children)

Not if I use AI to hide my use of AI first!

[–] SeductiveTortoise@piefed.social 34 points 3 days ago (3 children)

They are going to implement an AI detector detector detector.

[–] henfredemars@lemdro.id 23 points 3 days ago (1 children)

It’s detectors all the way down.

But of course, as a tortoise, you would know that.

[–] SeductiveTortoise@piefed.social 11 points 3 days ago (1 children)

Did I seduce you a lot with my hard, polished shell? You can speak freely, nobody will ever know!

[–] cecilkorik@piefed.ca 11 points 3 days ago (1 children)

No, it was your intense, Gowron-like stare that truly drilled into my heart.

[–] SeductiveTortoise@piefed.social 10 points 3 days ago

Glory to seduction! Glory to the empire!

But then they'll implement an AI detector detector deflector.

[–] gerryflap@feddit.nl 12 points 3 days ago

Fun fact: this loop is kinda how one family of generative ML algorithms works. It's called a Generative Adversarial Network, or GAN.

You have a so-called Generator network G that generates something (usually images) from random noise, and a Discriminator network D that takes images (or whatever you're generating) as input and outputs whether the input is real or fake (not as a hard binary decision, but as a continuous score). D is trained on images from G, which it should classify as fake, and on real images from a dataset, which it should classify as real. G is trained to generate images from random noise vectors that fool D into thinking they're real. Since D is, like most neural networks, essentially just a differentiable mathematical function, you can use its derivatives to compute how to adjust a generated image so it appears more real.

In the perfect case these two networks battle until they both reach peak performance. In practice you usually need to do some extra shit to prevent the whole situation from crashing and burning. What often happens, for instance, is that D becomes so good that it stops providing useful feedback: it sees the generated images as 100% fake, so there's no longer an obvious way to alter a generated image to make it seem more real.
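If you want to see the loop concretely, here's a minimal sketch of the idea, assuming PyTorch; the tiny networks, the batch of random "real" data, and the hyperparameters are all made up just for illustration:

```python
import torch
import torch.nn as nn

# Toy generator G (noise -> fake image) and discriminator D (image -> realness score)
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Sigmoid())
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.rand(32, 784)        # stand-in for a batch of real images from a dataset
    fake = G(torch.randn(32, 64))     # images generated from random noise vectors

    # Train D: real images should score 1 ("real"), generated ones 0 ("fake")
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train G: push its images (via gradients flowing back through D) towards scoring "real"
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The two optimisation steps are exactly the "battle" above: D learns to tell the two apart, G learns to undo that.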

Sorry for the infodump :3

[–] danc4498@lemmy.world 5 points 3 days ago

Spoken like a true AI.

[–] boonhet@sopuli.xyz 11 points 3 days ago (1 children)

Well, at least the AI-based AI detector isn't actively making creative people's work disappear into a sea of gen-AI "art".

There are good and bad use cases for AI, and I consider this a better use case than generating art. Now the question is whether or not it's feasible to detect AI this way.

[–] TheGrandNagus@lemmy.world 4 points 3 days ago

Indeed.

I have an Immich instance running on my home server that backs up my and my wife's photos. It's like an open source Google Photos.

One of its features is a local AI model that recognises faces and tags them with names, as well as doing stuff like recognising when a picture is of a landscape, food, etc.

Likewise, Firefox has a really good offline translation feature that runs locally and is open source.

AI doesn't have to be bad. Big tech and venture capital are just choosing to make it so.

[–] nondescripthandle@lemmy.dbzer0.com 27 points 3 days ago (1 children)

Just the threat of being able to summarily remove AI content and hand out account discipline will cut down drastically on AI and practically eliminate the really low-effort 'slop'. It's not perfect, but it's damn useful.

[–] FaceDeer@fedia.io 13 points 3 days ago (2 children)

It's also going to make it really easy to take down content you don't like: just accuse it of being AI and watch the witch hunt roll in. I've seen plenty of examples in other forums of traditional artists getting accused of using AI; I don't imagine this will be any different.

[–] oftenawake@lemmy.dbzer0.com 2 points 2 days ago

I got accused of being an AI for writing someone a comment reply that was merely informative, empathic and polite!

[–] nondescripthandle@lemmy.dbzer0.com 2 points 3 days ago* (last edited 3 days ago)

People already mass-report to abuse existing AI moderation tools. That's already starting to be accounted for, and honestly I can't imagine it so much as slowing down the implementation of an anti-AI rule.

The ban doesn't need a 100% perfect AI screening protocol to be a success.

Just the fact that AI is banned might appeal to a wide demographic. If the ban is actually enforced, even in just 25% of the most blatant cases, it might be just the push a new platform needs to take off.

[–] osaerisxero@kbin.melroy.org 2 points 3 days ago (2 children)

Only if we let it be. There's no technical reason why the origin of a video couldn't be attested by a signature generated by the capture device, or why AI models couldn't be legally required to do the same for any content they generate. Anything without an origin sticker is assumed to be garbage by default. Obviously there would need to be some way to make captures anonymous or not at the user's choice, and nation states can evade these things with sufficient effort like they always do, but we could cut a lot of slop out by doing some simple stuff like that.
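Just as a rough illustration of the signing idea (not any real provenance standard, and the key handling here is hypothetical; in practice the device key would live in secure hardware), it could look something like this with Python's cryptography library:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical capture-device key pair (in reality this would sit in secure hardware)
device_key = Ed25519PrivateKey.generate()
device_public_key = device_key.public_key()

video_bytes = b"...raw captured video..."    # stand-in for the recorded file
signature = device_key.sign(video_bytes)     # the "origin sticker", attached at capture time

# Later, anyone holding the device's public key can check the origin claim
try:
    device_public_key.verify(signature, video_bytes)
    print("valid origin signature: file unchanged since capture")
except InvalidSignature:
    print("no valid origin: treat as garbage by default")
```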

[–] kinsnik@lemmy.world 2 points 3 days ago

While a phone signing a video to show it was captured with the camera is possible, it would also be easy to fake the signature: all it would take is a hacked device to steal the private key. And even if Apple/Google/Samsung have perfectly secure systems for signing the origin of a video, there would be tons of cheaper phones that likely won't.

[–] pennomi@lemmy.world 2 points 3 days ago

“Legally” doesn’t mean shit if it’s not enforceable. Besides, removing watermarks is trivial.

There is no technically rigorous way to filter AI content, unfortunately.

[–] edryd@lemmy.world 1 points 3 days ago (1 children)

Just because something might be hard, does that mean we should give up before even trying?

[–] Rothe@piefed.social 1 points 3 days ago

Of course not, but I don't think anybody suggested that.