this post was submitted on 29 Oct 2025
64 points (94.4% liked)

Ask Lemmy

Well, every website I find is either crashing or not working on mobile.

[–] turdas@suppo.fi 10 points 2 days ago* (last edited 2 days ago) (1 children)

Not an expert on this topic but I've read about it a fair bit and tinkered around with image generators:

You don't post them, basically. Unfortunately nothing else will really work in the long term.

There are various tools -- Glaze is the first one I can think of -- that subtly modify the pixels in the image in a way that is imperceptible to humans but causes the computer vision part of image generator AIs (the part that, during the training process, looks at an image and produces a text description of what is in it) to freak out and become unable to understand what is in the image. This is known as an adversarial attack in the literature.
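For intuition, here is a toy FGSM-style perturbation in plain Python (FGSM, the "fast gradient sign method", is a classic adversarial attack from the literature; this is a hypothetical sketch of the general idea, not what Glaze actually does -- Glaze's perturbation is considerably more sophisticated and targeted):

```python
def sign(x):
    """Return -1, 0, or 1 depending on the sign of x."""
    return (x > 0) - (x < 0)

def fgsm_perturb(pixels, grads, epsilon=0.02):
    """Nudge each pixel by at most +/- epsilon in the direction that
    increases the model's loss, then clamp back into the valid [0, 1]
    range. A small epsilon keeps the change nearly invisible to humans
    while still pushing the model's features off-target."""
    return [min(1.0, max(0.0, p + epsilon * sign(g)))
            for p, g in zip(pixels, grads)]

# Hypothetical example: four pixel values and made-up loss gradients
# (in a real attack the gradients come from the target model itself).
image = [0.1, 0.5, 0.9, 0.3]
grads = [0.7, -1.2, 0.0, 2.5]
adv = fgsm_perturb(image, grads)
print(adv)  # each pixel moved by at most epsilon
```

Note the dependence on the target model's gradients: that is why these protections only work against models they were crafted for, which is the root of the caveats below.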

The intention of these tools is to make it harder to use the images for training AI models, but there are several caveats:

  • Though they try to be visually undetectable to humans, they can still create obviously visible artifacts, especially on higher strength levels. This is especially noticeable on hand-drawn illustrations, less so on photographs.
  • Lower strength levels with fewer artifacts are less effective.
  • They can only target existing models, and even then won't be equally effective against all of them.
  • There are ways of mitigating or removing the effect, and it will likely not work on future AI models (preventing adversarial attacks is a major research interest in the field).

So the main thing you gain from using these is that it becomes harder for people to use your art for style transfer/fine-tuning purposes to copy your specific art style right now. The protection has an inherent time limit, because it relies on a flaw in current AI models that will be fixed in the future. Other abusable flaws will almost certainly be discovered after the current ones are fixed, but the art you release now obviously cannot be protected by techniques that do not yet exist. It will be a cat-and-mouse game, and one where the protection systems play the role of the cat.

Anyway, if you want to try it, you can find the aforementioned Glaze at https://glaze.cs.uchicago.edu/. You may want to read one of their recent updates, which discusses at greater length the specific issue I bring up here, i.e. the AI models overcoming the adversarial attack and rendering the protection ineffective, and how they updated the protection to mitigate this: https://glaze.cs.uchicago.edu/update21.html

[–] BaroqueInMind@piefed.social 7 points 2 days ago* (last edited 2 days ago)

Glaze is now known to be easily bypassed with trivial effort on most available commercial (and also most free self-hosted) diffusion models.