this post was submitted on 28 Mar 2025
127 points (92.1% liked)

Ask Lemmy


I have noticed that Lemmy so far does not have a lot of fake accounts from bots or AI slop, at least from what I can tell. I am wondering: how the heck do we keep this community free of that kind of stuff as continuous waves of redditors land here and the platform grows?

EDIT: a potential solution:

I have an idea: people could flag a post or a user as a bot, and if it's confirmed to be one, the moderators could have some tool where the bot is essentially shadow-banned into an inbox that just gets dumped occasionally. My thinking is that the people creating the bots might not realize their bot has been banned, so they wouldn't try to create replacement bots. This could effectively reduce the number of bots without bot creators realizing it, or knowing whether their bots have been blocked or not. The one thing that would also be needed is a way to request being un-banned for accounts hit as a false positive. These features would have to be built into Lemmy's moderation tools, and I don't know if any of that exists currently.
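The shadow-ban flow described above could be sketched roughly like this. This is purely hypothetical (no such tool exists in Lemmy's moderation tooling as far as I know); the class and method names are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ShadowBanQueue:
    """Hypothetical mod tool: posts from shadow-banned accounts are
    silently diverted to a quarantine inbox instead of being published,
    so the bot operator never learns the account was banned."""
    banned: set = field(default_factory=set)        # flagged account ids
    quarantine: list = field(default_factory=list)  # swallowed posts

    def flag(self, account_id: str) -> None:
        # A mod confirms a user report; no notification is sent.
        self.banned.add(account_id)

    def appeal(self, account_id: str) -> None:
        # Escape hatch for false positives: restore the account.
        self.banned.discard(account_id)

    def submit_post(self, account_id: str, body: str) -> bool:
        """Return True if the post is published normally."""
        if account_id in self.banned:
            self.quarantine.append((account_id, body))  # silently swallowed
            return False
        return True

    def dump_quarantine(self) -> list:
        # Periodic job: empty the inbox, optionally after a mod skims it.
        dumped, self.quarantine = self.quarantine, []
        return dumped
```

The key property is that `submit_post` behaves identically from the bot's point of view whether or not the account is banned; only the visibility of the post changes.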

[–] Squorlple@lemmy.world 14 points 4 days ago* (last edited 4 days ago) (2 children)

Re: bots

If feasible, I think the best option would be an instance that functions similarly to how Reddit’s now-defunct r/BotDefense operated, and instances which want to filter out bots would federate with it. Essentially, if an account is suspected of being a bot, users could submit that account to this bot defense server; an automated system would flag obvious bots, whereas less obvious bots would have to be inspected manually by informed admins/mods of the server. This flagging would signal to the federated servers to ban these suspect/confirmed bot accounts. Edit 1: This instance would also be able to flag when a particular server is being overrun by bots and advise other servers to temporarily defederate.
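The two-tier triage described above (auto-flag obvious bots, queue borderline ones for human review) might look something like this. The thresholds and scoring are made up for the sketch; a real system would need an actual bot-likelihood heuristic behind the score:

```python
from enum import Enum

class Verdict(Enum):
    BOT = "bot"
    HUMAN = "human"
    NEEDS_REVIEW = "needs_review"

def triage(score: float, auto_ban_at: float = 0.95, review_at: float = 0.6) -> Verdict:
    """Hypothetical triage: score is a bot-likelihood in [0, 1].
    Obvious bots are flagged automatically; borderline cases go to
    a human admin/mod queue; the rest pass."""
    if score >= auto_ban_at:
        return Verdict.BOT
    if score >= review_at:
        return Verdict.NEEDS_REVIEW
    return Verdict.HUMAN

def broadcast_flags(reports: dict) -> dict:
    """Partition user-reported accounts into the lists a federated
    instance could act on: an auto-ban list and a manual review queue."""
    out = {"ban": [], "review": []}
    for account, score in reports.items():
        verdict = triage(score)
        if verdict is Verdict.BOT:
            out["ban"].append(account)
        elif verdict is Verdict.NEEDS_REVIEW:
            out["review"].append(account)
    return out
```

Federating instances would then subscribe to the `ban` list, while the `review` queue stays internal to the bot-defense server's mods.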

If you are hosting a Lemmy instance, I suggest requiring new accounts to provide an email address and pass a captcha. I’m not informed enough with the security side of things to suggest more, but https://lemmy.world/c/selfhosted or the admins of large instances may be able to provide more insight for security.

Edit 2: If possible, an improved search function for Lemmy, or for cross-media content in general, would be helpful. Since this medium still has a relatively small userbase, most bot and spam content is lifted from other sites. Being able to track where bots’ content is coming from is extremely helpful for concluding that there is no human curating their posts. This is why I’m wary of seemingly real users on Lemmy who binge-spam memes or other non-OC. Being able to search for a string of text, search for image sources/matching images, search for strings of text within an image, and find the original texts that a bot has rephrased are on my wishlist.
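For the "find where this text was lifted from" part of the wishlist, one standard near-duplicate technique is word-shingle overlap (Jaccard similarity): content copied with small edits still shares most of its shingles with the original. A minimal sketch, not anything Lemmy actually implements:

```python
def shingles(text: str, k: int = 3) -> set:
    """k-word shingles: overlapping windows of k words, the usual
    unit for near-duplicate text detection."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(a: str, b: str, k: int = 3) -> float:
    """Jaccard similarity of the two shingle sets.
    Close to 1.0 suggests lifted/lightly-edited content."""
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)
```

This only catches copies with light edits; the "bot rephrased the original" case on the wishlist is much harder and would need semantic (embedding-based) matching rather than literal shingles.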

Re: AI content

AFAIK, the best option is just to have instance/community rules against it if you’re concerned about it.

The best defense against both is education and critical examination of what you see online.

[–] ptz@dubvee.org 7 points 4 days ago* (last edited 4 days ago)

If you are hosting a Lemmy instance, I suggest requiring new accounts to provide an email address and pass a captcha

Those are easy to bypass (or a human can spin up a bunch of accounts with throwaway emails and plug them into bots). I recommend enabling registration applications. While not foolproof, it gives the admins eyes on every new account. Also, consider denying any application that uses a throwaway email service.
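The throwaway-email check is easy to automate as a first pass before an admin ever sees the application. The domain set below is a tiny illustrative sample; a real deployment would pull from a maintained blocklist such as the disposable-email-domains project:

```python
# Sample blocklist for illustration only; real lists contain
# thousands of known disposable-email domains.
DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com", "10minutemail.com"}

def should_auto_deny(email: str) -> bool:
    """Flag registration applications that use a known throwaway
    email domain, so admins can deny them without manual lookup."""
    domain = email.rsplit("@", 1)[-1].strip().lower()
    return domain in DISPOSABLE_DOMAINS
```

Anything this flags can be auto-denied (or just surfaced prominently in the application queue), while normal domains still go through the usual human review.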

[–] SorteKanin@feddit.dk 3 points 3 days ago (1 children)

If you are hosting a Lemmy instance, I suggest requiring new accounts to provide an email address and pass a captcha.

The captchas are ridiculously ineffective, and anyone can get dummy emails. Registration applications are the only way to go.

[–] real_squids@sopuli.xyz 2 points 3 days ago

Plenty of websites filter out dummy email generators; Lemmy could do the same in addition to applications. Making a drawing of something specific but random (think of a list of a dozen or two image prompts that gen-AI gets wrong) could be a captcha replacement.