this post was submitted on 09 Dec 2025
178 points (99.4% liked)

Ask Lemmy


As some of you may be aware, over the past few weeks there has been an increasing number of what I suspect are bot accounts that share one or a few posts, all relevant to the communities they're posted in, after which the account self-deletes. I'm torn, because the stories are relevant, but they give me the impression of a narrative attack. I've seen only one of these accounts actually comment before deletion; otherwise they post and immediately nuke the account.

I have tagged mods and admins but haven't heard any acknowledgment of the problem. It's also notable that, by my impression, the issue is getting worse. I noticed yesterday that communities I subscribe to which previously didn't have this problem are now starting to receive these kinds of posts.

I want the fediverse to be a place to communicate with real people in good faith; this manner of posting runs contrary to that. So that raises the questions: is this actually a problem, and if so, what can be done about it?

all 38 comments
[–] m_f@discuss.online 101 points 10 hours ago (4 children)

Lemmy just released 0.19.14, which addresses this somehow, but the announcement is vague:

https://join-lemmy.org/news/2025-12-08_-_Lemmy_Release_0.19.14

https://discuss.online/post/31855056

Recently some malicious users started to use an exploit where they would post rule violating content and then delete the account. This would prevent admins and mods from viewing the user profile to find other posts, and would also prevent federation of ban actions.

The new release fixes these problems. Thanks to @flamingos-cant for contributing to solve this.

[–] ptz@dubvee.org 50 points 10 hours ago (1 children)

Good to know.

I think this just fixes the bug where deleted accounts were invisible to admins. It's a start but doesn't fully address the problem. Still, having it federate the content removal is a step in the right direction.

[–] halcyoncmdr@lemmy.world 26 points 9 hours ago

It's a start but doesn't fully address the problem.

Eh, I'd say it addresses everything that matters.

The root of the problem was that deleting the account was an exploit: it limited admins' ability to research and take further action, and it blocked federation of content removal. That's the only reason they were bothering to do it. The fix lets admins research properly and federates removal actions.

It doesn't solve the root of the issue with bad actors, but that's a much larger issue well beyond the scope of a couple bug fixes.

[–] Skavau@piefed.social 18 points 10 hours ago (1 children)

This seems to deal with finding them after the fact rather than automatically purging the posts. So it does help, but the best solution here is to just automate it.

[–] blarghly@lemmy.world 5 points 8 hours ago (1 children)

How do you automate removing rule-breaking posts?

[–] Skavau@piefed.social 24 points 8 hours ago

It's on Piefed. Piefed just automatically removes all posts by accounts less than 24 hours old that self-delete.
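The rule described above (purge everything from accounts that self-delete within 24 hours of creation) can be sketched roughly like this. This is an illustrative sketch, not PieFed's actual implementation; the function and parameter names are made up.

```python
from datetime import datetime, timedelta

PURGE_WINDOW = timedelta(hours=24)  # assumed threshold from the comment above

def should_purge_posts(account_created_at: datetime,
                       account_deleted_at: datetime,
                       window: timedelta = PURGE_WINDOW) -> bool:
    """Return True if a self-deleted account's posts should be purged.

    An account that deletes itself within `window` of being created
    gets all of its posts removed; older accounts keep theirs.
    """
    return (account_deleted_at - account_created_at) <= window
```

A throwaway that posts and nukes itself an hour later would match; a year-old account deleting itself would not.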

[–] NOT_RICK@lemmy.world 11 points 10 hours ago

That’s encouraging!

[–] Flax_vert@feddit.uk 3 points 2 hours ago* (last edited 2 hours ago)

@flamingos@feddit.uk thank you always for making the fediverse a safer place!

[–] Skavau@piefed.social 56 points 10 hours ago* (last edited 10 hours ago) (1 children)

Piefed already has a toggle in its settings that can be activated. On piefed.social, any account that self-deletes within 24 hours of creation has all of its posts purged.

This is a lemmy problem.

[–] NOT_RICK@lemmy.world 14 points 10 hours ago

Good to know, I need to suck it up and just move over to my piefed alt full time.

[–] ptz@dubvee.org 24 points 10 hours ago* (last edited 10 hours ago) (1 children)

They've also been throwing out rage bait in the "YSK" community. But yeah, I definitely agree it's a problem. So much so that I've added some built-in client-side filters for it in Tesseract. I don't want to go into the technical details for fear of that jackass trying to counter them, but suffice it to say they're effective against their current M.O. I just need to finish this release and get it pushed out, as it's been approaching vaporware status over the last 1-2 months.

I think Piefed is also taking action in either its UI or backend. I don't recall exactly what, but I saw something mentioned a week or two ago on another post complaining about that self-deleting spammer account.

[–] misericordiae@literature.cafe 9 points 9 hours ago

Tesseract is my favorite UI; thank you for continuing to work on it!

[–] lowspeedchase@lemmy.dbzer0.com 23 points 10 hours ago (1 children)

I want the fediverse to be a place to communicate with real people in good faith

It's a popularity problem. Once a 'social network' gets big enough to spread information to a decent user base, it will be astroturfed. Build a better mousetrap and all that jazz; they will still get through, no matter the mitigation strategy you use. The best defense is to engage with people who are respectful and not engage with those who are clearly rage baiting, circlejerking, promoting a narrative, etc. You will never stop seeing it; the trick is to realize it doesn't matter at all. Just do you.

[–] Lasherz12@lemmy.world 22 points 10 hours ago (1 children)

From the mod perspective I can certainly tell you it's an issue. However, the risk of making a bad call on a new user exceeds the risk of letting one through, since the account will self-delete anyway. It seems like admins would need to solve it through one of several methods: allowing posts to stay up past an account's deletion, restricting new users from posting, requiring a certain amount of community engagement first, fingerprinting their script to autoban, or some better solution I haven't thought of.

[–] SpikesOtherDog@ani.social 7 points 7 hours ago (2 children)

What about putting a delay on an account's initial activity? Maybe a 2-4 hour timer on new accounts where their posts are only visible to certain users. Once the account has matured, the restrictions can be lifted.
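The delay idea above could be sketched as a simple visibility check. This is purely hypothetical, proposed-feature code, not anything Lemmy or PieFed actually implements; all names and the 4-hour default are assumptions.

```python
from datetime import datetime, timedelta

MATURITY_DELAY = timedelta(hours=4)  # assumed upper end of the proposed 2-4h timer

def post_visible_to(viewer_is_trusted: bool,
                    author_created_at: datetime,
                    now: datetime,
                    delay: timedelta = MATURITY_DELAY) -> bool:
    """Hypothetical visibility check for the proposal above.

    Posts by accounts younger than `delay` are shown only to trusted
    viewers (e.g. mods/admins); once the account matures, the posts
    become visible to everyone.
    """
    account_age = now - author_created_at
    return viewer_is_trusted or account_age >= delay
```

A self-deleting spam account that lives for an hour would never have its posts shown to regular users at all.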

[–] Lasherz12@lemmy.world 7 points 6 hours ago (1 children)

I think the admins have a lot of ways to approach it. The questions are how applicable each one is to the problem (which can change quickly as bots adjust their approach), whether it will affect regular users negatively, and whether it herds scripts into patterns that are immediately recognizable when it doesn't fully work.

[–] SpikesOtherDog@ani.social 5 points 6 hours ago

Fair, and I was considering that. This could be reviewed with heuristics: instead of instant bans, flag accounts for review. If the admins don't respond, then it's not addressed.

[–] jordanlund@lemmy.world 5 points 4 hours ago

That would be a fun way to implement a quarantine... Posts and comments by new users are only available to other new users. 😉

Kind of like the "Hide Bots" toggle. "Hide New Users".

As far as THEY know, they're active participants.

[–] Rhynoplaz@lemmy.world 18 points 9 hours ago (1 children)

I've definitely noticed it, but I don't understand why.

I know karma farmers on Reddit would sell accounts or just try to create the appearance of legitimacy, but what does it accomplish to delete an account immediately after posting?

[–] Skavau@piefed.social 21 points 9 hours ago (2 children)

The person in question here is permanently banned from the Fediverse (effectively banned on sight on most instances), in part for spamming but also because of maladaptive personality traits. They don't accept that and instead still wish to "help" the fediverse by providing content (news spam).

[–] FoxyFerengi@startrek.website 9 points 5 hours ago (1 children)

Wait, do they keep reusing the same username? Because I feel like I've been playing whack-a-mole blocking a certain user, and it makes a lot of sense that they'd just be creating new accounts constantly.

[–] Skavau@piefed.social 7 points 5 hours ago

They tend to use similar names. Not always though. Not sure if you're talking about the same person.

[–] Rhynoplaz@lemmy.world 6 points 8 hours ago (1 children)

Strange. I may never understand their motivation (and don't take this as me expecting you or anyone else to know) but why on Earth would anyone go to such lengths to "support" a platform that has obviously decided it wants nothing to do with them?

[–] Skavau@piefed.social 15 points 8 hours ago* (last edited 7 hours ago)

This is what blights so many small reddit alternatives.

The initial wave of users onto reddit clones often includes a disproportionate number of malcontents. A lot of people who don't play well with others, who are banned from reddit (or at least from lots of subreddits), usually turn up first on these reddit alternatives and disrupt the community with repeated anti-social, attention-seeking behaviour. It's not even necessarily tied to any political persuasion. This stuff can collapse budding alternatives.

The first wave of new users on reddit alternatives is, in my experience, more likely to include problem users, and since the sites are so small and usually sparsely moderated, those users are much more disruptive than they would be on reddit. The Fediverse is now large enough to avoid that to some extent (a problem user who makes alts to troll, bait, and harass is a lot more visible on a small reddit clone with 1,000 users than with 50,000), but there's a danger in growing while ignoring the problems, as Reddit itself did (which is why it's now a site utterly infested with bots, astroturfers, trolls, etc.).

The hope here is that the Fediverse can implement useful tools that disrupt the typical pattern of behaviour these people present, so it doesn't spread and fester as the network grows. Best to nip this stuff now while the userbase is manageable.

[–] Libb@piefed.social 12 points 9 hours ago (1 children)

Not related to bots, but on the subject of account deletion: something I've wished for a long time is a separation between a user being able to delete their account and their content actually being removed. Content could just be anonymized, or, if it really must be removed, a placeholder should be left in its place. As it stands, when someone deletes their account they delete not only their own posts but also all the comments other participants made under them, which is unfair and can be a real loss, since some comments are genuinely interesting.

[–] Grimy@lemmy.world 3 points 4 hours ago

I strongly agree we need to handle deleted content differently, not only because of the lost content, but also because I suspect people running bots comb through their comments once a day and delete anything that would raise suspicion.

[–] DeathByBigSad@sh.itjust.works 9 points 10 hours ago (2 children)

In-person verification with Guy Fawkes masks to preserve anonymity: meet with a trusted mod/admin, show a paper poster with your handwritten username on it, and the mod reads and verifies it. You're confirmed human.

I know a lemmy.world mod that claims to live in my city, we can verify each other.

/okay just kidding, I'm too depressed and lazy to do this weird meetup thing.

[–] Hello_there@fedia.io 9 points 10 hours ago (2 children)

Honestly, I think some sort of blind verification system is what the internet needs: a notary-like system where verification happens in person with someone, and then no information about the person gets passed along except the verification needed to start an account.

[–] reksas@sopuli.xyz 4 points 5 hours ago (1 children)

Maybe if there were some kind of trust-based verification, kind of like how certificates work: you're issued a "certification" that you're decent enough, you can issue further ones, and if you mess up too badly, yours might get revoked. It would probably be a terrible hassle, though it might work as a supplementary verification system.
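The certification idea above is essentially a web of trust with cascading revocation. Here's a minimal, entirely hypothetical sketch of that data structure; none of this corresponds to any real Fediverse feature.

```python
class TrustRegistry:
    """Web-of-trust sketch: every user (except the root) is vouched
    for by an existing trusted user; revoking someone also revokes
    everyone they vouched for, recursively."""

    def __init__(self, root: str):
        # Maps user -> who vouched for them (root has no sponsor).
        self.voucher_of: dict[str, str | None] = {root: None}

    def vouch(self, sponsor: str, newcomer: str) -> bool:
        """Sponsor certifies a newcomer; fails if the sponsor isn't
        trusted or the newcomer is already registered."""
        if sponsor not in self.voucher_of or newcomer in self.voucher_of:
            return False
        self.voucher_of[newcomer] = sponsor
        return True

    def revoke(self, user: str) -> None:
        """Remove a user and, recursively, everyone they vouched for."""
        children = [u for u, s in self.voucher_of.items() if s == user]
        self.voucher_of.pop(user, None)
        for child in children:
            self.revoke(child)

    def is_trusted(self, user: str) -> bool:
        return user in self.voucher_of
```

The cascade is the "hassle" part: revoking one bad sponsor can knock out a whole subtree of otherwise fine accounts, which is why it probably only works as a supplementary signal.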

[–] Hello_there@fedia.io 4 points 4 hours ago

Yeah. It's kind of crazy that I can lose access to a 20-year-old account, Google just says "meh, can't verify you", and we all think that makes sense. I should be able to somehow verify who I am and get reinstated.

[–] logicbomb@lemmy.world 1 points 9 hours ago (1 children)

I wonder... you know how webpages use widgets to check whether you're human? Apparently they collect various data about how you interact with the page to figure that out. Perhaps that data could be used to build a fingerprint that tells people apart. It might be fairly inaccurate on its own, but combined with other information like IP address or location, it could make a better digital fingerprint.

That is a real thing, it's called tracking, it's NOT "fairly inaccurate", and we should all be fighting it. Highly advanced and multifaceted digital fingerprinting, including behavioural fingerprinting, is being used by corporations to spy on our internet activity.

[–] NOT_RICK@lemmy.world 8 points 10 hours ago

See you November 5th?

[–] jordanlund@lemmy.world 9 points 9 hours ago

Yeah, I've been banning them for bad-faith engagement in my communities with the "Remove Content" toggle, and it's gotten to the point where they're still creating accounts but not bothering to post in my communities, so that's good.

But the pattern of communities they post to with low/no/amenable moderation still lets them flood the channel.

[–] flamiera@kbin.melroy.org 4 points 8 hours ago (1 children)

Yeah, it annoyed me when I noticed a couple of my comments disappeared after I posted on those threads. I thought people were reporting them and mods were taking them down, but I'd check the modlog and find nothing. This kind of behavior is suspicious, and again, it's annoying. Why would anyone do that?

I think the next time we see posts like these, we should take their idea, re-word it, and post it ourselves. At least that way it'll stay up for people who want to contribute, instead of wasting everyone's time trying to participate.

[–] grue@lemmy.world 2 points 3 hours ago

I'd check the modlog and nothing.

IIRC, "ban user" with "remove content" fails to get recorded in the modlog. Since that's inconsistent with "remove post" getting logged, I assume it's a bug.

[–] GrantUsEyes@lemmy.zip 3 points 1 hour ago* (last edited 1 hour ago)

I'm unreasonably annoyed by the ones that target the comicstrips community.

I have noticed the bots behaving differently. At first they posted a lot over shorter periods, which got them found out quickly. Now they post once or twice a day for around a week, then they're gone.

I'd like to know why they do it. What's the point? But those posts get a lot of engagement, so I feel conflicted about it.