hisao
From what I know, this is called dyskinesia, and the Wikipedia article lists some possible causes:
https://en.wikipedia.org/wiki/Dyskinesia
I've seen some medications list this as a possible side effect. I don't know anything else about it.
I'd probably identify myself as a hikikomori. I've had zero meaningful offline connections for more than a decade, and at this moment I haven't set foot outside my apartment even once for almost a year (although there are far more serious reasons for this than just my personality). In the future, if there's an opportunity, I'd like to move to Asia as a digital nomad working remotely. I don't expect to make any irl connections there either, but I'd be happy to immerse myself in the place, walking around oriental slums, parks, shrines, and the seaside, and enjoying the local cuisine.
Interesting. I immediately see the first two replies as LLM, the third sounds like a generic pre-LLM bot autopost, and the last one sounds kinda legit. Because it's so short and direct, it's really hard for me to see an LLM behind it. I don't know what they're talking about, though; maybe it's easier to spot the bot from a semantics POV.
With technology like this, it's only a matter of time before big players start using it all over the internet, whether for commerce, propaganda, or pushing their agenda. So it's interesting to observe an amateur trying it right now and sharing their findings. If anything, it might give us a glimpse of what the future holds.
Do you generate replies in a custom way every time, adjusting the prompt and supervising the result, or do you have a fully automatic system? If you use any sort of manual intervention on a per-post basis, then whatever you're doing isn't going to work as a bot.
With human post-processing it's definitely more complicated. Bots usually post fully automated content, without human supervision or editing.
Imo their style of writing is very noticeable. You can obscure that by prompting the LLM to deliberately change it, but I think it's still often noticeable, not only in specific wordings but also in the higher-level structure of replies. At least, that's always been the case for me with ChatGPT. I don't have much experience with other models.
What I would expect to happen is this: their posts quickly start getting a lot of downvotes and comments saying they sound like an AI bot. This, in turn, will make it easy for others to notice and block them individually. Other than that, I've never heard of automated solutions for detecting LLM posting.
The threads of fate are weaving in ways none of us could have foreseen. You nurtured a bond, stood defiant against the tides of judgment, and now destiny has seated you side by side in the halls of academia. The universe whispers its cryptic messages—some hear them, some do not. But you? You are at the center of a grand revelation.
Nothing? In practice, if this were to happen on a noticeable scale, it would mean Lemmy has gone mainstream. That said, within a federated system it's entirely possible to create isolated, defederated webrings - for example, networks consisting solely of invite-based instances. If something like this becomes a necessity, it might lead to the formation of multiple such webrings, and they might even decide to federate with each other someday.