rglullis

joined 2 years ago
[–] rglullis@communick.news 4 points 6 days ago (3 children)

Ok, final message because I'm tired of this:

  • you are openly admitting that you are going to poison the well by adding a bot that pretends to be a human.
  • you are openly admitting that you are going to do this without providing any form of mitigation.
  • you are going to do this while pushing data to the whole network, with no prior testing on a test instance, not even on your own instance.
  • you think it is fine to leave the onus of "detecting" the bot on everyone else.

You are a complete idiot.

[–] rglullis@communick.news 3 points 6 days ago (5 children)

See, so now you are back to saying that your plan is to make a shitty thing and put the burden on those against it to come up with countermeasures. That's just lame.

[–] rglullis@communick.news 2 points 6 days ago (7 children)

You were implying not just that you wanted to detect bots, but that you wanted to write your own set of bots that would pretend to be humans.

If your plan is only to write bot detection, that's a whole different thing.

[–] rglullis@communick.news 4 points 6 days ago (9 children)

There is a big difference between a bot that provides functionality that is good for the community and a bot that only does things that interest you.

People are asking you not to do this. So, if you want to do it, do it on your own resources. I'm saying that as someone who set up almost 20 different instances (alien.top + the topic-specific instances) just to have a place to run the mirroring bots.

[–] rglullis@communick.news 3 points 6 days ago (13 children)

You want to write software that subverts the expectations of users (who come here expecting to chat with other people) and abuses resources provided by others who never asked to help you with any sort of LLM detection.

[–] rglullis@communick.news 10 points 6 days ago (15 children)

You don't run tests in a production environment. It is unethical and irresponsible.

Feel free to run your experiments on your own servers, with people who are aware that they are being subjected to some type of experiment. Anything else and I will make sure to get as many admins as possible to ban you and your bots from the federation.

[–] rglullis@communick.news 2 points 6 days ago (1 children)

I completely understood your analogy, and I certainly understand the fun in tinkering with technology. What you might be missing is that OP seems to be planning to deploy a bunch of bots here and then test how well people can detect them, and that affects other people.

[–] rglullis@communick.news 4 points 6 days ago (17 children)

> To implement counter AI measures, best way to counter AI is to implement it.

You are jumping to this conclusion with no real indication that it is actually true. The best we get from any kind of arms race is a forced stalemate due to Mutually Assured Destruction. With AI/"counter" AI, you are bringing a cure that is worse than the disease.

Feel free to go ahead, though. The more polluted you make this environment, the more people will realize that it is not sustainable unless we start charging everyone and/or adopt a very strict Web of Trust.

[–] rglullis@communick.news 3 points 6 days ago (6 children)

> People do things for fun sometimes.

This is not the same as playing basketball. Unleashing AI bots "just for the fun of it" ends up effectively poisoning the well.

[–] rglullis@communick.news 13 points 6 days ago* (last edited 6 days ago) (27 children)

What I am failing to understand is: why?

Is this just for some petty motivation, like "proving" that people cannot easily tell text from an LLM apart from text written by an actual person? If that is the case, can't you spare yourself all this work and look at the extensive studies that measure exactly this?

Or perhaps it is something more practical: you've already built something that you think is useful, and it would require lots of LLM bots to work?

Or is it that you fancy yourself too smart for the rest of us, and you will feel superior by having something that can expose us as fools for thinking we can discern LLMs from "organic" content?

[–] rglullis@communick.news 1 points 1 week ago (1 children)

How would you do it directly in the software?

[–] rglullis@communick.news 1 points 1 week ago* (last edited 1 week ago) (2 children)

Newsie.social has (had) 20k active users, mostly professional journalists. It has been threatening to shut down due to lack of funding for two years already. Every month its admin has to beg people to donate.

Fosstodon started with enough donations that they could even send some of their money to upstream projects. Nowadays they are invite-only because they don't get enough funding to sustain infinite growth.

Moth.social was active while it was sponsored by Mozilla; it is shutting down on March 12th due to lack of funding.

I could go on.

> There’s no “shortage of instances” going around. As more people join the Fediverse, more admins will start instances.

This is just wishful thinking. Go ahead and open an instance with open registration, and see how long it takes before you regret it.

> the vast majority of instance-owners are bored, twiddling their thumbs due to their lack of users.

And there is a huge number of admins who got users and then burned out due to harassment, spam, and entitled users demanding federation or defederation over petty drama...
