The research also reveals that prices for fake accounts on Telegram and WhatsApp appear to spike in countries about to have national elections, suggesting a surge in demand due to “influence operations”.
😱
Not surprising in the slightest.
“Telegram is widely used for influence operations, particularly by state actors such as Russia, who invested heavily in information warfare on the channel.” WhatsApp and Telegram are among platforms with consistently expensive fake accounts, averaging $1.02 and $0.89 respectively.
...
Small vendors resell and broker existing accounts, or manually create and “farm” accounts. The larger players provide a one-stop shop, offering bulk orders of followers or fake accounts, and even customer support.
A 2022 study co-authored by Dek showed that around ten Euros on average (just over ten US dollars) can buy some 90,000 fake views or 200 fake comments for a typical social media post.
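For a sense of scale, those figures work out to roughly a hundredth of a euro cent per fake view and about five euro cents per fake comment. A quick back-of-envelope check (only the numbers quoted above, nothing else assumed):

```python
# Back-of-envelope arithmetic on the figures quoted above:
# ~10 EUR buys ~90,000 fake views or ~200 fake comments.
budget_eur = 10.0
fake_views = 90_000
fake_comments = 200

print(f"Cost per fake view:    EUR {budget_eur / fake_views:.5f}")    # ~EUR 0.00011
print(f"Cost per fake comment: EUR {budget_eur / fake_comments:.2f}")  # EUR 0.05
```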
I'm glad that the fediverse is mostly humans and not corpo-bots, but I think this is mostly because it's not popular enough to be a target yet.
We have manual sign-ups and instance-level blocking, but I wonder if that would be enough if the botters really decide they want a piece of us.
It definitely wouldn't. Outside of requiring an existing user to vouch for someone (which would drastically reduce the reach of the platform) or doing some kind of extensive interview over video (which would have serious privacy concerns and also massively discourage people from signing up) there aren't really a lot of options for preventing bot accounts. Even then botters could hack legitimate accounts and use them as puppets.
Known bot harbors could be defederated. Perhaps instance size limits could even be imposed (requiring new instances to "prove themselves").
I think the best solution would be to have general instances that are vulnerable to bots, and then also instances with stricter measures just like we have right now. Before you take someone seriously, look at their instance to see if they may be a bot. If the instance is reputable, you can be safe in taking them seriously. It's like blue check marks used to be.
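For anyone curious what "instance-level blocking" and defederation boil down to mechanically, here is a minimal sketch in Python. The names (BLOCKED_DOMAINS, should_accept_activity) are hypothetical, not any real ActivityPub server's API; actual servers handle this with admin-managed domain blocklists that are checked when remote activities arrive.

```python
from urllib.parse import urlparse

# Hypothetical admin-managed blocklist of instances known to harbor bot accounts.
BLOCKED_DOMAINS = {"botfarm.example", "spam-instance.example"}

def should_accept_activity(actor_url: str) -> bool:
    """Return False for activities whose actor lives on a defederated instance.

    This mirrors the idea in the thread: trust is applied per instance,
    so everything from a blocked domain is dropped before it reaches users.
    """
    domain = urlparse(actor_url).hostname or ""
    return domain not in BLOCKED_DOMAINS

# Example: a post from a blocked instance is rejected, one from elsewhere is kept.
print(should_accept_activity("https://botfarm.example/users/shill42"))  # False
print(should_accept_activity("https://trusted.example/users/alice"))    # True
```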
Worse, it’s Aunt Esther.
Hahaha, AI Aunt Esther instances, an AIAEI, ay-yie-yi
Additional analyses show global stocks of fake accounts are highest for platforms such as X, Uber, Discord, Amazon, Tinder and gaming platform Steam, while vendors keep millions of verifications available for the UK and US, along with Brazil and Canada.
Interesting stuff. I would guess this accounts for why products with only 4 stars on Amazon are so often shit. Real consumers only seem to affect the last star out of five.