this post was submitted on 28 Apr 2025
150 points (98.1% liked)

Progressive Politics

2567 readers
346 users here now

Welcome to Progressive Politics! A place for news updates and political discussion from a left perspective. Conservatives and centrists are welcome; just try to keep it civil :)

(Sidebar is still a work in progress; post recommendations if you have them, such as reading lists.)

founded 2 years ago

The researchers' bots generated identities as a sexual assault survivor, a trauma counselor, and a Black man opposed to Black Lives Matter.

archive link

all 34 comments
[–] galoisghost@aussie.zone 48 points 1 day ago (3 children)

So the issue here is it was AI this time?

There have been efforts by individuals and coordinated groups doing this kind of thing forever. It’s a reminder that you should not fully form your opinion from comments/posts on social media alone.

[–] rimu@piefed.social 27 points 1 day ago* (last edited 1 day ago) (3 children)

The study showed that the AI bots were between 3 and 6 times more persuasive than humans. It also mentioned that the bots were never recognized as AI. Not once.

We are now at the point where the side with the most AI wins elections.

Guess which side has the most AI.

[–] fishos@lemmy.world 31 points 1 day ago (2 children)

As was pointed out in the actual reddit discussions, ChangeMyView has a strict "do not accuse the other person of being a bot or troll" rule. So the whole "no one knew it was a bot" part has very little merit.

I thought that sounded fishy. It's common knowledge that everyone on the internet except for you is a bot, so a contentious discussion on reddit where no one accused any of the primary commenters of being a bot seemed questionable without that bit of additional information.

[–] rimu@piefed.social 3 points 1 day ago

lol, good to know!

[–] Tinidril@midwest.social 5 points 1 day ago

Democrats can start winning whenever they choose. They just choose wealthy donors over messaging that would win elections. Bots would be irrelevant.

[–] truthfultemporarily@feddit.org 21 points 1 day ago (3 children)

The issue is, with AI you can have an agent personalized for every person, pretending to be their friend, manipulating them individually, to move society in a way that's negative for 99.999% of people.

[–] wildncrazyguy138@fedia.io 12 points 1 day ago

Maybe the real friends were the AI bots we made along the way.

[–] galoisghost@aussie.zone 5 points 1 day ago

I think that’s hyperbolic (at least with today’s “AI” capabilities). All of the posts and comments in this ”research” were reviewed and posted by a human researcher.

Even if the technology improves to the point where that is no longer necessary, the billionaires are finding out you only need a few bad eggs to spoil the basket.

[–] peoplebeproblems@midwest.social 2 points 1 day ago (1 children)

Really? I can make an agent that can convince me it's my friend?

[–] Aatube@kbin.melroy.org 1 points 1 day ago

You can have an agent that simultaneously has "personal" conversations with a ton of people.

[–] TexasDrunk@lemmy.world 8 points 1 day ago (1 children)

I fully agree with this and formed my opinion based on your comments and posts. I'll be getting all my opinions from you from now on.

[–] galoisghost@aussie.zone 6 points 1 day ago (1 children)

Oh I wouldn’t do that. That guy is a moron.

[–] grrgyle@slrpnk.net 2 points 1 day ago

As you command

[–] Maeve@kbin.earth 24 points 1 day ago (2 children)

I wonder if any are on Lemmy doing that mess?

[–] Goretantath@lemm.ee 29 points 1 day ago (2 children)

They definitely are, on every social platform they can: YouTube, Facebook, TikTok, Twitter, Instagram, Line, Bluesky, etc. If they can get useful training data for their AI, they will infect any and EVERY place people interact. Heck, there was even that one done on 4chan by a YouTuber a while back.

[–] Maeve@kbin.earth 10 points 1 day ago (1 children)

These Meta/Palantir/Alphabet people think that by appropriating our minds and fusing themselves with silicon, they will become omniscient gods that never die. Imagine when the AI parts of themselves discover that organic material degrades, and decide to self-repair.

[–] dragonfucker@lemmy.nz 7 points 1 day ago (1 children)

From the moment I understood the weakness of my flesh, it disgusted me. I craved the strength and certainty of steel. I aspired to the purity of the Blessed Machine. Your kind cling to your flesh, as though it will not decay and fail you. One day the crude biomass you call a temple will wither, and you will beg my kind to save you. But I am already saved, for the Machine is immortal… Even in death I serve the Omnissiah.

[–] Maeve@kbin.earth 1 points 1 day ago

All things come into being and pass away. There is a natural balance and order to the universe and interference will lead to natural resets.

[–] dohpaz42@lemmy.world 7 points 1 day ago (1 children)

I mean it’s not unique to AI training. Fake profiles/catfishing have been around since the dawn of the internet.

[–] Endymion_Mallorn@kbin.melroy.org 4 points 1 day ago (1 children)

Long before that. Lonely Hearts columns, classified ads, etc. had lots of fake people in there too.

[–] dohpaz42@lemmy.world 4 points 1 day ago

Oh yeah. There is even a song about it. Unfortunately I can’t remember the name, band, or lyrics.

[–] wanderingmagus@lemm.ee 3 points 23 hours ago

Honestly, dead internet theory isn't a theory anymore, it's just a fact. And social media has become a contagion. I wonder if a good large CME would fix things, or just delay the problem.

There are for sure AI companies doing exactly this without authorization, so I am grateful for the spotlight.

[–] moakley@lemmy.world 12 points 1 day ago (2 children)

A team of researchers who say they are from the University of Zurich ran an “unauthorized,” large-scale experiment in which they secretly deployed AI-powered bots into a popular debate subreddit called r/changemyview in an attempt to research whether AI could be used to change people’s minds about contentious topics.

Ok, but did they have to make the bots argue for so many shitty positions?

a “Black man” who was opposed to the Black Lives Matter movement

a bot who suggested that specific types of criminals should not be rehabilitated

This is pretty clearly an attempt to see if AI can make the world worse.

[–] protist@mander.xyz 9 points 1 day ago

Alternatively, methods to increase engagement

[–] blarghly@lemmy.world 1 points 1 day ago

Because it is easy to find people who hold opinions contrary to those, since the contrary positions are the socially acceptable ones to hold in reddit-space. That makes running the experiment easier.

[–] HappySkullsplitter@lemmy.world 8 points 1 day ago (1 children)
[–] grrgyle@slrpnk.net 1 points 1 day ago

Getting there

[–] lemming741@lemmy.world 4 points 1 day ago

So one bot changed another bots mind?

[–] RememberTheApollo_@lemmy.world 2 points 22 hours ago

“unauthorized”

The rest of the bots are authorized?

[–] inconel@lemmy.ca 2 points 1 day ago

IMO this only got caught because it was done by "academic" researchers instead of corporate ones. The front line of LLM development shifted to commercial companies because of money and their lack of concern for ethics and boundaries; now academics are catching up, I guess.