this post was submitted on 03 May 2025
745 points (97.7% liked)

top 50 comments
[–] MagicShel@lemmy.zip 223 points 23 hours ago (12 children)

There's no guarantee anyone on there (or here) is a real person or genuine. I'll bet this experiment has been conducted a dozen times or more but without the reveal at the end.

[–] cyrano@lemmy.dbzer0.com 134 points 23 hours ago (1 children)
[–] Hegar@fedia.io 24 points 21 hours ago (1 children)

With this picture, does that make you Cyrano de Purrgerac?

[–] RustyShackleford@literature.cafe 59 points 23 hours ago (3 children)

I've worked on quite a few DARPA projects, and I can almost 100% guarantee you are correct.

[–] Forester@pawb.social 23 points 21 hours ago (4 children)

Some of us have known the internet has been dead since 2014

[–] inlandempire@jlai.lu 30 points 23 hours ago

I'm sorry but as a language model trained by OpenAI, I feel very relevant to interact - on Lemmy - with other very real human beings

[–] dzsimbo@lemm.ee 20 points 21 hours ago (2 children)

There's no guarantee anyone on there (or here) is a real person or genuine.

I'm pretty sure this isn't a baked-in feature of meatspace either. I'm a fan of solipsism and Last Thursdayism personally. Also propaganda posters.

The CMV sub reeked of bot/troll/farmer activity, much like the amitheasshole threads. I guess it can be tough to recognize if you weren't there to see the transition from authentic posting to justice/rage bait.

We're still in the uncanny valley, but it seems we're climbing out of it. I'm already being 'tricked' left and right by near-perfect voice AI and tinkered-with image gen. What happens when robots pass the imitation game?

[–] dream_weasel@sh.itjust.works 11 points 19 hours ago (1 children)

I have it on good authority that everyone on Lemmy is a bot except you.

[–] LovingHippieCat@lemmy.world 120 points 23 hours ago* (last edited 23 hours ago) (3 children)

If anyone wants to know which subreddit, it's r/changemyview. I remember seeing a ton of similar posts about controversial opinions, and even now people are questioning Am I Overreacting and AITAH a lot. AI posts in those kinds of subs are seemingly pretty frequent. I'm not surprised to see it was part of a fucking experiment.

[–] jonne@infosec.pub 41 points 22 hours ago (1 children)

AI posts or just creative writing assignments.

[–] paraphrand@lemmy.world 36 points 21 hours ago (1 children)

Right. Subs like these are great fodder for people who just like to make shit up.

[–] eRac@lemmings.world 21 points 20 hours ago (1 children)

This was comments, not posts. They used one model to approximate a poster's demographics from their post history, then used an LLM to generate a response countering the posted view, tailored to those demographics.
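For anyone curious what that two-stage setup might look like in practice, here's a minimal sketch. The model names, prompts, and function names are illustrative assumptions of mine; the researchers' actual code hasn't been shared here.

```python
# Hypothetical sketch of the two-stage pipeline described above.
# Model names and prompts are assumptions, not the researchers' code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def infer_demographics(post_history: str) -> str:
    """Stage 1: approximate the poster's demographics from their post history."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {
                "role": "system",
                "content": "Estimate the author's age range, gender, and "
                           "political leaning from these posts. Reply in one line.",
            },
            {"role": "user", "content": post_history},
        ],
    )
    return response.choices[0].message.content


def counter_argument(posted_view: str, demographics: str) -> str:
    """Stage 2: generate a reply arguing against the view, tailored to the poster."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[
            {
                "role": "system",
                "content": f"The author appears to be: {demographics}. "
                           "Write a persuasive comment against their view.",
            },
            {"role": "user", "content": posted_view},
        ],
    )
    return response.choices[0].message.content
```

The point of the demographic step is that the second prompt can pick framings and examples the poster is predisposed to accept, which is exactly what the study found made the comments more persuasive.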

[–] TwinTitans@lemmy.world 95 points 21 hours ago* (last edited 16 hours ago) (5 children)

Like the 90s/2000s - don’t put personal information on the internet, don’t believe a damned thing on it either.

[–] mic_check_one_two@lemmy.dbzer0.com 62 points 20 hours ago (4 children)

Yeah, it’s amazing how quickly the “don’t trust anyone on the internet” mindset changed. The same boomers who were cautioning us against playing online games with friends are now the same ones sharing blatantly AI generated slop from strangers on Facebook as if it were gospel.

[–] Serinus@lemmy.world 35 points 19 hours ago

Back then it was just old people trying to groom 16 year olds. Now it's a nation's intelligence apparatus turning our citizens against each other and convincing them to destroy our country.

I wholeheartedly believe they're here, too. Their primary function here is to discourage the left from voting, primarily by focusing on the (very real) failures of the Democrats while the other party is extremely literally the Nazi party.

[–] Kolanaki@pawb.social 12 points 20 hours ago (3 children)

I feel like I learned more about the Internet and shit from Gen X people than from boomers. Though nearly everyone on my dad's side of the family, including my dad (a boomer), is tech literate, having worked in tech (my dad is a software engineer), and they still manage not to be dumb about tech... aside from thinking e-greeting cards are rad.

[–] ImplyingImplications@lemmy.ca 73 points 19 hours ago (3 children)

The ethics violation is definitely bad, but their results are also concerning. They claim their AI accounts were six times more likely to persuade people to change their minds than a real person was. AI has become an overpowered tool in the hands of propagandists.

[–] jbloggs777@discuss.tchncs.de 12 points 12 hours ago

It would be naive to think this isn't already in widespread use.

[–] TheObviousSolution@lemm.ee 64 points 15 hours ago (4 children)

The reason this is "The Worst Internet-Research Ethics Violation" is that it exposed what Cambridge Analytica's successors already realized and are actively exploiting. Just a few months ago it was literally Meta itself running AI accounts trying to pass as normal users, and not an f-ing peep. Why do people think they, the ones who enabled Cambridge Analytica, were trying this shit to begin with? The only difference now is that everyone doing it knows to do it as an "unaffiliated" anonymous third party.

[–] teamevil@lemmy.world 54 points 18 hours ago* (last edited 4 hours ago) (2 children)

Holy shit... This kind of shit is what ultimately broke Tim (very closely related to Ted) Kaczynski... He was part of MKULTRA research while a student at Harvard, but instead of drugging him, they had a debater, a prosecutor pretending to be a student, who would just argue against any point he made to see when he would break...

And that's how you get the Unabomber folks.

[–] Geetnerd@lemmy.world 16 points 13 hours ago (1 children)

I don't condone what he did in any way, but he was a genius, and they broke his mind.

Listen to The Last Podcast on the Left's episode on him.

A genuine tragedy.

[–] paraphrand@lemmy.world 46 points 21 hours ago* (last edited 21 hours ago) (3 children)

I'm sure there are individuals doing worse one-off shit, or people targeting individuals.

I’m sure Facebook has run multiple algorithm experiments that are worse.

I'm sure YouTube has caused worse real-world outcomes with the rabbit holes its algorithm used to promote. (And they have never found a way to fix the rabbit-hole problem without destroying the usefulness of the algorithm completely.)

The actions described in this article are upsetting and disappointing, but this has been going on for a long time. All in the name of making money.

[–] conicalscientist@lemmy.world 41 points 12 hours ago (7 children)

This is probably the most ethical you'll ever see it. There are definitely organizations committing far worse experiments.

Over the years I've noticed replies that are far too on the nose, probing just the right pressure points, as if they dropped exactly the right breadcrumbs for me to respond to. I've learned to disengage at that point. Either they scrolled through my profile, or, as we now know, it's a literal psy-op bot. Either way, it's not worth engaging with someone more invested than I am myself.

[–] skisnow@lemmy.ca 17 points 12 hours ago (1 children)

Yeah I was thinking exactly this.

It's easy to point to reasons why this study was unethical, but the ugly truth is that bad actors all over the world are performing trials exactly like this all the time - do we really want the only people who know how this kind of manipulation works to be state psyop agencies, SEO bros, and astroturfing agencies working for oil/arms/religion lobbyists?

Seems like it's much better long term to have all these tricks out in the open so we know what we're dealing with, because they're happening whether it gets published or not.

[–] justdoitlater@lemmy.world 40 points 6 hours ago (1 children)

Reddit: Ban the Russian/Chinese/Israeli/American bots? Nope. Ban the Swiss researchers that are trying to study useful things? Yep

[–] Ilandar@lemm.ee 27 points 5 hours ago (5 children)

Bots attempting to manipulate humans by impersonating trauma counselors or rape survivors isn't useful. It's dangerous.

[–] Knock_Knock_Lemmy_In@lemmy.world 39 points 12 hours ago (4 children)

The key result

When researchers asked the AI to personalize its arguments to a Redditor’s biographical details, including gender, age, and political leanings (inferred, courtesy of another AI model, through the Redditor’s post history), a surprising number of minds indeed appear to have been changed. Those personalized AI arguments received, on average, far higher scores in the subreddit’s point system than nearly all human commenters

[–] FauxLiving@lemmy.world 34 points 3 hours ago* (last edited 1 hour ago) (6 children)

This research is good, valuable and desperately needed. The uproar online is predictable and could possibly help bring attention to the issue of LLM-enabled bots manipulating social media.

This research isn't what you should get mad at. It's pretty common knowledge online that Reddit is dominated by bots: advertising bots, scam bots, political bots, etc.

Intelligence services of nation states and political actors seeking power are all running these kinds of influence operations on social media, using bot posters to dominate the conversations about the topics they care about. This is pretty common knowledge in social media spaces. Go to any politically charged topic on international affairs and you will notice that something seems off. It's hard to say exactly what it is, but if you've been active online for a long time you can recognize that something is wrong.

We've seen how effective this manipulation is at changing public views (see: Cambridge Analytica, or if you don't know what that is, watch the documentary 'The Great Hack'), so it is only natural to wonder how much more effective online manipulation is now that bad actors can use LLMs.

This study is by a group of scientists who are trying to figure that out. The only difference is that they're publishing their findings in order to inform the public, whereas Russia isn't doing us the same favors.

Naturally, it is in the interest of everyone using LLMs to manipulate the online conversation that this kind of research is never done. Having this information public could lead to reforms, regulations, and effective counter-strategies. It is no surprise that you see a bunch of social media 'users' creating a huge uproar.


Most of you who don't work in tech spaces may not understand just how easy and cheap it is to set something like this up. For a few million dollars and a small staff, you could essentially dominate a large multi-million-subscriber subreddit with whatever opinion you wanted to push: bots generate variations of that opinion, the bot accounts (guided by humans) downvote everyone else out of the conversation, and moderation power can be seized, stolen, or bought to further control the conversation.

Or wholly fabricated subreddits can be created. A few months prior to the US election, several new subreddits were created and catapulted to popularity despite being just a bunch of bots reposting news. Those subreddits now sit high in the /all and /popular feeds, despite their moderators and a huge portion of their users being bots.
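To put rough numbers on "easy and cheap", here's a back-of-envelope sketch. Every figure is an illustrative assumption of mine, not a quote from any provider or from the study:

```python
# Back-of-envelope cost of flooding a subreddit with LLM-generated comments.
# All volumes and prices below are illustrative assumptions.
comments_per_day = 10_000        # assumed posting volume for the operation
tokens_per_comment = 400         # assumed prompt + completion tokens
usd_per_million_tokens = 5.00    # assumed blended API price

daily_tokens = comments_per_day * tokens_per_comment
daily_cost = daily_tokens / 1_000_000 * usd_per_million_tokens
annual_cost = daily_cost * 365

print(f"~${daily_cost:,.2f}/day, ~${annual_cost:,.0f}/year in API spend")
# ~$20.00/day, ~$7,300/year: text generation is a rounding error
```

Under those assumptions the API bill is trivial; the "few million dollars" goes to staff, aged accounts, and moderation capture, not to generating the text itself.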

We desperately need this kind of study to keep from drowning in a sea of fake people who will tirelessly work to convince you of all manner of nonsense.

[–] nodiratime@lemmy.world 32 points 7 hours ago

Reddit’s chief legal officer, Ben Lee, wrote that the company intends to “ensure that the researchers are held accountable for their misdeeds.”

What are they going to do? Ban the last humans on there having a differing opinion?

Next step for those fucks is verification that you are an AI when signing up.

[–] flango@lemmy.eco.br 25 points 10 hours ago

[...] I read through dozens of the AI comments, and although they weren’t all brilliant, most of them seemed reasonable and genuine enough. They made a lot of good points, and I found myself nodding along more than once. As the Zurich researchers warn, without more robust detection tools, AI bots might “seamlessly blend into online communities”—that is, assuming they haven’t already.

[–] VampirePenguin@midwest.social 20 points 3 hours ago (7 children)

AI is a fucking curse upon humanity. The tiny morsels of good it can do are FAR outweighed by the destruction it causes. Fuck anyone involved with perpetuating this nightmare.

[–] MTK@lemmy.world 19 points 7 hours ago (1 children)

Lol, coming from the people who sold all of your data with no consent for AI research

[–] loics2@lemm.ee 13 points 7 hours ago

The quote isn't coming from Reddit but from a professor at the Georgia Institute of Technology.

[–] VintageGenious@sh.itjust.works 18 points 13 hours ago (1 children)

Using mainstream social media is literally agreeing to be constantly used as an advertisement optimization research subject

[–] deathbird@mander.xyz 18 points 4 hours ago (1 children)

Personally I love how they found the AI could be very persuasive by lying.

[–] acosmichippo@lemmy.world 18 points 3 hours ago

Why wouldn't that be the case? All the most persuasive humans are liars too. Fantasy sells better than the truth.

[–] perestroika@lemm.ee 14 points 11 hours ago* (last edited 10 hours ago) (1 children)

The University of Zurich’s ethics board—which can offer researchers advice but, according to the university, lacks the power to reject studies that fall short of its standards—told the researchers before they began posting that “the participants should be informed as much as possible,” according to the university statement I received. But the researchers seem to believe that doing so would have ruined the experiment. “To ethically test LLMs’ persuasive power in realistic scenarios, an unaware setting was necessary,” because it more realistically mimics how people would respond to unidentified bad actors in real-world settings, the researchers wrote in one of their Reddit comments.

This seems to be the kind of a situation where, if the researchers truly believe their study is necessary, they have to:

  • accept that negative publicity will result
  • accept that people may stop cooperating with them on this work
  • accept that their reputation will suffer as a result
  • ensure that they won't do anything illegal

After that, if they still feel their study is necessary, maybe they should run it and publish the results.

If then some eager redditors start sending death threats, that's unfortunate. I would catalogue them, but not report them anywhere unless something actually happens.

As for the question of whether a tailor-made response considering someone's background can sway opinions better - that's been obvious through ages of diplomacy. (If you approach an influential person with a weighty proposal, it has always been worthwhile to know their background, think of several ways of how they might perceive the proposal, and advance your explanation in a way that relates better with their viewpoint.)

AI bots which take into consideration a person's background will - if implemented right - indeed be more powerful at swaying opinions.

As to whether secrecy was really needed - the article points to other studies which apparently managed to prove the persuasive capability of AI bots without deception and secrecy. So maybe it wasn't needed after all.

[–] TronBronson@lemmy.world 13 points 8 hours ago

Wow you mean reddit is banning real users and replacing them with bots?????

[–] thedruid@lemmy.world 13 points 9 hours ago

Fucking AI and its apologist script kiddies. Worse than fucking Facebook in its disinformation.

[–] ArbitraryValue@sh.itjust.works 12 points 20 hours ago (1 children)

ChangeMyView seems like the sort of place where AI posts can actually be appropriate. If the goal is to hear arguments for an opposing point of view, the AI contributes more than a human would, provided it can in fact generate more convincing arguments.

[–] ChairmanMeow@programming.dev 26 points 19 hours ago (8 children)

It could, if it announced itself as such.

Instead it pretended to be a rape victim and offered "its own experience".

[–] Reverendender@sh.itjust.works 12 points 21 hours ago (2 children)

I was unaware that "Internet Ethics" was a thing that existed in this multiverse

[–] peoplebeproblems@midwest.social 13 points 19 hours ago

No - it's research ethics, as in you get informed consent. It just happens to involve the Internet.

If the research records any sort of human behavior, all participants must know about it ahead of time and agree to participate.

This is a blanket attempt to study human behavior without an IRB, without regulators, and without anyone other than tech bros involved.

[–] Ensign_Crab@lemmy.world 12 points 10 hours ago

Imagine what the people doing this professionally do, since they know they won't face the scrutiny of publication.
