this post was submitted on 03 May 2025
881 points (97.7% liked)

[–] Donkter@lemmy.world 42 points 16 hours ago (3 children)

This is a really interesting paragraph to me because I definitely think these results shouldn't be published or we'll only get more of these "whoopsie" experiments.

At the same time, though, I think it is desperately important to research the ability of LLMs to persuade people sooner rather than later, before they become even more persuasive and natural-sounding. The article mentions that in studies, humans already have trouble telling the difference between AI-written sentences and human ones.

[–] FourWaveforms@lemm.ee 10 points 12 hours ago

This is certainly not the first time this has happened. There's nothing to stop people from asking ChatGPT et al. to help them argue. I've done it myself, though not by letting it argue for me; rather, I asked it to find holes in my reasoning and in that of my opponent. I never just pasted what it said.

I also had a guy post a ChatGPT response at me (he said that's what it was) and although it had little to do with the point I was making, I reasoned that people must surely be doing this thousands of times a day and just not saying it's AI.

To say nothing of state actors, "think tanks," influence-for-hire operations, etc.

The description of the research in the article already conveys enough to replicate the experiment, at least approximately. Can anyone doubt this is commonplace, or that it has been for the last year or so?

[–] Dasus@lemmy.world 2 points 8 hours ago (1 children)

I'm pretty sure that only applies because a majority of people are morons. There's a vast gap between the most intelligent 2% (1 in 50) and the average intelligence.

Also, please render digital text as white on black instead of the other way around.

[–] angrystego@lemmy.world 4 points 5 hours ago

I agree, but that doesn't change anything, right? Even if you are in the most intelligent 2% and you're somehow immune, you still have to live with the rest, who do get influenced by AI. And they vote. So it's never just a "them" problem.

[–] Dasus@lemmy.world 1 points 8 hours ago

black on white, ew