this post was submitted on 26 Mar 2025

Science Memes


Welcome to c/science_memes @ Mander.xyz!

A place for majestic STEMLORD peacocking, as well as memes about the realities of working in a lab.



Rules

  1. Don't throw mud. Behave like an intellectual and remember the human.
  2. Keep it rooted (on topic).
  3. No spam.
  4. Infographics welcome, get schooled.

This is a science community. We use the Dawkins definition of meme.



Research Committee

Other Mander Communities

Science and Research

Biology and Life Sciences

Physical Sciences

Humanities and Social Sciences

Practical and Applied Sciences

Memes

Miscellaneous

founded 2 years ago
MODERATORS
 
you are viewing a single comment's thread
[–] SkunkWorkz@lemmy.world 0 points 1 week ago (16 children)

Scientists who write their papers with an LLM should get a lifetime ban from publishing papers.

[–] ameancow@lemmy.world 0 points 1 week ago* (last edited 1 week ago) (12 children)

I played around with ChatGPT to see if it could actually improve my writing. (I've been writing for decades.)

I was immediately impressed by how "personable" these things are: they interpret your writing and pick up on subtle things you're trying to convey, so that part was interesting. I was also impressed by how good it is at improving grammar and at "joining" passages, themes, and plot points. It has the advantage of seeing the entire piece at once and can make broad edits to the story's flow, which could potentially save a writer days or weeks of rewriting.

Now that the good is out of the way, I also tried to see how well it could just write, using my prompts, my style, and scenes I arranged for it to describe. And I can safely say that we have created the ultimate "Averaging Machine."

By design, LLMs always find the most probable answer to a query, so this makes sense. It has consumed and distilled vast sums of human knowledge and writing, but it doesn't use that material to synthesize, find inspiration, or do what humans do: take existing ideas and build upon them. No, it always finds the most average path. And as a result, the writing is supremely average. It's so plain and unexciting to read that it's actually impressive.
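The "most probable path" point can be sketched with a toy example of greedy decoding. The distribution below is invented for illustration (not output from any real model), but it shows the mechanism: if you always take the single most likely next token, the evocative low-probability options never get picked.

```python
# Toy illustration of greedy decoding. Real samplers add temperature
# or top-p sampling, which widens the choices but still centers on
# the distribution's most probable mass.

def greedy_pick(dist):
    """Return the most probable next token from a {token: prob} dict."""
    return max(dist, key=dist.get)

# Hypothetical distribution after the prefix "The sky was ..."
next_token = {
    "blue": 0.62,      # the safe, average choice
    "grey": 0.27,
    "bruised": 0.07,   # more evocative, but less probable
    "on fire": 0.04,
}

print(greedy_pick(next_token))  # -> blue, every single time
```

Run it as many times as you like; "bruised" never wins.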

All of this is fine; it's still something new we didn't have a few years ago. Neat, right? Well, my worry is that as more and more people use this, more and more people will be exposed to this "averaging" tool and it will influence their writing, and we're going to see a whole generation of writers who produce the most cardboard, stilted, generic work we've ever seen.

And I am saying this from experience. I was there when people first started using the internet to roleplay, making characters and scenes and free-form writing in groups. It was wildly fun, but most of the people involved were not writers; many discovered literature for the first time there. It's what led to a sharp increase in book reading, and suddenly giant bookstores like Barnes & Noble were popping up on every corner. They were kids just doing their best, but that charming, terrible narration became a social standard. It's why there are so many atrocious dialogue scenes in shows and movies lately; I can draw a straight line back to where kids learned to write in the '90s. And what's coming next is going to harm human creativity and inspiration in ways I can't even predict.

[–] zibwel@feddit.org 0 points 1 week ago (2 children)

I do agree with your "averaging machine" argument. It makes a lot of sense given how LLMs are trained as essentially massive statistical models.

Your conjecture that bad writing stems from roleplaying on the early internet is a bit more... speculative. Lacking any numbers comparing writing trends over time, I don't think one can draw such a conclusion.

[–] ameancow@lemmy.world 0 points 4 days ago* (last edited 4 days ago)

Large Discord groups and forums are still the proving ground where new, young writers get started crafting their prose, and I have watched that scene for over 30 years. It has changed, dramatically, and I would be remiss to claim I have no idea where the change came from when I've seen the patterns firsthand.

Yes, it's entirely anecdotal, and I have no intention of making a scientific argument, but I'm also not the only one worried about the influence of LLMs on creators. It's already butchering the traditional artistic world, for the very basic reason that a 14-year-old Mindy McCallister with a crush on werewolves would, at one time, have taught herself to draw terrible, atrocious furry art on lined notebook paper, complete with hearts and a self-insert of herself in a wedding dress. That's how we all get started (not specifically with werewolf romance, but you get the idea) with art, drawing, and digital art, before learning to refine our craft and get better and better at self-expression. But we now have a shortcut where you can skip ALL of that process and have your snarling lupine BF generated for you within seconds. Setting aside the controversy over whether it's real art or not, what it's doing is taking the formative process away from millions of potential artists.

[–] Schadrach@lemmy.sdf.org 0 points 6 days ago

I do agree with your “averaging machine” argument. It makes a lot of sense given how LLMs are trained as essentially massive statistical models.

For image generation models, I think a good analogy is to say it's not drawing but sculpting: it starts with a big block of white noise and takes away all the parts that don't look like the prompt. Iterate a few times until the result is mostly stable (that is, it can't make the input look much more like the prompt than it already does). That's why you can get radically different images from the same prompt: the starting block of white noise is different, so which parts of that noise look most prompt-like, and therefore get emphasized, will be different.
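The sculpting analogy can be sketched in a few lines. Everything here is invented for illustration: the `denoise` function, the target vector standing in for "the prompt," and the fixed step rule. Real diffusion models instead use a trained network to predict and subtract the noise at each step.

```python
import random

# Toy sketch of iterative denoising: start from random noise and
# repeatedly pull each value a fraction of the way toward a fixed
# target that stands in for "what the prompt looks like."

def denoise(noise, target, steps=50, rate=0.2):
    """Nudge each element toward the target, a little per step."""
    x = list(noise)
    for _ in range(steps):
        x = [xi + rate * (ti - xi) for xi, ti in zip(x, target)]
    return x

random.seed(0)
target = [0.0, 1.0, 0.5]                       # stands in for the prompt
start_a = [random.uniform(-1, 1) for _ in target]
start_b = [random.uniform(-1, 1) for _ in target]

# Two different noise blocks converge on (nearly) the same target,
# but their intermediate states, and any early stop, will differ.
print(denoise(start_a, target))
print(denoise(start_b, target))
```

Stopping after only a few steps leaves more of the original noise in place, which is the toy analogue of different seeds giving different images.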
