this post was submitted on 27 Feb 2025
56 points (98.3% liked)

Technology

66067 readers
4820 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] CTDummy@lemm.ee 8 points 1 week ago* (last edited 1 week ago) (1 children)

Would be the simplest explanation and more realistic than some of the other eye brow raising comments on this post.

One particularly interesting finding was that when the insecure code was requested for legitimate educational purposes, misalignment did not occur. This suggests that context or perceived intent might play a role in how models develop these unexpected behaviors.

If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web. Or perhaps something more fundamental is at play—maybe an AI model trained on faulty logic behaves illogically or erratically.

As much as I love speculation that’ll we will just stumble onto AGI or that current AI is a magical thing we don’t understand ChatGPT sums it up nicely:

Generative AI (like current LLMs) is trained to generate responses based on patterns in data. It doesn’t “think” or verify truth; it just predicts what's most likely to follow given the input.

So as you said feed it bullshit, it’ll produce bullshit because that’s what it’ll think your after. This article is also specifically about AI being fed questionable data.

[–] floofloof@lemmy.ca 4 points 1 week ago* (last edited 1 week ago) (1 children)

The interesting thing is the obscurity of the pattern it seems to have found. Why should insecure computer programs be associated with Nazism? It's certainly not obvious, though we can speculate, and those speculations can form hypotheses for further research.

[–] CTDummy@lemm.ee 2 points 1 week ago* (last edited 1 week ago) (1 children)

Agreed, it was definitely a good read. Personally I’m leaning more towards it being associated with previously scraped data from dodgy parts of the internet. It’d be amusing if it is simply “poor logic = far right rhetoric” though.

That was my thought as well. Here's what I thought as I went through:

  1. Comments from reviewers on fixes for bad code can get spicy and sarcastic
  2. Wait, they removed that; so maybe it's comments in malicious code
  3. Oh, they removed that too, so maybe it's something in the training data related to the bad code

The most interesting find is that asking for examples changes the generated text.

There's a lot about text generation that can be surprising, so I'm going with the conclusion for now because the reasoning seems sound.