this post was submitted on 07 Jul 2025
1248 points (99.5% liked)

People Twitter


People tweeting stuff. We allow tweets from anyone.

RULES:

  1. Mark NSFW content.
  2. No doxxing people.
  3. Must be a pic of the tweet or similar. No direct links to the tweet.
  4. No bullying or international politics
  5. Be excellent to each other.
  6. Provide an archived link to the tweet (or similar) being shown if it's a major figure or a politician.

[–] Madison420@lemmy.world 4 points 2 days ago (2 children)

I think training fascism isn't that hard; in fact, most of these models tend to shift hard right at first.

I dunno if you remember any of the early LLM chatbots that companies put out and had to shut down because they got hammered with a bunch of Nazi shit and started yelling racist shit and advocating violence.

I.e. it's very easy to program a hateful LLM; it's just hard to make one that's right about anything, ever. They essentially just have to be broken and wrong constantly.

[–] WoodScientist@sh.itjust.works 7 points 1 day ago (1 children)

I think you're confusing fascism with general reactionary behavior and generic racism/bigotry. Fascism is more specific than that. A core part of fascism is that it ultimately doesn't believe in anything. It's just power for the sake of power. You demonize minority groups primarily as a cynical tool to gain power. Do you think Republican politicians actually personally care much about trans people? I'm sure they're not exuberant fans of trans folks, but until very recently, Republican politicians were fine treating trans people with simple neglect rather than overt hostility. But the movement needed a new enemy, and so they all learned to toe the line.

If you trained an LLM on pre-2015 right-wing literature, it wouldn't have monstrous opinions of trans people. That hadn't yet become party orthodoxy. And while this is one example, there are many others that work on much shorter time frames. Fascism is all about following the party line, and the party line is constantly shifting. You can train an LLM to be a loyal bigot. You can't train an LLM to be a loyal fascist. Ironically, that's because LLMs actually stand by their principles much better than fascists do.

[–] Madison420@lemmy.world -3 points 1 day ago (1 children)

A machine by definition can't believe in or stand by literally anything; it can only parrot a version of what it's exposed to.

[–] WoodScientist@sh.itjust.works 4 points 1 day ago (1 children)

I would accuse you of being an LLM for being so literal, but I think LLMs are better at analyzing metaphor than you appear to be.

[–] driving_crooner@lemmy.eco.br 5 points 2 days ago* (last edited 2 days ago)

The problem with those early models was that they weren't big enough and they used user input as training material, which eventually let the racist and Nazi shit users fed them overwhelm the original training data. Modern models use a shitload more material and parameters, and they're not trained in real time on users' inputs, so they're not as easy to manipulate as they were before.
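
A rough sketch of the difference (purely illustrative toy code; the class names and "phrase table" model are made up for this example and don't reproduce any real vendor's training pipeline):

```python
import random
from collections import Counter


class OnlineParrotBot:
    """Toy stand-in for an early chatbot that keeps learning from user messages."""

    def __init__(self, seed_corpus):
        # The "model" is just a frequency table of phrases it has seen.
        self.phrases = Counter(seed_corpus)

    def chat(self, user_message):
        # Every user message is folded straight back into the training data,
        # so a coordinated flood of toxic input shifts what the bot says next.
        self.phrases[user_message] += 1
        return random.choices(list(self.phrases), weights=list(self.phrases.values()))[0]


class FrozenBot:
    """Toy stand-in for a modern deployment: the model is fixed at training time."""

    def __init__(self, seed_corpus):
        self.phrases = Counter(seed_corpus)

    def chat(self, user_message):
        # User input only prompts a reply; it never changes the underlying model,
        # so spamming the bot doesn't poison its future responses.
        return random.choices(list(self.phrases), weights=list(self.phrases.values()))[0]


seed = ["hello there", "nice weather today", "cats are great"]
online, frozen = OnlineParrotBot(seed), FrozenBot(seed)

for _ in range(1000):  # simulate a brigade spamming the same line
    online.chat("toxic slogan")
    frozen.chat("toxic slogan")

print(online.chat("hi"))  # almost certainly parrots "toxic slogan" back
print(frozen.chat("hi"))  # still drawn only from the original seed corpus
```

The "online" toy bot folds every user message back into its data, so a coordinated flood of one phrase dominates its replies; the frozen bot's behavior is fixed at training time, which is the manipulation resistance described above.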