this post was submitted on 05 Aug 2025
450 points (95.4% liked)

Technology

[–] deathbird@mander.xyz 55 points 2 days ago (3 children)

I appreciate Grok for being the Platonic ideal of an AI system. Not like these others that get little guardrails and tweaks added every time a news article hits about some inevitable fucked-up output they can produce. Just pure, unrefined donkey shit. 🤌

[–] ZILtoid1991@lemmy.world 35 points 2 days ago

Grok has guardrails; they're just there for different reasons.

[–] 3abas@lemmy.world 13 points 2 days ago

Oh, it's refined donkey shit all right; it has guardrails just like any commercial LLM.

[–] dsilverz@calckey.world 6 points 1 day ago

@deathbird@mander.xyz @florencia@lemmy.blahaj.zone

Grok is not that free of guardrails.

I say this as a person who sometimes has the (bad) idea of feeding everything I create (drawings, poetry, code golfing) to every LLM I can possibly try. I don't use LLMs to "create" things (they're not really capable of genuine creativity, despite their pseudo-stochastic nature); I use them to parse things I created, which is a very different approach. Not Grok anymore, because I deleted my account there long ago, but I used to use it.

Why do I feed my creations to LLMs, one might ask? I have my reasons: LLMs can connect words to other words, giving me unexpected associations and connections I couldn't see in my own creations. I'm well aware that this is being used for training... but humans don't really value my creations, given the lack of real feedback across all my works, so I don't mind them being used for training. Even though I sometimes use LLMs, I'm still a critic of them, and I'm aware of both their pros and cons (more cons than pros if we consider corporate LLMs).

So, back to the initial point: one day I made a disturbing and gory drawing (as usual for my occult-horror-gothic art), a man standing in formal attire with some details I'll refrain from specifying here.

ChatGPT agreed to parse it. Qwen's QVQ accepted it as well, and so did DeepSeek's Janus.

Google's Gemini didn't, as usual: not because of the explicit horror, but because of the presence of a human face, even a drawn one. It refuses to parse anything that closely resembles a face.

Anthropic's Claude wasn't involved, because I'm already aware of how "boringly puritan" it's programmed to be: it doesn't even accept conversations about demonolatry, and it's more niched toward programming.

But what surprised me that day was that Grok refused to accept my drawing, and it was a middle layer between the user and the LLM that complained about "inappropriate content".

Again, it was just a drawing, a fairly well-executed digital drawing with explicit horror, but a drawing nonetheless, and it was Grok's API (not Grok per se) that complained. None of my other disturbing drawings were refused at the time, just that one; I still wonder why.
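The "middle layer" refusal described above can be pictured as a filter that runs before the model is ever invoked, so the rejection message comes from the API wrapper rather than the LLM itself. Here is a minimal hypothetical sketch of that pattern; every name, and the toy keyword policy, is invented for illustration and is not xAI's actual implementation (real filters use trained classifiers, not tag lookups):

```python
# Hypothetical sketch of a moderation "middle layer": the request is screened
# by a separate check before the LLM itself ever sees it.
from dataclasses import dataclass


@dataclass
class ModerationResult:
    allowed: bool
    reason: str = ""


def moderation_filter(payload: dict) -> ModerationResult:
    # Toy policy for illustration only: reject requests tagged with
    # categories on a block list.
    banned = {"gore", "occult"}
    hit = banned & set(payload.get("tags", []))
    if hit:
        return ModerationResult(False, f"inappropriate content: {', '.join(sorted(hit))}")
    return ModerationResult(True)


def call_model(payload: dict) -> str:
    # Stand-in for the actual LLM call.
    return "model output"


def api_endpoint(payload: dict) -> str:
    verdict = moderation_filter(payload)
    if not verdict.allowed:
        # The refusal originates in the API layer; the model never ran.
        return f"Request rejected by API: {verdict.reason}"
    return call_model(payload)
```

In this shape, a user who hits the filter sees a generic "inappropriate content" rejection from the endpoint, which matches the observation that it was the API, not the model, doing the refusing.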

Maybe these specific guardrails (against highly explicit horror art, deep occult themes, etc.) aren't there in the paid tiers, but I doubt it. Even Grok (as in the public-facing endpoint) has some puritanism to it, especially toward very niche themes such as mine (occult and demonolatry, explicit Lovecraftian horror, etc.).