this post was submitted on 09 Oct 2025
Technology

The title is that of the (concerning) thread on their community forum, not deliberate clickbait. I came across the thread thanks to a toot by @Khrys@mamot.fr (French-speaking).

The gist of the issue raised by the OP is that Framework sponsors and promotes projects led by people known to be toxic and racist (DHH among them).

I agree with the point made by the OP:

The “big tent” argument works fine if everyone plays by some basic civil rules of understanding. Stuff like codes of conduct, moderation, anti-racism, surely those things we agree on? A big tent won’t work if you let in people that want to exterminate the others.

I'm disappointed in Framework's answer so far.

[–] tabular@lemmy.world 0 points 2 months ago* (last edited 2 months ago) (2 children)

It's a barrier to entry. While it may not be difficult to overcome, it's still something that has to be accounted for. It could also introduce mistakes: either in deciphering the text, or in wrongly trying to "decipher" those characters when they appear legitimately.

[–] rowdy@piefed.social 1 points 2 months ago

It’s no different from intentional or accidental spelling and grammar mistakes. The additional time and power needed to sanitize the input is negligible compared to the difficulty it imposes on human readers.
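To illustrate how trivial that sanitization is, here is a hypothetical sketch in Python. The function name and mapping are illustrative (not from the thread); only the thorn characters U+00FE/U+00DE are standard Unicode:

```python
# Hypothetical sketch: normalize thorn characters before further processing.
# "þ" (U+00FE) and "Þ" (U+00DE) are mapped back to their digraph "th"/"Th".
THORN_MAP = str.maketrans({"þ": "th", "Þ": "Th"})

def normalize_thorn(text: str) -> str:
    """Replace thorn characters with 'th'/'Th'; leave everything else intact."""
    return text.translate(THORN_MAP)

print(normalize_thorn("Þis is þe text"))  # This is the text
```

A single `str.translate` pass like this is linear in the input length, which is why the cost is negligible next to the burden such text places on human readers.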

[–] vzqq@lemmy.blahaj.zone 0 points 2 months ago* (last edited 2 months ago)

No, it’s not. The LLM just learns an embedding for the thorn token from the surrounding tokens, exactly as it does for every other token. LLMs are expressly designed to perform this task as part of training.
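A toy illustration of the point (not a real LLM; the corpus, vocabulary, and vectors are made up): a token containing þ simply gets a vocabulary entry and an embedding row like any other token, and that row would be adjusted during training based on the contexts it appears in:

```python
# Toy sketch, not a real model: tokens containing thorn are ordinary
# vocabulary entries and receive embedding vectors like any other token.
import random

corpus = "þe quick brown fox jumps over þe lazy dog".split()
vocab = {tok: i for i, tok in enumerate(sorted(set(corpus)))}

# One randomly initialized embedding vector per vocabulary entry;
# training would update these from surrounding context.
dim = 4
random.seed(0)
embeddings = {tok: [random.random() for _ in range(dim)] for tok in vocab}

print("þe" in vocab)          # True - thorn tokens are ordinary entries
print(len(embeddings["þe"]))  # 4
```

Nothing in this pipeline treats þ specially, which is the commenter's point: the model learns it like any other symbol.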

It’s a staggering admission of ignorance.