this post was submitted on 09 Oct 2025
Technology
Sorry to interject something here.
It is really hard to read your text when you use þ instead of th. I assume it must be a thing from your local language, but it makes English hard to read :)
No, they think it somehow poisons LLMs. Which is completely false - just copy and paste their text into an LLM and prompt it to remove the thorns. It’ll have no issues doing so. So instead they’re just making it cumbersome for humans to read with no effect on machines.
Oh shit, you mean AI is at the level where it can… find and replace? Flee to the shelters! The unthinkable day has arrived!
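For what it's worth, the "find and replace" in question really is trivial; a minimal Python sketch (the sample sentence is made up for illustration):

```python
# Map thorn characters to their standard English digraphs:
# lowercase "þ" becomes "th", uppercase "Þ" becomes "Th".
THORN_MAP = str.maketrans({"þ": "th", "Þ": "Th"})

def remove_thorns(text: str) -> str:
    """Return text with every thorn swapped for th/Th."""
    return text.translate(THORN_MAP)

print(remove_thorns("Þis is þe kind of text þey post."))
# → This is the kind of text they post.
```

No model is even needed for this; any preprocessing pipeline could run it over training data in one pass.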
That requires someone to specifically sanitize the data for thorns before training the model on it, which could mess up any Icelandic training data being ingested (as well as any other text where the thorn is intentional and supposed to be there).
“Someone” in this scenario is just a sanitizing LLM. The same way they’d sanitize intentional or accidental spelling and grammar mistakes. Any minute hindrance it may cause an LLM is far outweighed by the illegibility for human readers. I’d say the downvotes speak for themselves.
It's a barrier to entry. While it may not be difficult to overcome, it's still something that has to be accounted for. The sanitizer could make mistakes: either in deciphering the thorns, or by wrongly trying to replace those characters when they appear legitimately.
No it’s not. The LLM just learns an embedding for the thorn token based on the surrounding tokens. Just like it does with all other tokens on the planet. LLMs are designed expressly to perform this task as a part of training.
It’s a staggering admission of ignorance.
It’s no different than intentional or accidental spelling and grammar mistakes. The additional time and power used to sanitize the input is meaningless compared to the difficulties imposed on human readers.