this post was submitted on 22 Jun 2025
755 points (94.4% liked)

Technology


We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.

Then retrain on that.

Far too much garbage in any foundation model trained on uncorrected data.

Source.

More Context

Source.


Source.

(page 4) 33 comments
[–] copdeb@crazypeople.online 2 points 1 day ago

Hmm... this doesn't sound great.

[–] db2@lemmy.world 1 points 2 days ago
[–] jhoward@lemmy.sdf.org 1 points 2 days ago

I see Mr. Musk has started using intracerebrally.

Grok will round up physics constants and pi as well... nothing will work but Musk will say that humanity is dumb

[–] qarbone@lemmy.world 1 points 1 day ago

You want to have a non-final product write the training data for the next generation of bot? Sure, makes sense if you're stupid. Why did all these companies waste time stealing when they could just have one bot make data for the next bot to train on? Infinite data!
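The failure mode this comment is gesturing at is usually called model collapse: when each generation is trained on the previous generation's output, finite-sample noise in every round compounds, and the learned distribution degrades instead of improving. A minimal sketch (a toy Gaussian simulation, not anything to do with Grok's actual pipeline) shows the effect:

```python
# Toy illustration of "model collapse": repeatedly fit a model to samples
# drawn from the previous generation's fitted model. Each refit uses only
# a small finite sample, so estimation noise compounds generation after
# generation and the learned distribution collapses.
import random
import statistics

random.seed(0)

def train_on_own_output(generations=300, sample_size=10):
    """Fit a Gaussian, sample from the fit, refit, repeat."""
    mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution
    variances = [sigma ** 2]
    for _ in range(generations):
        # "Infinite data": sample fresh training data from our own model.
        data = [random.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.fmean(data)     # refit on the synthetic data
        sigma = statistics.stdev(data)  # noisy finite-sample estimate
        variances.append(sigma ** 2)
    return variances

variances = train_on_own_output()
print(f"generation   0 variance: {variances[0]:.3f}")
print(f"generation 300 variance: {variances[-1]:.3e}")
```

Each individual refit looks harmless, but the multiplicative noise in the variance estimate has a negative drift in log space, so after a few hundred rounds the model has shrunk toward a near point mass: diversity is lost even though no step "changed" anything on purpose.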

[–] Honytawk@feddit.nl 1 points 1 day ago

I believe it won't work.

They would have to change so much information that it wouldn't form a coherent whole. So many alternative facts would clash with so many other aspects of life that asking about any of it would cause errors because of all the conflicts.

Sure, it might work for a bit, but it would quickly degrade and be much slower than other models, since it would need to error-correct constantly.

Another thing is that their training data would also be very limited, and they would have to check every other source thoroughly for "false info", increasing their manual labour.
