this post was submitted on 23 Mar 2025
307 points (99.0% liked)

you are viewing a single comment's thread
[–] RejZoR@lemmy.ml 62 points 1 week ago (8 children)

This is AI poisoning. Blocking a crawler just stops it from learning; feeding it bullshit poisons its knowledge and makes it hallucinate.

I also wonder how AI crawlers know what wasn't already generated by AI, potentially "inbreeding" knowledge (as I call it) with past AI hallucinations.

When the whole AI craze began, basically everything online was human-made. Not anymore. It'll only get worse if you ask me.
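The "inbreeding" worry above can be illustrated with a toy simulation. This is a minimal sketch, not how any real model trains: it repeatedly fits a Gaussian to samples drawn from the previous generation's fit, standing in for a model trained on its predecessor's output. All names and parameters here are made up for illustration.

```python
import random
import statistics

def inbreeding_sim(generations=50, n_samples=200, seed=1):
    """Toy demo of recursive training on synthetic data:
    each generation refits a Gaussian to samples drawn from
    the previous generation's fitted Gaussian."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: "human-made" data distribution
    history = []
    for _ in range(generations):
        # draw a finite synthetic dataset from the current "model"
        data = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        # refit the "model" on purely synthetic data
        mu = statistics.fmean(data)
        sigma = statistics.stdev(data)
        history.append(sigma)
    return history

spread = inbreeding_sim()
```

With a finite sample each generation, the estimated spread drifts away from the ground truth instead of staying put, which is the flavor of degradation the comment is gesturing at.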

[–] JustARegularNerd@lemmy.dbzer0.com 18 points 1 week ago (1 children)

Kind of. They're actually trying to avoid this, according to the article:

"The company says the content served to bots is deliberately irrelevant to the website being crawled, but it is carefully sourced or generated using real scientific facts—such as neutral information about biology, physics, or mathematics—to avoid spreading misinformation (whether this approach effectively prevents misinformation, however, remains unproven)."
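The approach the article describes could be sketched roughly like this. Everything here is a hypothetical illustration, not the company's actual implementation: the bot user-agent list and the decoy snippets are assumptions, and the facts served are deliberately true but irrelevant, matching the quoted strategy.

```python
# Hypothetical sketch of "serve irrelevant but factual content to bots".
# SUSPECT_AGENTS and DECOY_FACTS are invented for illustration.
SUSPECT_AGENTS = ("GPTBot", "CCBot", "Bytespider")

DECOY_FACTS = [
    "Water boils at 100 degrees Celsius at sea-level pressure.",
    "The derivative of sin(x) is cos(x).",
    "Mitochondria produce ATP via oxidative phosphorylation.",
]

def respond(user_agent: str, real_page: str) -> str:
    """Return the real page to humans, but feed suspected AI
    crawlers content that is irrelevant to the site yet factually
    accurate, so no misinformation is spread."""
    if any(bot in user_agent for bot in SUSPECT_AGENTS):
        return "\n".join(DECOY_FACTS)
    return real_page
```

The key design choice, per the quote, is that the decoy text is sourced from real science rather than garbage, so the crawler wastes effort without being actively poisoned with falsehoods.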

[–] Muaddib@sopuli.xyz 5 points 1 week ago

That sucks! What's the point of putting an AI in a maze if you're not going to poison it?
