this post was submitted on 21 Mar 2025
1382 points (99.4% liked)

Technology

[–] 4am@lemm.ee 295 points 1 day ago (2 children)

Imagine how much power is wasted on this unfortunate necessity.

Now imagine how much power will be wasted circumventing it.

Fucking clown world we live in

[–] Demdaru@lemmy.world 54 points 1 day ago (12 children)

On one hand, yes. On the other... imagine the frustration of management at the companies making and selling AI services. This is such a sweet thing to imagine.

[–] halfapage@lemmy.world 84 points 1 day ago (2 children)

My dude, they'll literally sell services to both sides of the market.

[–] AtomicHotSauce@lemmy.world 208 points 1 day ago (6 children)

That's just BattleBots with a different name.

[–] aviationeast@lemmy.world 57 points 1 day ago (1 children)
[–] IrateAnteater@sh.itjust.works 37 points 1 day ago (1 children)

Ok, I now need a screensaver that I can tie to a cloudflare instance that visualizes the generated "maze" and a bot's attempts to get out.

[–] x1gma@lemmy.world 13 points 1 day ago

You should probably just let an AI generate that.

[–] RelativeArea1@sh.itjust.works 157 points 1 day ago* (last edited 1 day ago) (4 children)

This is some fucking stupid situation: we finally got somewhat faster internet, and these bots messing with each other are hogging the bandwidth.

[–] melpomenesclevage@lemmy.dbzer0.com 49 points 1 day ago* (last edited 1 day ago) (18 children)

nothing can be improved while capitalism or authority exist; all improvement will be seized and used to oppress.

[–] morrowind@lemmy.ml 24 points 1 day ago (12 children)

How can authority not exist? That's staggeringly broad

[–] dual_sport_dork@lemmy.world 14 points 1 day ago* (last edited 1 day ago) (8 children)

Especially since the solution I cooked up for my site works just fine and took a lot less work. It is simply to identify the incoming requests from these damn bots -- which is not difficult, since they ignore all directives and sanity and try to slam your site with like 200+ requests per second, which makes 'em easy to spot -- and simply IP ban them. This is considerably simpler, and doesn't require an entire nuclear-plant-powered AI to combat the opposition's nuclear-plant-powered AI.

In fact, anybody who doesn't exhibit a sane crawl rate gets blocked from my site automatically. For a while, most of them were coming from Russian IP address zones for some reason. These days Amazon is the worst offender, I guess their Rufus AI or whatever the fuck it is tries to pester other retail sites to "learn" about products rather than sticking to its own domain.

Fuck 'em. Route those motherfuckers right to /dev/null.

[–] peoplebeproblems@midwest.social 112 points 1 day ago (1 children)

Not exactly how I expected the AI wars to go, but I guess since we're in a cyberpunk world, we take what we get

[–] rocket_dragon@lemmy.dbzer0.com 70 points 1 day ago (3 children)

Next step is an AI that detects AI labyrinth.

It gets trained on labyrinths generated by another AI.

So you have an AI generating labyrinths to train an AI to detect labyrinths which are generated by another AI so that your original AI crawler doesn't get lost.

It's gonna be AI all the way down.

[–] finitebanjo@lemmy.world 23 points 1 day ago (2 children)

All the while each AI costs more power than a million human beings to run, and the world burns down around us.

[–] LainTrain@lemmy.dbzer0.com 17 points 1 day ago (17 children)

The same way they justify cutting benefits for the disabled to balance budgets instead of taxing the rich or just not giving them bailouts, they will justify cutting power to you before cutting it to a data centre where 10 corporate AIs are all fighting each other, unless we as a people stand up and actually demand change.

[–] brucethemoose@lemmy.world 14 points 1 day ago* (last edited 1 day ago)

LLMs tend to be really bad at detecting AI generated content. I can’t imagine specialized models are much better. For the crawler, it’s also exponentially more expensive and more human work, and must be replicated for every crawler since they’re so freaking secretive.

I think the hosts win here.

[–] oldfart@lemm.ee 107 points 1 day ago (1 children)

So the web is a corporate war zone now and you can choose feudal protection or being attacked from all sides. What a time to be alive.

[–] theparadox@lemmy.world 14 points 1 day ago

There is also the corpo verified-ID route. In order to avoid the onslaught of AI bots and all that comes with them, you'll need to sacrifice freedom, anonymity, and privacy like a good little peasant to prove you aren't a bot... and so will everyone else. You'll likely be forced to deal with whatever AI bots are forced upon you while within the walls, but better an enemy you know, I guess?

[–] kandoh@reddthat.com 83 points 1 day ago (1 children)

Burning 29 acres of rainforest a day to do nothing

[–] digdilem@lemmy.ml 65 points 1 day ago (2 children)

Surprised at the level of negativity here. Having had my sites repeatedly DDoSed offline by ClaudeBot and others scraping the same damned thing over and over again, thousands of times a second, I welcome any measures to help.

[–] AWittyUsername@lemmy.world 30 points 1 day ago

I think the negativity is around the unfortunate fact that solutions like this shouldn't be necessary.

[–] AnthropomorphicCat@lemmy.world 65 points 1 day ago (1 children)

So the world is now wasting energy and resources to generate AI content in order to combat AI crawlers, by making them waste more energy and resources. Great! 👍

[–] brucethemoose@lemmy.world 13 points 1 day ago* (last edited 1 day ago)

The energy cost of inference is overstated. Small models, or "sparse" models like DeepSeek, are not expensive to run. Training is a one-time cost that still pales in comparison to, like, making aluminum.

Doubly so once inference goes more on-device.

Basically, only Altman and his tech bro acolytes want AI to be cost prohibitive so he can have a monopoly. Also, he’s full of shit, and everyone in the industry knows it.

AI as it’s implemented has plenty of enshittification, but the energy cost is kinda a red herring.

[–] umbraroze@lemmy.world 52 points 19 hours ago (3 children)

I have no idea why the makers of LLM crawlers think it's a good idea to ignore bot rules. The rules are there for a reason and the reasons are often more complex than "well, we just don't want you to do that". They're usually more like "why would you even do that?"

Ultimately you have to trust what the site owners say. The reason why, say, your favourite search engine returns the relevant Wikipedia pages and not bazillion random old page revisions from ages ago is that Wikipedia said "please crawl the most recent versions using canonical page names, and do not follow the links to the technical pages (including history)". Again: Why would anyone index those?
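The directives the comment describes live in a site's robots.txt. Something in that spirit might look like the following -- the paths here are illustrative examples, not Wikipedia's actual file:

```text
# Illustrative robots.txt in the spirit of the comment above
# (example paths only, not any real site's configuration)
User-agent: *
Disallow: /w/index.php      # dynamic/technical pages
Disallow: /wiki/Special:    # maintenance pages
Disallow: /*action=history  # old page revisions nobody needs indexed
Allow: /wiki/               # canonical article names
Crawl-delay: 1
```

Well-behaved crawlers fetch this file first and honour it; the whole complaint in this thread is that many LLM scrapers simply don't.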

[–] phoenixz@lemmy.ca 27 points 15 hours ago

Because you are coming from the perspective of a reasonable person

These people are billionaires who expect to get everything for free. Rules are for the plebs, just take it already

[–] quack@lemmy.zip 48 points 1 day ago* (last edited 1 day ago)

Generating content with AI to throw off crawlers. I dread to think of the resources we’re wasting on this utter insanity now, but hey who the fuck cares as long as the line keeps going up for these leeches.

[–] TorJansen@sh.itjust.works 38 points 1 day ago (2 children)

And soon, the already AI-flooded net will be filled with so much nonsense that it becomes impossible for anyone to get some real work done. Sigh.

[–] biofaust@lemmy.world 36 points 22 hours ago (1 children)

I guess this is what the first iteration of the Blackwall looks like.

[–] owl@infosec.pub 16 points 21 hours ago

Gotta say "AI Labyrinth" sounds almost as cool.

[–] gmtom@lemmy.world 34 points 1 day ago (4 children)

"I used the AI to destroy the AI"

[–] surph_ninja@lemmy.world 31 points 22 hours ago

I’m imagining a sci-fi spin on this where AI generators are used to keep AI crawlers in a loop, and they accidentally end up creating some unique AI culture or relationship in the process.

[–] Empricorn@feddit.nl 24 points 1 day ago (1 children)

So we're burning fossil fuels and destroying the planet so bots can try to deceive one another on the Internet in pursuit of our personal data. I feel like dystopian cyberpunk predictions didn't fully understand how fucking stupid we are...

[–] XeroxCool@lemmy.world 24 points 1 day ago (2 children)

Will this further fuck up the inaccurate nature of AI results? While I'm rooting against shitty AI usage, the general population is still trusting it and making results worse will, most likely, make people believe even more wrong stuff.

[–] ladel@feddit.uk 32 points 1 day ago* (last edited 1 day ago) (6 children)

The article says it's not poisoning the AI data, only providing valid facts. The scraper still gets content, just not the content it was aiming for.

E:

It is important to us that we don’t generate inaccurate content that contributes to the spread of misinformation on the Internet, so the content we generate is real and related to scientific facts, just not relevant or proprietary to the site being crawled.

and the data for the LLM is now salted with procedural garbage. it's great!

[–] melpomenesclevage@lemmy.dbzer0.com 14 points 1 day ago (6 children)

If you're dumb enough and care little enough about the truth, I'm not really going to try coming at you with rationality and sense. I'm down to do an accelerationism here. Fuck it. Burn it down.

Remember: these companies all run at a loss. If we can hold them off for a while, they'll stop getting so much investment.

[–] drmoose@lemmy.world 24 points 1 day ago* (last edited 1 day ago) (2 children)

Considering how many false positives Cloudflare serves I see nothing but misery coming from this.

[–] Dave@lemmy.nz 20 points 1 day ago (2 children)

In terms of Lemmy instances, if your instance is behind cloudflare and you turn on AI protection, federation breaks. So their tools are not very helpful for fighting the AI scraping.

[–] Deebster@infosec.pub 19 points 1 day ago* (last edited 1 day ago) (1 children)

So they rewrote Nepenthes (or Iocaine, Spigot, Django-llm-poison, Quixotic, Konterfai, Caddy-defender, plus inevitably some Rust versions)

Edit, but with ✨AI✨ and apparently only true facts
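The tarpits listed above all share the same core trick: every page is generated on demand and links only to more generated pages, so a crawler that ignores robots.txt wanders forever. A minimal hypothetical sketch of the idea (not the actual code of any of those projects):

```python
# Minimal link-maze tarpit sketch, in the spirit of Nepenthes/Iocaine.
# Pages are derived deterministically from the URL hash, so the maze is
# infinite but needs no storage.
import hashlib
from http.server import BaseHTTPRequestHandler, HTTPServer

def maze_page(path: str, n_links: int = 8) -> str:
    """Generate a page whose links all lead deeper into the maze."""
    seed = hashlib.sha256(path.encode()).hexdigest()
    links = [
        f'<a href="/maze/{seed[i * 4 : i * 4 + 8]}">page {i}</a>'
        for i in range(n_links)
    ]
    return "<html><body>" + "\n".join(links) + "</body></html>"

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = maze_page(self.path).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

# To actually serve the maze (blocks forever):
# HTTPServer(("127.0.0.1", 8080), TarpitHandler).serve_forever()
```

Cloudflare's twist, per the article, is that the maze pages contain LLM-generated but factually accurate filler text rather than nonsense, and that only clients already flagged as misbehaving bots get routed into it.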

[–] weremacaque@lemmy.world 18 points 1 day ago* (last edited 1 day ago) (1 children)

You have thirteen hours in which to solve the labyrinth before your baby AI becomes one of us, forever.
