this post was submitted on 23 Dec 2025
227 points (99.6% liked)


Google has removed dozens of new Sci-Hub domain names from its search results in the United States. Unlike typical DMCA takedowns, the removals were triggered by a dated court order that was not enforced for several years. This appears to be one of the first times Google has deindexed an entire pirate site in the U.S. based on a 'site blocking' style injunction.

[–] Sxan@piefed.zip 14 points 2 days ago (2 children)

Maybe it will drive more people to alternative search engines.

[–] scintilla@crust.piefed.social 6 points 2 days ago (4 children)

I'm genuinely contemplating paying for Kagi at this point. It's the only one I haven't heard people (fairly) talking shit about recently.

[–] saplyng@piefed.social 6 points 2 days ago

I've paid for Kagi for a few years, and I would definitely recommend it!

[–] Goodlucksil@lemmy.dbzer0.com 2 points 2 days ago (2 children)
[–] themachinestops@lemmy.dbzer0.com 4 points 2 days ago* (last edited 1 day ago) (1 children)
[–] chocrates@piefed.world 1 points 2 days ago

It's possible. Search engines are just big reference databases.
They have crawlers that traverse the web by following links between pages, then save metadata about each page.

There are already some projects you can use.

The problem is the data: if every one of us has to build it ourselves, it's going to be tedious and, more importantly, biased toward however you are scraping.
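
A minimal sketch of that crawl-and-index idea (not any particular project's code), assuming the third-party `requests` and `beautifulsoup4` packages; the seed URL, page cap, and stored metadata fields are illustrative:

```python
# Minimal sketch of the crawl-and-index idea: breadth-first crawl from a
# seed URL, following links between pages and saving metadata per page.
# Assumes the third-party `requests` and `beautifulsoup4` packages.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup


def crawl(seed: str, max_pages: int = 20) -> dict[str, dict]:
    index: dict[str, dict] = {}  # url -> saved metadata
    queue = deque([seed])
    while queue and len(index) < max_pages:
        url = queue.popleft()
        if url in index:
            continue
        try:
            resp = requests.get(url, timeout=5)
        except requests.RequestException:
            continue
        soup = BeautifulSoup(resp.text, "html.parser")
        desc = soup.find("meta", attrs={"name": "description"})
        index[url] = {
            "title": soup.title.get_text(strip=True) if soup.title else "",
            "description": desc.get("content", "") if desc else "",
        }
        # The "following links between pages" part: enqueue outgoing links.
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"])
            if urlparse(link).scheme in ("http", "https"):
                queue.append(link)
    return index


if __name__ == "__main__":
    # Seed URL is illustrative.
    for url, meta in crawl("https://example.com").items():
        print(url, "->", meta["title"])
```

The bias the commenter mentions falls directly out of this loop: whatever you pick as the seed and however you filter the links determines which slice of the web ends up in your index.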

[–] Sxan@piefed.zip 1 points 8 hours ago

Me too. I used it by default during the allowed trial period and found it to be pretty good.

[–] douglasg14b@lemmy.world 1 points 1 day ago

Would recommend. Been using it for a couple years now, and it actually feels gross when I end up on Google.

You will hear shit from a small group on Lemmy about how Kagi also pulls results from Yandex, but it's a pretty hollow argument that keeps being used as some big "gotcha". If that's a turn-off for you, though, it is what it is.

[–] nymnympseudonym@piefed.social 2 points 2 days ago (1 children)

All we need are a few .onion mirrors.

TBH, if there are no .onion mirrors, it's not a serious anti-censorship project.
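
For anyone curious what running such a mirror involves: a hypothetical sketch using the real `stem` Tor-controller library to publish a local mirror as an ephemeral onion service. It assumes a tor daemon is running with `ControlPort 9051` enabled; the port numbers are illustrative.

```python
# Hypothetical sketch: publish a local web mirror (127.0.0.1:8080) as a Tor
# onion service via the `stem` controller library. Assumes a running tor
# daemon with "ControlPort 9051" enabled; ports are illustrative.
from stem.control import Controller

with Controller.from_port(port=9051) as controller:
    controller.authenticate()  # cookie/password auth, depending on torrc
    service = controller.create_ephemeral_hidden_service(
        {80: "127.0.0.1:8080"},  # onion port 80 -> local mirror
        await_publication=True,  # block until the descriptor is published
    )
    print("mirror reachable at %s.onion" % service.service_id)
    input("press enter to take the mirror down...")
# the ephemeral service is removed when the control connection closes
```

An ephemeral service like this vanishes when the controller disconnects; a long-lived mirror would instead use the `HiddenServiceDir` / `HiddenServicePort` directives in torrc.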

[–] kylian0087@lemmy.dbzer0.com 1 points 1 day ago (1 children)
[–] nymnympseudonym@piefed.social 1 points 1 day ago (1 children)

Different designs, different strengths, different threat models

[–] dubyakay@lemmy.ca 1 points 1 day ago

Genuinely curious: can you elaborate on this, please?