this post was submitted on 25 Mar 2025
27 points (88.6% liked)

Selfhosted


For years I've looked on and off for web archiving software that can capture most sites, including "complex" ones with lots of AJAX that require logins, like Reddit. Which ones have worked best for you?

Ideally I want one that can be started programmatically or via the command line, opens a Chromium instance (or any browser), and captures everything shown on the page. I could also open the instance myself to log into sites and install addons like uBlock Origin. (By the way, archiveweb.page must be started manually.)
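One tool that fits this "start from the command line, capture logged-in sessions" workflow is Browsertrix Crawler from the Webrecorder project. The sketch below is an assumption-laden example based on its documented Docker usage; flag names and ports may differ between versions, so check the project README before relying on it.

```shell
# Hypothetical sketch of webrecorder/browsertrix-crawler usage.
# Flags and ports are from its docs and may vary by version.

# Step 1: interactively log into a site and save a reusable browser profile.
docker run -p 6080:6080 -p 9223:9223 -v "$PWD/crawls:/crawls" -it \
  webrecorder/browsertrix-crawler create-login-profile \
  --url "https://old.reddit.com/login" \
  --filename /crawls/profiles/reddit.tar.gz

# Step 2: crawl non-interactively with that profile (already logged in),
# producing a WACZ archive that replay tools can open.
docker run -v "$PWD/crawls:/crawls" -it \
  webrecorder/browsertrix-crawler crawl \
  --url "https://old.reddit.com/r/selfhosted" \
  --profile /crawls/profiles/reddit.tar.gz \
  --generateWACZ --collection reddit-archive
```

The profile step is what makes logins and addon-like tweaks survive into unattended crawls.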

top 13 comments
[–] klangcola@reddthat.com 14 points 1 week ago (2 children)

SingleFile is a browser addon that saves a complete web page into a single HTML file. It's a Web Extension (and a CLI tool) compatible with Chrome, Firefox (desktop and mobile), Microsoft Edge, Safari, Vivaldi, Brave, Waterfox, Yandex Browser, and Opera.

SingleFile can also be integrated with the browser extensions of bookmark managers like Hoarder and Linkding. That way your browser does the capture, which means you're already logged in, have dismissed the cookie banner, solved the CAPTCHAs, or dealt with whatever other annoyance is on the webpage.
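For the OP's command-line requirement, SingleFile's CLI companion (single-file-cli on npm) can drive a browser headlessly. A minimal sketch, assuming the npm package name and the `--browser-executable-path` option as documented; verify against `single-file --help`:

```shell
# Sketch using single-file-cli; exact options may vary by version.
npx single-file "https://example.com/some-article" article.html

# Point it at your own browser binary so captures use your Chromium build:
npx single-file --browser-executable-path=/usr/bin/chromium \
  "https://example.com/some-article" article.html
```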

ArchiveBox, and I believe also Linkwarden, use SingleFile (as a CLI tool on the server side) to capture web pages, alongside other tools and formats. This works well for simple, straightforward web pages, but not for annoying web pages with cookie banners, CAPTCHAs, and other popups.
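A rough sketch of the server-side flow with ArchiveBox; the `SAVE_SINGLEFILE` config key is an assumption from its docs, so verify with `archivebox config` on your version:

```shell
# Hypothetical ArchiveBox setup with the SingleFile extractor enabled.
archivebox init                                # create an archive in the current dir
archivebox config --set SAVE_SINGLEFILE=True   # assumed config key -- verify
archivebox add 'https://example.com/some-article'
```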

[–] Showroom7561@lemmy.ca 6 points 1 week ago

Singlefile is the only one that works reliably for me.

Linkwarden would have been awesome, but I had so many issues with it when I was self-hosting. I think it's improved since then, though.

[–] N0x0n@lemmy.ml 5 points 1 week ago (1 children)

For Reddit, SingleFile HTML pages can be 20 MB per file! That's huge for a simple discussion...

To shrink captures of that bloated but still relevant site, redirect to a still-working alternative frontend like https://github.com/redlib-org/redlib or old Reddit, and each file drops to less than 1 MB.

[–] klangcola@reddthat.com 1 points 6 days ago

SingleFile produces a faithful representation of the original webpage, so bloated webpages are indeed saved as bloated HTML files.

On the plus side you're getting an exact copy; on the downside, an exact copy may not be necessary and takes a huge amount of space.

[–] tuxec@infosec.pub 1 points 5 days ago
[–] Xanza@lemm.ee 0 points 1 week ago (1 children)
[–] TheTwelveYearOld@lemmy.world 1 points 1 week ago (1 children)

Doesn't work well for more complex sites.

[–] Xanza@lemm.ee 3 points 1 week ago (2 children)

wget is the most comprehensive site cloner there is. What exactly do you mean by complex? Because wget works for anything static and public... If you're trying to clone server-side source files, like PHP scripts, obviously that's not going to work. If that's what you mean by "complex", then just give up, because you can't.
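For the static-and-public case this comment describes, the usual GNU wget mirroring invocation looks something like this (all standard, documented options):

```shell
# Mirror a static site: recurse, grab page assets (CSS, images),
# rewrite links for local browsing, add .html extensions where needed,
# and don't wander up above the start directory.
wget --mirror \
     --page-requisites \
     --convert-links \
     --adjust-extension \
     --no-parent \
     https://example.com/docs/
```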

[–] TheTwelveYearOld@lemmy.world 7 points 1 week ago

For instance, I can't completely download YouTube pages with their videos using wget, but I can with pywb (though pywb has issues with sites like Reddit).

Not that I would necessarily use it for youtube pages, but that's an example of a complex page with lots of AJAX.

[–] Paragone@piefed.social 0 points 1 week ago (1 children)

There's a "philosopher" the far-right techbro-oligarchs rely on, whose blog is grey-something-or-other..

I tried using wget, and there's a bug or something on the site: it keeps inserting links to other sites into URIs, so you get garbage like

grey-something-or-other.substack.com/e/b/a/http://en.wikipedia.org/wiki/etc..

The site apparently works for the people who browse it, but wget isn't succeeding in just cloning the thing.

I want the items the usable site is made of, not endless failed requests following recursive errors, forever..

Apparently one has to be ultra-competent to configure all the excludes and other command-line switches to get any particular site handled by wget.

Sure, on static sites it's magic, but on too many sites with dynamically constructed portions, it's a damn headache at times..

_ /\ _
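The excludes described above can usually be expressed with a handful of standard wget switches; the URL patterns below are made-up examples, not the actual site's structure:

```shell
# Keep the crawl on one host, skip junk URLs, and cap recursion depth.
wget -r -l 3 \
     --domains=example.com \
     --no-parent \
     --reject-regex='.*(/login|/signup|\?share=).*' \
     https://example.com/blog/
```

`--domains` stops wget from wandering off to Wikipedia and friends, which is the failure mode the comment describes.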

[–] Xanza@lemm.ee 1 points 1 week ago (1 children)

That's not a bug. You literally told wget to follow links, so it did.

[–] Paragone@piefed.social 1 points 6 days ago (1 children)

There ought to be a "do not follow recursive links" switch for it, Hoomin..

_ /\ _

[–] Xanza@lemm.ee 1 points 5 days ago

There is. wget doesn't follow links recursively by default. If it is, you're using an option that's telling it to...
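To illustrate the point above with plain GNU wget behavior:

```shell
# Default: fetch exactly one URL, follow nothing.
wget https://example.com/page.html

# Recursion is opt-in: -r tells wget to follow links, here limited
# to one level below the start page and never above it.
wget -r -l 1 --no-parent https://example.com/page.html
```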