
Your ML model cache volume is getting wiped during restart, so the model has to be re-downloaded during the first search after the restart. Either point the cache at a path somewhere on your storage, or make sure you're not deleting the named volume when you restart.

In my case I changed this:

  immich-machine-learning:
    ...
    volumes:
      - model-cache:/cache

To this:

  immich-machine-learning:
    ...
    volumes:
      - ./cache:/cache
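
For context, ./cache is resolved relative to the compose file, so the models land in a cache/ folder next to it. If you'd rather keep the named volume instead, the stock Immich compose file (going from memory here, so double-check yours) just declares it at the top level, and a named volume survives restarts as long as nothing runs docker compose down -v or docker volume rm:

  immich-machine-learning:
    ...
    volumes:
      - model-cache:/cache

volumes:
  model-cache:

In other words, if the models still vanish with that layout, something in your restart routine is deleting the volume.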

I no longer have to wait uncomfortably long when I'm trying to show off Smart Search to a friend, or just need a meme pronto.
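
One more thing while I'm at it: the persistent cache only skips the re-download; the model still has to load into memory on the first search after a restart. The machine-learning container can apparently preload a CLIP model at startup via an environment variable. Rough sketch only, and the exact variable name is something to verify against the docs for your Immich version (newer releases reportedly split it into textual and visual variants):

  immich-machine-learning:
    ...
    volumes:
      - ./cache:/cache
    environment:
      # Assumption: preload variable name as I remember it from the Immich ML docs;
      # check the exact spelling for your version before relying on it.
      MACHINE_LEARNING_PRELOAD__CLIP: ViT-SO400M-16-SigLIP2-384__webli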

That'll be all.

[–] Showroom7561@lemmy.ca 3 points 21 hours ago (2 children)

> That’s a Celeron right?

Yup, the Intel Celeron J4125, a 4-core CPU at 2.0-2.7 GHz.

I switched to the ViT-SO400M-16-SigLIP2-384__webli model, same as what you use. I'm not worried about processing time, and it looks like a more capable model. I really only use Immich for contextual search anyway, so it might be a nice upgrade.

[–] avidamoeba@lemmy.ca 1 points 20 hours ago (1 children)

Did you run the Smart Search job?

[–] Showroom7561@lemmy.ca 2 points 16 hours ago (1 children)
[–] avidamoeba@lemmy.ca 1 points 13 hours ago (1 children)

Let me know how inference goes. I might recommend that to a friend with a similar CPU.

[–] Showroom7561@lemmy.ca 2 points 4 hours ago (1 children)

I decided on the ViT-B-16-SigLIP2__webli model and switched to it last night. I also needed to update my server to the latest version of Immich, so a new Smart Search job was run late last night.

Out of 140,000+ photos/videos, it's down to 104,000 remaining, and I have it set to 6 concurrent tasks.

I don't mind it processing for 24 hours. I believe when I first set Immich up, the Smart Search job took many days. I'm still able to use the app and website to navigate and search without any delays.

[–] avidamoeba@lemmy.ca 1 points 3 hours ago (1 children)

Let me know how the search performs once it's done. Speed of search, subjective quality, etc.

[–] Showroom7561@lemmy.ca 1 points 2 hours ago

Search speed was never an issue before, and neither was quality. My biggest gripe is not being able to sort search results by date! If I had that, it would be perfect.

But I'll update you once it's done (97,000 to go...).

[–] iturnedintoanewt@lemmy.world 1 points 17 hours ago* (last edited 17 hours ago) (1 children)

What was your reasoning for choosing this one? I would have thought ViT-B-16-SigLIP2__webli would be slightly more accurate, with faster responses and all that, while using slightly less RAM (1.4 GB less, I think).

[–] Showroom7561@lemmy.ca 2 points 16 hours ago

It seemed to be the most popular, LOL. The Smart Search job hasn't been running for long, so I'll check that other one out and see how it compares. If it looks better, I can easily switch to it.