Shadow

joined 2 years ago
[–] Shadow@lemmy.ca 5 points 1 month ago* (last edited 1 month ago) (1 children)

You've made a virtual disk on the ZFS pool. The VM will never see ZFS; that's managed entirely by the host.

Yes, you'll want to make a normal partition inside that virtual disk.

With VMs you can't just access the host's ZFS; it's always abstracted. If you use LXC containers on Proxmox, then you can bind-mount the ZFS dataset into the container (Google it for the steps; it's not in the GUI).
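
If you go the LXC route, the bind mount itself is one line. A minimal sketch, assuming a hypothetical container ID of 101 and a dataset mounted at /tank/media on the host:

```sh
# Bind the host's ZFS dataset into LXC container 101 at /mnt/media
# (container ID and paths are examples; substitute your own)
pct set 101 -mp0 /tank/media,mp=/mnt/media
```

Equivalently, you can add `mp0: /tank/media,mp=/mnt/media` to /etc/pve/lxc/101.conf by hand.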

[–] Shadow@lemmy.ca 37 points 1 month ago (1 children)

To protect the daughter's election chances in the future.

[–] Shadow@lemmy.ca 1 points 1 month ago (1 children)

Tried; I believe it got cancelled because it's illegal.

[–] Shadow@lemmy.ca 35 points 1 month ago (4 children)

He's not paying the military either; not sure how well that'll work out for him.

[–] Shadow@lemmy.ca 6 points 1 month ago* (last edited 1 month ago)

Most people have no idea who MD (McDonnell Douglas) are at this point; that merger was 30 years ago. Add 30 years of Boeing maintenance plans and parts on top of that, and I think it's reasonable to call them Boeing now.

[–] Shadow@lemmy.ca 13 points 1 month ago (1 children)

I really like the shuttlecraft, I wish they'd sell that on its own.

[–] Shadow@lemmy.ca 3 points 1 month ago

I'm sorry, but that looks like ass and it's $400 USD. If it looked better and was half the price, I'd be all over it.

[–] Shadow@lemmy.ca 10 points 1 month ago

It's worth going over to the Reddit aviation thread for more pics and video.

https://old.reddit.com/r/aviation/comments/1oombcc/ups2976_crash_megathread/

They left their entire engine on the runway and probably had other failures too; there was no chance to recover. Hopefully the body count doesn't rise much from people on the ground.

RIP.

[–] Shadow@lemmy.ca 25 points 1 month ago

She's been disrespected enough already by the rich and powerful; how about we don't do it too?

[–] Shadow@lemmy.ca 33 points 1 month ago (4 children)

I've been on Samsung for years and I don't get this argument anymore. There are no ads on my phone, and One UI is pretty smooth.

I do use my own launcher, so maybe that covers it up, but Samsung today isn't what it was a long time ago.

[–] Shadow@lemmy.ca 3 points 1 month ago (2 children)

Are you running it in Docker? If so, did you set up the bind mount properly? Exec into sh in the container and manually test access to the folder.

If it's not Docker, su to the immich user and test the same thing.
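
Something like this, assuming the default immich_server container name and a hypothetical library path of /mnt/photos (substitute your own):

```sh
# Inside the container: can the service actually read the folder?
docker exec -it immich_server sh -c 'ls -la /mnt/photos'

# Bare-metal install: test as the user the Immich service runs as
sudo -u immich ls -la /mnt/photos
```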

[–] Shadow@lemmy.ca 11 points 1 month ago* (last edited 1 month ago) (4 children)

Like you have it running and added but it's not scanning?

I can check my config later today if nobody else replies sooner.

https://docs.immich.app/features/libraries/

External libraries use import paths to determine which files to scan. Each library can have multiple import paths so that files from different locations can be added to the same library. Import paths are scanned recursively, and if a file is in multiple import paths, it will only be added once.

Have you double-checked your folder permissions?
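
Beyond permissions, the import path also has to be visible inside the container. Two quick checks, again assuming the hypothetical immich_server name and /mnt/photos path:

```sh
# Is the import path actually bind-mounted into the server container?
docker inspect immich_server --format '{{ json .Mounts }}'

# Are the files readable on the host side? (UID/GID must match what
# the container process runs as)
ls -la /mnt/photos
```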

1
submitted 10 months ago* (last edited 10 months ago) by Shadow@lemmy.ca to c/main@lemmy.ca
 

I'm curious if anyone here actually finds value in the Reddit posts brought over by lemmit.online, since I'd like to defederate from it otherwise.

It feels actively harmful to Lemmy, since so many of the posts it brings over are questions that the original poster will never see. It encourages a conversation that will never happen, so if someone does reply, they're going to end up feeling disengaged.

The bot rarely gets any upvotes or engagement, and I suspect a majority of people (like myself) have just blocked it. TBH I forgot it existed until Tesseract showed me its posts again.

1
submitted 10 months ago* (last edited 10 months ago) by Shadow@lemmy.ca to c/main@lemmy.ca
 

Hi everyone!

Tesseract is now available as an alternative front end at https://tess.lemmy.ca/

1
submitted 10 months ago* (last edited 10 months ago) by Shadow@lemmy.ca to c/main@lemmy.ca
 

Hello everyone!

I'll be taking the site down for two maintenance windows this week to complete our server migration.

  • Weds Jan 29th - 09:00 - 11:00 PT (12:00 - 14:00 ET)
  • Thurs Jan 30th - 09:00 - 11:00 PT (12:00 - 14:00 ET)

During the first window I'll be migrating us from OVH to our new dedicated hardware. After this migration there will likely be some temporarily broken images, as it takes approximately 8 hours to resync our object storage from OVH.
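
For scale, the resync is essentially a full object-store mirror. A rough sketch of the kind of command involved, with hypothetical rclone remote and bucket names:

```sh
# Mirror media from the old OVH object store to the new one
# (remote and bucket names are examples only)
rclone sync ovh-s3:lemmy-media new-s3:lemmy-media --progress
```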

This is a major change and, despite my testing, it may have some unintended side effects. If you run into any problems that aren't just a broken image, please let us know.

The second maintenance window is to migrate our pict-rs database from its local sled DB into our primary Postgres DB. This is a much smaller change, but since pict-rs checks every image as it migrates them, it takes about 1.5 hours.
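
For the curious: newer pict-rs releases (the 0.5 series) can keep their metadata repo in Postgres instead of sled. A very rough sketch of the relevant settings, using pict-rs's PICTRS__ environment-variable convention; verify the exact names and the migration procedure against the pict-rs docs before relying on them:

```sh
# Point pict-rs's metadata repo at Postgres instead of the local sled DB
# (variable names follow pict-rs's env-var convention; treat as assumptions)
PICTRS__REPO__TYPE=postgres
PICTRS__REPO__URL=postgres://pictrs:CHANGEME@db-host/pictrs
```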

As usual, you can check https://status.lemmy.ca/ for updates.

 

Hello everyone, we're long overdue for an update on how things have been going!

Finances

Since we started accepting donations back in July we've received a total of $1350, as well as $1707 in older donations from smorks. We haven't had any expenses other than OVH since then (approx $155/mo, about $905 over those six months), leaving us $2152 in the bank.

We still owe TruckBC $1980 for the period he was covering hosting, and I've contributed $525 as well (mostly non-profit registration costs, plus domain renewals). We haven't yet discussed reimbursing either of us; we're both happy to build up a contingency fund for a while.

New Server

A few weeks ago, we experienced a ~26-hour outage due to a failed power supply and extremely slow response times from OVH support. This was followed by an unexplained outage the next morning at the same time. To ensure Lemmy’s growth remains sustainable for the long term and to support other federated applications, I’ve donated a new physical server. This will give us a significant boost in resources while keeping the monthly cost increase minimal.

Our system specs today:

  • Undoubtedly the cheapest hardware OVH could buy
  • Intel Xeon E-2386G (6 cores @ 3.5 GHz)
  • 32 GB of RAM
  • 2x 512 GB Samsung NVMe in RAID 1
  • 1 Gbit network
  • $155/month

The new system:

  • Dell R7525
  • AMD EPYC 7763 (64 cores @ 2.45 GHz)
  • 1 TB of RAM
  • 3x 120 GB SATA SSD (HW RAID 1 with a hot spare, for Proxmox)
  • 4x 6.4 TB NVMe (ZFS mirrored + striped, for data; see the sketch below)
  • 1 Gbit network with a 50 Mbit commit (see 95th percentile billing)
  • Redundant power supplies
  • Next-day hardware support until Aug 2027
  • $166/month + tax
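
The mirrored + striped layout is ZFS's equivalent of RAID 10: two mirrored pairs striped together, giving roughly 12.8 TB usable from the four 6.4 TB drives. A sketch with hypothetical device names:

```sh
# Two mirrored pairs, striped into one pool (~12.8 TB usable)
# Device names are examples; use your actual NVMe devices
zpool create data \
  mirror /dev/nvme0n1 /dev/nvme1n1 \
  mirror /dev/nvme2n1 /dev/nvme3n1
```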

This means that instead of renting an entire server and having OVH be responsible for the hardware, we'll be renting co-location space at a Vancouver datacenter via a 3rd-party service provider I know.

These servers are extremely reliable, but if there is a failure, either Otter or I will be able to get access reasonably quickly. We also have full out-of-band (OOB) access via iDRAC, so it's pretty unlikely we'll ever need to go on site.

Server Migration

Phase 1 is currently planned for Jan 29th or 30th and will completely move us out of OVH and onto our own hardware. I'm expecting a 2-3 hour outage, followed by a 6-8 hour window where some images may be missing while the object store resyncs. I'll make a follow-up post in a week with specifics.

I'm not 100% decided on phases 2+ yet and haven't planned a timeline around them. They would get us into a fully redundant setup (excluding hardware) that's easier to scale and manage down the road, but they do add a little complexity.

Let me know if you have any questions or comments, or feedback on the architecture!

1
submitted 11 months ago* (last edited 11 months ago) by Shadow@lemmy.ca to c/main@lemmy.ca
 

Morning all!

I'm going to be taking the site down for about 5 minutes, so that I can get a consistent copy of our databases (postgres + pict-rs sled).

Will do it at about 10am PST.
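
For anyone curious what that consistent copy looks like, a rough sketch assuming a standard docker compose deployment with hypothetical volume paths:

```sh
# Stop the stack so nothing writes mid-copy
docker compose down

# Archive the raw data directories while everything is quiesced
tar czf postgres-backup.tar.gz ./volumes/postgres
tar czf pictrs-sled-backup.tar.gz ./volumes/pictrs

# Bring the site back up
docker compose up -d
```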

 

Hey everyone, and happy new year!

Sorry about that super long downtime. Yesterday (Sunday) morning at 10:03 AM PST our server suffered a physical hardware failure, apparently a failed power supply. Unfortunately, despite us opening a ticket with our hosting vendor (OVH) a few minutes later, and despite their claims of 24/7 support, nobody looked at the ticket until this morning, when their phone support lines opened and I called them.

They've now replaced the defective power supply and we're back online after ~26 hours of downtime. Some pretty disappointing response times, to put it nicely.

We're planning to move away from OVH at the end of this month, onto proper enterprise-grade hardware that we own and control. This will give us a HUGE boost in server resources and allow us to scale for the foreseeable future, while also giving us the control to resolve problems like this much more quickly. Expect a follow-up post about this in the next couple of weeks once I've put together the migration plan.

Timeline:

  • Jan 5th 10:03am PST - We get alerts that the server is non-responsive.
  • Jan 5th 10:05am PST - I pull up the console via IPMI and it's completely non-responsive. Attempts to power the server off and on, or do anything else, fail.
  • Jan 5th 10:15am PST - Initial support ticket created with OVH. I followed up a couple of times over the next few hours and got no response.
  • Jan 6th 6:32am PST - Called OVH, gave them the case number, and asked them to investigate.
  • Jan 6th 7:34am PST - I get notified they'll start their "intervention" in 15 minutes.
  • Jan 6th 11:04am PST - Called them again; the tech is still working on it and they'll get back to me with an update.
  • Jan 6th 11:34am PST - "I was informed by our data centre technician that there is an issue with the power supply unit for the rack on which your server resides. Your server will come back online once they have replaced the power supply."
  • Jan 6th 12:17pm PST - We're back up, finally!

Edit on Jan 7th @ 8:40am PST: We just had another outage of about an hour. Investigating with OVH.

 

One of the drives in our server has failed. =( Even though it should be a 10-minute job, OVH needs a 2-hour window to replace it.

I've requested they schedule it for Tuesday from 8am - 10am PST. Hopefully it'll be reasonably quick, but expect Cloudflare tunnel errors while they perform the work.

 

Hey All!

I'm going to upgrade us to 0.19.7 tomorrow (Sunday Nov 24th) around 10am PST. I don't expect significant downtime, but plan for at least a few minutes.

Update: All done!

 