RagingHungryPanda

joined 2 years ago
[–] RagingHungryPanda@lemm.ee 2 points 1 day ago

I posted an edit to the post on how I solved it. Your feedback was helpful in the solution!

[–] RagingHungryPanda@lemm.ee 2 points 1 day ago

I think so. One issue I ran into is that trying to go anywhere would land on the UI page. If I put NPM on ports 80/443, then the UI needs to be elsewhere so I can access it. It shouldn't be too hard, I hope.

[–] RagingHungryPanda@lemm.ee 1 points 1 day ago (3 children)

Hmm, I may have to get my admin UIs off of ports 80/443 and port forward with NPM on those ports instead. The reason I was using Nginx on the router was so the server could keep the UI on the normal ports and Nginx elsewhere.

I think then I could remove the router Nginx entries and add the DNS rewrite

[–] RagingHungryPanda@lemm.ee 1 points 1 day ago* (last edited 1 day ago) (1 children)

There's an idea. I may play with it tomorrow. I'll be back on the road, so I'm not keen on doing too much when I can't fix it, haha. But right now the router is DNS, so when I did traceroute there was only one hop. But it was the SSL termination that got me. I might be able to download the stream module for Nginx. I think if I wanted to be optimal I may have to totally restructure that part....

Edit: thank you for the feedback, you've given me something to think about

[–] RagingHungryPanda@lemm.ee 1 points 2 days ago

You got me onto something. It looks like Nginx can be configured for TLS passthrough: https://serversforhackers.com/c/tcp-load-balancing-with-nginx-ssl-pass-thru
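For reference, a minimal sketch of what that passthrough looks like with the stream module (the upstream address is an assumption; substitute the NPM host and port):

```nginx
# Hedged sketch of TLS passthrough via the nginx stream module.
# Requires nginx built with stream support; 192.168.8.111 is assumed
# to be the NPM host on the LAN.
stream {
    server {
        listen 443;
        # Forward the raw TLS bytes untouched -- no ssl_certificate lines
        # here, so NPM terminates SSL with its own certificates.
        proxy_pass 192.168.8.111:443;
    }
}
```

Because the router never decrypts anything, it doesn't need copies of the certs at all.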

[–] RagingHungryPanda@lemm.ee 1 points 2 days ago (3 children)

I think you might be right on that. I originally didn't want to do any port forwarding on the router, but I may have to.

[–] RagingHungryPanda@lemm.ee 1 points 2 days ago (2 children)

So even if I remove the SSL cert lines from the router Nginx config, it seems to be applying the router's SSL certificate. I commented out some lines to use a certificate for my root domain, but I'd rather NPM handle that, which doesn't seem to be happening.

 

I have a GL.iNet router on which I have an Nginx config to send traffic to Nginx Proxy Manager, plus DDNS with Cloudflare.

I'm trying to get some kind of local DNS set up so that if I'm on the local network, traffic stays within the network. The problem I'm running into is SSL certificates. NPM (on the server) handles those, and I thought I could go into the AdGuard Home (on the GL.iNet router) config, add a DNS rewrite to point to the router, and traffic would flow as it normally does.

This DOES work, technically. traceroute shows only one hop for any of my subdomains, e.g. files.mydomain.com.

But I can't actually get access in a browser because the SSL certificates aren't set up.

It seems like the options are: manually copy certificates from the server to the router (not ideal), or don't do it at all. I notice that if I go to the service by IP address, it changes the address to the domain name, e.g. 192.168.8.111:30027 -> files.mydomain.com.

This isn't a HUGE deal, but it's not preferable. How have you all solved this?

Edit: I solved the issue in probably the most hilarious way. I was trying to get the forwarding and everything set up and then borked my router's firewall so badly I couldn't get to the outside at all, so I did a hard reset.

I then moved my admin UI ports up by one each (81/444), re-set up GoodCloud, DDNS, and the WireGuard server on the router, then set up port forwarding for 80/443 on the router to 80/443 on the TrueNAS server. I switched NPM to listen on those ports (since I moved the web UI to different ports), then added AdGuard Home DNS rewrites. It's now all working as expected.
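For anyone following along, the rewrite part can also be expressed in AdGuard Home's config file rather than the UI. This is a sketch under assumptions: the exact YAML location has moved between AdGuard Home versions, and the domain and answer shown are placeholders for my setup:

```yaml
# Assumed fragment of AdGuardHome.yaml -- rewrite all subdomains to the
# LAN address of the server running NPM, so local clients never leave the LAN.
filtering:
  rewrites:
    - domain: "*.mydomain.com"
      answer: 192.168.8.111
```

The equivalent UI path is Filters -> DNS rewrites.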

Local traffic only has one hop and is accessible without SSL warnings, and same for WAN traffic. Thank you all for the help!

[–] RagingHungryPanda@lemm.ee 2 points 3 days ago

I thought you could put in the full URL and find it that way as well, but if you mean you want to find it by title, that should work normally, unless I misunderstood.

[–] RagingHungryPanda@lemm.ee 2 points 4 days ago (1 children)

There is also a thing where you'll want undetermined to be selected as well

[–] RagingHungryPanda@lemm.ee 3 points 1 week ago

I just got Cactus Comments working on my WriteFreely blog. Cactus Comments needs Matrix, so I got Matrix with Element set up. It was an incredibly frustrating journey of learning, but it turns out the final bit wasn't too complicated. I'm running on TrueNAS, which I hear you shouldn't do, but it's too late haha.

I also got the WhatsApp bridge set up, but it doesn't look like Matrix supports disappearing messages, so I left some rooms it created, because one of the larger groups easily took up a gig of storage before disappearing messages was turned on.

Anyway, WriteFreely has federated Cactus Comments that you can currently only sign in to with a Matrix account, lol. I can try to change that later. I host the photos with Picsur.

I think I need to collect my notes and memories and put them into a blog post, but I don't really want to turn my blog (if I even keep up with it) into a blog about how to self-host a blog.

In hindsight, I don't think the comments were worth it, but whatever.

[–] RagingHungryPanda@lemm.ee 8 points 1 week ago

Good things can't happen anymore. You might also say that bad things can't happen either, but if it's over then there's no opportunity at all. Life can change as long as it's there.

[–] RagingHungryPanda@lemm.ee 1 points 1 week ago

I learned to be sensitive and expressive anyway. It turns out men and women like it. I tell my friends what they mean to me. I tell people what I'm thinking or doing. The women I'm with think it's hot. One straight up told me, "You're in your masculine but in touch with your feminine. That's hot."

I'm open. I look people in the eye and hold eye contact. I exist genuinely. But it took a long time to get there.

 

I previously posted about an issue where the nginx container for the Collabora application logs a GET to /robots.txt every 10 seconds. I tried modifying the files in the container, but they were reset on restart. I also tried running the container with --log-driver=none, but was unsuccessful. Despite being a software dev, I'm new to the homelab world and TrueNAS.

I solved it by modifying the container and then committing those changes to the image. The change I made was to set access_log off; in the nginx config. I did it at the server root because I don't really care about those logs for this app, but it could be done at the location level.
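Roughly, the change looks like this (an assumed shape of the container's /etc/nginx/nginx.conf; the real file has more in it, and only the access_log line is the actual change):

```nginx
http {
    server {
        listen 80;
        access_log off;   # silences the repeated "GET /robots.txt" log lines

        # More targeted alternative: keep normal access logs and mute
        # only this one path instead of the whole server block.
        # location = /robots.txt { access_log off; }
    }
}
```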

Here's the reference SO post that I used: https://stackoverflow.com/a/74515438

What I did was shell into the container:

  • sudo docker exec -it ix-collabora-nginx-1 bash
  • apt update && apt install vim
  • vi /etc/nginx/nginx.conf and add the access_log off;
    • if you're not familiar with vim, arrow-key to the line you want, then press 'a' to enter insert mode (appending after the cursor). Make your change, then Esc, :wq!. You need the ! because the file is read-only
  • apt remove vim
  • exit
  • sudo docker commit <container id>
  • sudo docker restart ix-collabora-nginx-1
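The edit in the steps above can also be scripted without installing vim, using sed inside the container. This is a sketch under assumptions: the container name and conf path come from the steps above, the image's sed is GNU sed (true for the usual Debian-based nginx images), and the directive is injected right after the http { line:

```shell
# One-shot version of the manual vim edit: add "access_log off;" just
# inside the http { block, then reload nginx -- no vim install needed.
sudo docker exec ix-collabora-nginx-1 \
  sed -i 's|^http {|http {\n    access_log off;|' /etc/nginx/nginx.conf
sudo docker exec ix-collabora-nginx-1 nginx -s reload
```

Note this edits the running container only; a docker commit is still needed for the change to survive a restart, as in the steps above.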
 

I'm running TrueNAS SCALE with Docker images for Nextcloud and Collabora. Under Collabora, the nginx application is logging a GET to robots.txt about every second, and I'm having a hard time filtering this out because it looks like the conf files for nginx get replaced on every restart. I also tried mounting my own version of the nginx.conf file, but that didn't reflect any changes.
