WbrJr

joined 2 years ago
WbrJr@lemmy.ml · 4 months ago

Sorry, I thought I was clear.

I used the Proxmox VE Helper-Scripts from here: https://tteck.github.io/Proxmox/ to install HA OS.

The local domain does not get resolved the way it should. I hoped someone here might have hosted HA in Proxmox themselves, run into a similar problem, or could give me a hint on what to check.

WbrJr@lemmy.ml · 4 months ago

Thanks for the advice, I was thinking about it anyway :) I used the Proxmox VE Helper-Scripts (https://tteck.github.io/Proxmox/) to install it.

 

Hi there, I just installed Proxmox on my home server and like the idea a lot, but there is a noticeable learning curve. I used this wonderful website and the provided script for Home Assistant OS.

Usually Home Assistant is available at homeassistant.local without any configuration; I think that's called mDNS? But on my setup, homeassistant.local does not work on any device, while the IP does.

So I suspect some setting in the Proxmox firewall stops the HA VM's mDNS service from announcing the name on my network (my router is a Fritzbox). I could not find any useful information about this, and AI gave me the usual not-quite-helpful advice.
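For reference, a generic way to test mDNS resolution directly from a Linux client (assuming avahi-utils is installed; this is not Proxmox-specific):

avahi-resolve -n homeassistant.local

If mDNS announcements get through, that should print the VM's IP. If it fails from every device, the multicast traffic is probably being filtered somewhere between the VM and the LAN.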

I hope you have some tips on what I can check. Thanks a lot!

PS: I want to host Caddy as a reverse proxy on the server some day. Does it make more sense to host a DNS server as well and have Caddy forward to the IP?
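For illustration, this is roughly the Caddyfile entry I have in mind (hostname and IP are made up; 8123 is Home Assistant's default web port):

ha.example.lan {
	tls internal
	reverse_proxy 192.168.1.50:8123
}

That would still need a local DNS entry pointing ha.example.lan at the Caddy host, which is what the question above is about.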

 

Update: I was overwhelmed by settings. After some more research and thinking, I got it working. My DNS was set up incorrectly: I referenced the container by the wrong name (the name that resolves on the Docker network is not the container_name, but the name of the service in the docker compose file). I then had some other issues with port collisions, but resolved them by killing (docker stop) ThingsBoard and restarting all services.

So: problem solved! Thanks for the answers though!
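For anyone landing here later, the working Caddyfile entry looks roughly like this (going by the compose file below: mytb is the service name, and 9090 is the port ThingsBoard listens on inside the container):

thingsboard.domain.com {
	tls internal
	reverse_proxy mytb:9090
}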

Hi! I have a server with a static IP that runs Docker with Caddy and ThingsBoard (an IoT dashboard). I have my domain pointing to the server's IP (both IPv4 and IPv6). (I tried it with "www" and with the wildcard "*" in the A and AAAA records.)
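Roughly, the records look like this (placeholder addresses from the documentation ranges):

domain.com.     A      203.0.113.10
*.domain.com.   A      203.0.113.10
domain.com.     AAAA   2001:db8::10
*.domain.com.   AAAA   2001:db8::10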

ThingsBoard can be reached in the browser via ip:8080 or domain.com:8080 (or, with the wildcard "*" set in the DNS records, via (anything).domain.com:8080). It is set up this way by the creators; I got the compose file (without Caddy) from their guide. So I guess no routing is done via Caddy.

The Caddyfile looks like this:

thingsboard.domain.com {
	tls internal
	reverse_proxy thingsboard:8080
}

ThingsBoard can't be reached via thingsboard.domain.com, which I would expect to work with this config. Below is the compose file. The containers are all part of the same Docker network (they get listed when I inspect the network).
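The inspect command, for reference (caddy_network is the network name from the compose file):

docker network inspect caddy_network

Its "Containers" section lists caddy, kafka and thingsboard.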

Some specific questions:

  • How do I have to set up my DNS records so that all requests to any subdomain get sent to Caddy, and I can do all the routing (from subdomain to service) in Caddy? What am I missing in the Caddyfile?
  • Can I deactivate the published port of the ThingsBoard container, so it can't be reached via that port from "outside", only from inside the Docker network by Caddy? (A sketch of what I imagine is below.)
  • Why am I struggling so much with this basic Docker and networking stuff? "Docker is easy, you should try it" :D

Thanks a lot for reading, I hope someone can help! I don't know what to search for to get this working; networking stuff is still a blur.
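For the second question, my current understanding (untested) is that only a ports: mapping publishes a port on the host; containers on the same Docker network can reach each other on any port regardless. So dropping the mapping should be enough, something like:

  mytb:
    # no "ports:" entry for the web UI -> not reachable from outside,
    # but Caddy can still reach mytb:9090 over caddy_network
    expose:
      - "9090"   # optional, purely informational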

Here is the docker compose file:

services:
  caddy:
    image: caddy:latest
    container_name: caddy
    restart: unless-stopped
    cap_add:
      - NET_ADMIN
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      - /srv/caddy/Caddyfile:/etc/caddy/Caddyfile
      - /srv/caddy/site:/srv
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - caddy_network


  kafka:
    restart: unless-stopped
    image: bitnami/kafka:3.8.1
    container_name: kafka
    ports:
      - 9092:9092 #to localhost:9092 from host machine
      - 9093 #for Kraft
      - 9094 #to kafka:9094 from within Docker network
    environment:
      ALLOW_PLAINTEXT_LISTENER: "yes"
      KAFKA_CFG_LISTENERS: "OUTSIDE://:9092,CONTROLLER://:9093,INSIDE://:9094"
      KAFKA_CFG_ADVERTISED_LISTENERS: "OUTSIDE://localhost:9092,INSIDE://kafka:9094"
      KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: "INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT,CONTROLLER:PLAINTEXT"
      KAFKA_CFG_INTER_BROKER_LISTENER_NAME: "INSIDE"
      KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE: "false"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: "1"
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: "1"
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: "1"
      KAFKA_CFG_PROCESS_ROLES: "controller,broker" #KRaft
      KAFKA_CFG_NODE_ID: "0" #KRaft
      KAFKA_CFG_CONTROLLER_LISTENER_NAMES: "CONTROLLER" #KRaft
      KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: "0@kafka:9093" #KRaft
    networks:
      - caddy_network
    volumes:
      - /srv/thingsboard/kafka-data:/bitnami
  mytb:
    restart: unless-stopped
    container_name: thingsboard
    image: "thingsboard/tb-postgres"
    depends_on:
      - kafka
    ports:
      - "8080:9090" # web UI: host 8080 -> container 9090
      - "1883:1883" # MQTT
      - "7070:7070" # edge RPC
      - "5683-5688:5683-5688/udp" # CoAP / LwM2M
    environment:
      TB_QUEUE_TYPE: kafka
      TB_KAFKA_SERVERS: kafka:9094
    networks:
      - caddy_network
    volumes:
      - /srv/thingsboard/.mytb-data:/data
      - /srv/thingsboard/.mytb-logs:/var/log/thingsboard



# general networks
networks:
  caddy_network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/24


# general volumes
volumes:
  caddy_data:
  caddy_config:
  kafka-data:
    driver: local
 

Hi! I am trying to set up a WireGuard client in Docker. I use the linuxserver image. I have it running in server mode on a different machine (exactly the same Ubuntu version), and I can log in with my laptop to the WireGuard server, but the Docker WG client has problems. I hope someone has an idea :)

The client Docker container has trouble starting and throws this error:

modprobe: FATAL: Module ip6_tables not found in directory /lib/modules/6.8.0-47-generic
ip6tables-restore v1.8.10 (legacy): ip6tables-restore: unable to initialize table 'raw'
Error occurred at line: 1
Try 'ip6tables-restore -h' or 'ip6tables-restore --help' for more information.

I copied the client config to the machine running the WG server; it shows the same problem there. I can ping google.com from inside the server container, but not from inside the client container. Here is the output of the 'route' command from the client:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
172.18.0.0      *               255.255.0.0     U     0      0        0 eth0

I searched for a solution quite a bit but can't seem to find anything that works. I changed the .yml compose file according to some suggestions, but without success.

I tried to install the missing module but could not get it to work.

It's a completely clean install of Ubuntu 24.04.1 LTS, kernel Linux 6.8.0-47-generic.
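In case it helps someone with the same error: my understanding is that modprobe fails inside the container because a container has no kernel modules of its own, so the modules have to be loaded on the host, roughly like this (untested on my machine; the file name is my choice):

sudo modprobe ip6_tables ip6table_raw
echo ip6_tables | sudo tee /etc/modules-load.d/wireguard.conf

The second line makes it persistent across reboots. Alternatively, the commented-out /lib/modules:/lib/modules volume in the compose file below should give the container access to the host's modules.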

Here is the compose file, in case it's needed. It should be the exact same one as provided by linuxserver in their GitHub:


services:
  wireguard:
    image: lscr.io/linuxserver/wireguard:latest
    container_name: wireguard-client
    cap_add:
      - NET_ADMIN
      - SYS_MODULE #optional
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Berlin
#      - SERVERURL=wireguard.domain.com #optional
#      - SERVERPORT=51820 #optional
#      - PEERS=1 #optional
#      - PEERDNS=auto #optional
#      - INTERNAL_SUBNET=10.13.13.0 #optional
#      - ALLOWEDIPS=0.0.0.0/0 #optional
#      - PERSISTENTKEEPALIVE_PEERS= #optional
#      - LOG_CONFS=true #optional
    volumes:
      - /srv/wireguard/config:/config
#      - /lib/modules:/lib/modules #optional
    ports:
      - 51820:51820/udp
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    restart: unless-stopped

Here is the complete error log from the WG client container:


[migrations] started
[migrations] no migrations found
usermod: no changes
───────────────────────────────────────

      ██╗     ███████╗██╗ ██████╗
      ██║     ██╔════╝██║██╔═══██╗
      ██║     ███████╗██║██║   ██║
      ██║     ╚════██║██║██║   ██║
      ███████╗███████║██║╚██████╔╝
      ╚══════╝╚══════╝╚═╝ ╚═════╝

   Brought to you by linuxserver.io
───────────────────────────────────────

To support the app dev(s) visit:
WireGuard: https://www.wireguard.com/donations/

To support LSIO projects visit:
https://www.linuxserver.io/donate/

───────────────────────────────────────
GID/UID
───────────────────────────────────────

User UID:    1000
User GID:    1000
───────────────────────────────────────
Linuxserver.io version: 1.0.20210914-r4-ls55
Build-date: 2024-10-10T11:23:38+00:00
───────────────────────────────────────
    
Uname info: Linux ec3813b50277 6.8.0-47-generic #47-Ubuntu SMP PREEMPT_DYNAMIC Fri Sep 27 21:40:26 UTC 2024 x86_64 GNU/Linux
**** It seems the wireguard module is already active. Skipping kernel header install and module compilation. ****
**** Client mode selected. ****
[custom-init] No custom files found, skipping...
**** Disabling CoreDNS ****
**** Found WG conf /config/wg_confs/peer1.conf, adding to list ****
**** Activating tunnel /config/wg_confs/peer1.conf ****
[#] ip link add peer1 type wireguard
[#] wg setconf peer1 /dev/fd/63
[#] ip -4 address add 10.13.13.2 dev peer1
[#] ip link set mtu 1420 up dev peer1
[#] resolvconf -a peer1 -m 0 -x
s6-rc: fatal: unable to take locks: Resource busy
[#] wg set peer1 fwmark 51820
[#] ip -6 route add ::/0 dev peer1 table 51820
[#] ip -6 rule add not fwmark 51820 table 51820
[#] ip -6 rule add table main suppress_prefixlength 0
[#] ip6tables-restore -n
modprobe: FATAL: Module ip6_tables not found in directory /lib/modules/6.8.0-47-generic
ip6tables-restore v1.8.10 (legacy): ip6tables-restore: unable to initialize table 'raw'
Error occurred at line: 1
Try `ip6tables-restore -h' or 'ip6tables-restore --help' for more information.
[#] resolvconf -d peer1 -f
s6-rc: fatal: unable to take locks: Resource busy
[#] ip -6 rule delete table 51820
[#] ip -6 rule delete table main suppress_prefixlength 0
[#] ip link delete dev peer1
**** Tunnel /config/wg_confs/peer1.conf failed, will stop all others! ****
**** All tunnels are now down. Please fix the tunnel config /config/wg_confs/peer1.conf and restart the container ****
[ls.io-init] done.

Thanks a lot. I appreciate any input!