Getting 404 Error before getting to registration

Good morning! I’ve been trying to get this up and running and have been stuck at the starting line. I’m using Docker (the Compose file from the example, or part of it anyway) and Caddy, again following the example setup with modifications for my specific instances and naming conventions. I’ve tried a great many variations on this Compose file and have verified via the shell that all of the containers can communicate with one another and that Caddy can reach the containers. The issue: after everything comes up, navigating to the frontend just returns a 404 from the API with “message: not found”. The frontend container can reach the API container via its name, I’ve set the URL environment variable, and the URL with /api/v1/info returns the expected information regardless of setup.

Here’s the Docker Compose file I’m using.

version: '3'

services:
  vikunja-db:
    container_name: vikunja-db
    image: mariadb:10
    command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    environment:
      MYSQL_ROOT_PASSWORD: supersecret
      MYSQL_USER: vikunja
      MYSQL_PASSWORD: secret
      MYSQL_DATABASE: vikunja
    volumes:
      - ./db:/var/lib/mysql
    restart: unless-stopped
    networks:
      - reverseproxy-nw

  vikunja-api:
    container_name: vikunja-api
    image: vikunja/api
    environment:
      - VIKUNJA_DATABASE_HOST=vikunja-db
      - VIKUNJA_SERVICE_JWTSECRET=doublesecret
      - VIKUNJA_LOG_HTTP=stdout
      - VIKUNJA_LOG_EVENTS=stdout
    volumes:
      - ./files:/app/vikunja/files
    depends_on:
      - vikunja-db
    restart: unless-stopped
    networks:
      - reverseproxy-nw

  vikunja-frontend:
    container_name: vikunja-frontend
    image: vikunja/frontend
    environment:
      - VIKUNJA_API_URL=http://vikunja-api:3456/api/v1
      - VIKUNJA_LOG_HTTP=stdout
    restart: unless-stopped
    networks:
      - reverseproxy-nw

networks:
  reverseproxy-nw:
    external: true

Here’s the Caddy portion, which works for all the other services I’m running.

@vikunja host 
handle @vikunja {
    import localSubnets
    reverse_proxy @localSubnets /api/* http://vikunja-api:3456
    reverse_proxy @localSubnets /.well-known/* http://vikunja-api:3456
    reverse_proxy @localSubnets /dav/* http://vikunja-api:3456
    reverse_proxy @localSubnets http://vikunja-frontend:80
}
Any thoughts? Am I missing something simple? If you need any additional information or have any questions about the setup, I’m happy to provide whatever you might need. It’s also worth noting that despite the additional logging setup, the only log entries I see are 404 responses from the API server. I also get a 401 if I just navigate to (URL)/api/v1/ by itself. The localSubnets bit in the Caddy config just limits access to my internal networks, but I can remove it if it seems like it’d cause an issue.

OK, I have been fiddling with this for quite a while, and have found that the issue comes from the reverse proxy. Specifically, the @localSubnets portion, which limits access to internal networks only. Even when it’s expanded to include all RFC 1918 addresses, it still doesn’t work. Only when that portion is removed does this work as expected. Is there some public-access component that is required, or is there some odd way the frontend interacts with or calls the API that makes it appear as if the request isn’t coming from an internal IP address?

The frontend does not make any requests by itself. The browser where you’re accessing the frontend makes the requests to the api. This means your browser needs to be able to reach the api.
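If that’s what’s happening here, then VIKUNJA_API_URL has to be a URL the browser can reach — the public hostname served by Caddy — rather than the internal container name, which only resolves inside the Docker network. A sketch of the frontend service under that assumption (vikunja.example.com is a placeholder, not your real hostname):

```yaml
  vikunja-frontend:
    container_name: vikunja-frontend
    image: vikunja/frontend
    environment:
      # Placeholder hostname — use whatever the browser actually visits.
      # http://vikunja-api:3456 only resolves inside the Docker network.
      - VIKUNJA_API_URL=https://vikunja.example.com/api/v1
    restart: unless-stopped
    networks:
      - reverseproxy-nw
```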

OK, so just removing the limiting list from the API routes seems to work, but I still don’t understand exactly why. The same list works fine on the frontend and on all of my other containers behind this proxy; it only throws a 404 when the API is behind that list. I wonder if it’s a header-rewrite issue or something.

Does Vikunja return that 404? Or Caddy?
Anything in Vikunja’s logs?

Vikunja returns the 404, Caddy sometimes throws a 502 with

{"level":"error","ts":1704462528.5329187,"logger":"http.log.error","msg":"dial tcp: lookup *: no such host","request":{"remote_ip":"","remote_port":"53173","client_ip":"","proto":"HTTP/3.0","method":"GET","host":"","uri":"/assets/","headers":{"Sec-Fetch-Site":["same-origin"],"Sec-Fetch-Dest":["empty"],"Accept-Encoding":["gzip, deflate, br"],"Accept-Language":["en-US,en;q=0.9"],"Sec-Fetch-Mode":["no-cors"],"User-Agent":["Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/ Safari/537.36"]},"tls":{"resumed":true,"version":772,"cipher_suite":4867,"proto":"h3","server_name":""}},"duration":0.000254829,"status":502,"err_id":"idzxmnr5r","err_trace":"reverseproxy.statusError (reverseproxy.go:1267)"}

The frontend has no logs, but the api just shows a 404 like so

2024-01-05T14:52:30.375769757Z: WEB ▶ GET 404 / 21.659µs - Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/ Safari/537.36
2024-01-05T14:52:32.581643443Z: WEB ▶ GET 404 /sw.js 25.411µs - Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/ Safari/537.36

This looks like the request is made to the api instead of the frontend.
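One thing worth checking: in a Caddyfile, a directive accepts at most one matcher token, and it must be the first argument. If that applies here, then in a line like `reverse_proxy @localSubnets /api/* http://vikunja-api:3456`, only `@localSubnets` is taken as the matcher, and both `/api/*` and `http://vikunja-api:3456` get parsed as upstream addresses — which would fit both symptoms: a 502 with `lookup *: no such host` whenever the bogus upstream is picked, and frontend paths like `/` and `/sw.js` landing on the API whenever the real one is. A sketch that folds the path and IP conditions into one named matcher instead (`@vikunjaApi` is a name I made up; `private_ranges` is Caddy shorthand for loopback plus the private address blocks, available in Caddy 2.6+ — on older versions list the CIDRs explicitly):

```caddyfile
@vikunjaApi {
    path /api/* /.well-known/* /dav/*
    remote_ip private_ranges
}
handle @vikunja {
    reverse_proxy @vikunjaApi http://vikunja-api:3456
    reverse_proxy http://vikunja-frontend:80
}
```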

FWIW, I still haven’t been able to figure this out. I’ve just allowed the traffic from anywhere for the time being, and since there’s no public DNS entry for it, it’s unreachable from the outside world. One day I’ll set up something that can sniff internal Docker traffic and see exactly what requests are being made. Thanks for the assistance nonetheless.