As this will (thanks to me being quite clueless) be a very open question, I will start with the setup:
One nginx server on an old Raspi, getting ports 80 and 443 routed from the access point and serving several pages as well as some reverse proxies for other services.
So a (very simplified) nginx server-block that looks like this:
# serve stuff internally (without a hostname) via http
server {
    listen 80 default_server;
    http2 on;
    server_name _;
    location / {
        proxy_pass http://localhost:5555/;
        # that's where all actual stuff is located
    }
}
# reroute http traffic with hostname to https
server {
    listen 80;
    http2 on;
    server_name server_a.bla;
    location / {
        return 301 https://$host$request_uri;
    }
}
server {
    listen 443 ssl default_server;
    http2 on;
    server_name server_a.bla;
    ssl_certificate A_fullchain.pem;
    ssl_certificate_key A_privkey.pem;
    location / {
        proxy_pass http://localhost:5555/;
    }
}
# actual content here...
server {
    listen 5555;
    http2 on;
    root /srv/http;
    location / {
        index index.html;
    }
    location = /page1 {
        return 301 page1.html;
    }
    location = /page2 {
        return 301 page2.html;
    }
    # reverse proxy for an example webdav server
    location /dav/ {
        proxy_pass http://localhost:6666/;
    }
}
Which works well.
And intuitively, putting Anubis into the chain looked like it should be simple: just point the proxy_pass (and the required headers) in the “port 443” section at Anubis, and have Anubis pass things along to localhost:5555 again.
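In other words, the 443 block roughly became something like this (a sketch, not my exact config; the port 8923 for Anubis and the header set are illustrative, adjust to wherever your Anubis instance actually listens):

```nginx
server {
    listen 443 ssl default_server;
    http2 on;
    server_name server_a.bla;
    ssl_certificate A_fullchain.pem;
    ssl_certificate_key A_privkey.pem;
    location / {
        # hand everything to Anubis first (port is an example)
        proxy_pass http://localhost:8923/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
# Anubis itself is then configured to forward cleared requests
# on to http://localhost:5555
```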
Which really worked just as expected… but only for server_a.bla, server_a.bla/page1 or server_a.bla/page2.
server_a.bla/dav just hangs and hangs, to then time out, seemingly trying to open server_a.bla:6666/dav.
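For context, this is the part of nginx's documented proxy_pass behavior I thought I understood: whether the proxy_pass URL has a URI part (even just a trailing `/`) decides whether the matched location prefix is replaced or kept:

```nginx
# /dav/foo is forwarded upstream as /foo
# (proxy_pass has a URI part, "/", so the /dav/ prefix is replaced by it)
location /dav/ {
    proxy_pass http://localhost:6666/;
}

# /dav/foo is forwarded upstream unchanged as /dav/foo
# (no URI part in proxy_pass, so the request URI is passed as-is)
location /dav/ {
    proxy_pass http://localhost:6666;
}
```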
So long story short…
How does proxy_pass actually work, such that the first setup works yet the second breaks? How does a request for localhost:6666 (already behind earlier proxy passes in both cases) somehow end up querying the hostname instead?
And what do I need to configure (or what information/header do I need to pass on) to keep the internal communication intact?


This is the page I landed on that describes the nginx setup: https://anubis.techaro.lol/docs/admin/environments/nginx
What the Docker Compose part is about, I don’t know.
And to give you a reference to some of the details glossed over…
Running the Anubis instance on a unix socket doesn’t work as described there, because the systemd service runs as root by default while your web server needs access to the socket. So you first need to align the user the anubis service runs as with your web server’s user, along with the permissions on the /run/anubis directory.
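Roughly, the fix amounts to a systemd drop-in along these lines (a sketch; the unit name `anubis@default` and the user/group `http` are examples, substitute whatever your distro and web server actually use):

```ini
# /etc/systemd/system/anubis@default.service.d/override.conf
[Service]
User=http
Group=http
```

Then make sure /run/anubis is owned by (or at least accessible to) that same user before restarting the service, so the web server can open the socket.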
(see Discussion here for example)
Also, having the single setup example in the docs use unix sockets when that isn’t even the default is strange in the first place…
Half the environment variables are just vaguely described, without actual context. It probably makes perfect sense when you already know it all and are writing the description. But as documentation for third-party use, that’s not sufficient.
Oh, and the example setup for Caddy is nonsensical. It shows you how to route traffic to Anubis and then stops… and references the Apache and Nginx setups to get an idea of how to continue (read: to understand that you then need a second Caddy instance to receive the traffic…).
PS: All that criticism reads harsher than it is meant to be. Good documentation needs user input and multiple viewpoints to realize where the gaps are. That’s simply not going to happen with mostly one person.