I have a PC running Portainer with various Docker services (Home Assistant, Jellyfin, etc.), and an ISP-supplied router that fixes device IP addresses and updates a DynDNS entry.
I really want to move everything over to HTTPS by supplying certificates, TLS termination, etc.
The issue I have is that self-signed certificates mean I have to manage certificate deployment to every device in the house.
I figure I need to link a domain to the DynDNS entry and arrange certs for the domain. However, I can’t get that link working, and everywhere wants >£100 to issue a certificate.
How are people solving this issue?
Let's Encrypt provides free certificates. I would set up Nginx Proxy Manager and use a DNS challenge with your DynDNS provider to get HTTPS on your home services.
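For reference, a minimal compose sketch for deploying NPM; the ports and volume paths are the project's documented defaults. The DNS challenge itself is configured later in the admin UI with your DNS provider's API credentials.

```yaml
services:
  npm:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'    # HTTP (optional if you only use DNS challenges)
      - '443:443'  # HTTPS
      - '81:81'    # admin UI
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
```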
My problem - and I’m not alone - is that I really don’t want to expose anything publicly. Is there a way to do this without exposing anything to the Internet?
You don’t have to expose Nginx publicly; it can exist privately on your network. I have my own domain and an internal DNS server. For example, `nginx.home.datallboy.com` and `jellyfin.home.datallboy.com` both resolve to the NPM server at `192.168.1.10`. Nginx then listens for `jellyfin.home.datallboy.com` and proxies those connections to my Jellyfin VM at `192.168.1.20`.

Since I own the domain (`datallboy.com`), I let Nginx Proxy Manager do a DNS challenge, which is only used to prove that I own the domain. It inserts a TXT record into the domain’s public DNS for verification, and the record can be removed afterwards. Let's Encrypt then issues a certificate for `https://jellyfin.home.datallboy.com`, which I can only access locally, since the name resolves only to private IP addresses. The only thing “exposed” is the fact that Let's Encrypt issued a certificate for your domain (it will show up in certificate transparency logs), and the domain isn’t reachable from the internet anyway. You do not have to create your own CA server.
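If it helps to see the DNS side, here's a one-line sketch assuming dnsmasq as the local resolver (Pi-hole uses it under the hood; other local DNS servers have an equivalent). It maps every `*.home.datallboy.com` name to the NPM box:

```
# Resolve all *.home.datallboy.com names to the NPM server,
# which then routes by hostname to the right backend.
address=/home.datallboy.com/192.168.1.10
```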
I have a public domain that I only use internally on my home network, and a local DNS server that handles all my internal DNS records. I just point my DNS records to my Nginx Proxy Manager’s local IP address and let it create certs using a DNS challenge, so I don’t need to expose anything externally to make it work.
I am new at this, but from my understanding, if you don’t want to expose anything to the internet, you need to run your own CA to issue certificates and supply the certs needed for HTTPS on your homelab.
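For what it's worth, a bare-bones sketch of that route with openssl (file names and the `jellyfin.home.lan` hostname are placeholders; tools like step-ca or easy-rsa automate the same steps):

```sh
# 1. Create the CA key and self-signed root certificate.
#    ca.crt is what you distribute to every device in the house.
openssl genrsa -out ca.key 4096
openssl req -x509 -new -key ca.key -sha256 -days 3650 \
  -subj "/CN=Home Lab Root CA" -out ca.crt

# 2. Issue a cert for a service, with the SAN that clients will check.
#    (the <(...) process substitution assumes bash)
openssl req -new -newkey rsa:2048 -nodes -keyout jellyfin.key \
  -subj "/CN=jellyfin.home.lan" -out jellyfin.csr
openssl x509 -req -in jellyfin.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 825 -sha256 -out jellyfin.crt \
  -extfile <(printf "subjectAltName=DNS:jellyfin.home.lan")
```

The trade-off is exactly the one described above: every browser and device has to trust `ca.crt` before the warnings go away.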
That’s essentially what I ended up having to do, but I keep hoping that I’ve missed something.
I also find that people seem to ignore this route, assuming everyone is fine with public DNS pointing at their home IP and HTTP/HTTPS ports open.
Gotta live on the edge, man. Open up your router. All ports. Firewalls are for pansies. Connect your laptop directly to the modem. Enable `ssh` and `rdp`. What could go wrong?
Caddy reverse proxy handles that for me. I just set my domains’ DNS to point to my public IP, where ports 80 and 443 are forwarded to a server with Caddy listening.
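For anyone curious, a minimal Caddyfile sketch of that setup (the domain and IPs are placeholders; 8096 and 8123 are Jellyfin's and Home Assistant's default ports):

```
# Caddy obtains and renews Let's Encrypt certs automatically for any
# site it serves, as long as DNS resolves here and 80/443 reach it.
jellyfin.example.com {
    reverse_proxy 192.168.1.20:8096
}

homeassistant.example.com {
    reverse_proxy 192.168.1.21:8123
}
```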
As far as the money goes, you could use DuckDNS; it’s free, certificate included. And if you don’t want to expose your network, I’m not understanding why you’d want HTTPS at all. You could use WireGuard instead.
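A minimal WireGuard server sketch (`wg0.conf`; keys and addresses are placeholders). The only thing exposed is a single UDP port, and everything else stays on the LAN:

```ini
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820          # the one UDP port forwarded on the router
PrivateKey = <server-private-key>

[Peer]
# a roaming laptop/phone that reaches the LAN through the tunnel
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```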
I use pfSense’s HAProxy integration and a combination of Cloudflare and Let's Encrypt certificates for external stuff. For internal-only stuff I have a root CA, distributed to my computers, that I use to sign certificates. My Docker box, which serves most of my internal stuff, has an nginx-proxy-manager container with a wildcard certificate so that I don’t have to sign one for every new subdomain on the host, and the various service containers talk to it over a private Docker network. Buying a cheap domain and managing it through Cloudflare simplifies a ton of stuff.
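A compose sketch of that private-network pattern (service names and images are illustrative): only the proxy publishes ports, so the backends are unreachable except through it.

```yaml
services:
  npm:
    image: 'jc21/nginx-proxy-manager:latest'
    ports:
      - '443:443'       # the only published port on the host
    networks: [proxy]
  jellyfin:
    image: 'jellyfin/jellyfin:latest'
    networks: [proxy]   # no ports published; NPM proxies to http://jellyfin:8096

networks:
  proxy: {}
```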