• 32 Posts
  • 167 Comments
Joined 1 year ago
Cake day: July 2nd, 2023


  • I’m not familiar with Nextcloud, but from reading the “How to use this?” section of the README, I believe you can run it behind a reverse proxy:

    --publish 80:80 This means that port 80 of the container should get published on the host using port 80. It is used for getting valid certificates for the AIO interface if you want to use port 8443. **It is not needed if you run AIO behind a web server or reverse proxy** and can get removed in that case as you can simply use port 8080 for the AIO interface then.

    (Emphasis mine, in “Explanation of the command”)

    My understanding is that you only have to forward traffic from the reverse proxy to port 8080. It uses a self-signed certificate though, so check whether the reverse proxy you are using verifies certificate signatures for upstream servers.
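
    If you want a concrete starting point, here is a minimal sketch of what that could look like with Caddy (the subdomain is hypothetical, and I’m assuming you reach the AIO interface over HTTPS on port 8080 as described above):

        # Hypothetical Caddyfile: forward a subdomain to the AIO interface.
        # tls_insecure_skip_verify is needed because AIO serves a
        # self-signed certificate on port 8080.
        aio.example.tld {
            reverse_proxy https://localhost:8080 {
                transport http {
                    tls_insecure_skip_verify
                }
            }
        }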


  • It is possible. What you’re looking for is a reverse proxy: an HTTP server that listens on the standard HTTP and HTTPS ports and routes traffic to the chosen service based on the domain name or URL.

    In your case, every subdomain would point to your VPS’s IP, and traffic for mastodon.example.tld would be seamlessly proxied to your Mastodon container.

    Do some research on Caddy or Nginx, and I strongly recommend you learn Docker Compose and Docker networking; they will make everything much easier to maintain. See the sketches below for a starting point.

    PS: a CNAME pointing to an A record is the way to go. You can go one better with a CNAME entry for *.example.tld, so that you don’t have to create a new entry for every service you deploy, but make sure your reverse proxy won’t forward requests for a bogus subdomain to an unexpected container.
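
    To make this concrete, here is a minimal Caddyfile sketch of name-based routing (the domains, service names, and ports are hypothetical):

        # Hypothetical Caddyfile: each site block matches on the Host
        # header and proxies to one container, reachable by name over
        # a shared Docker network.
        mastodon.example.tld {
            reverse_proxy mastodon:3000
        }

        nextcloud.example.tld {
            reverse_proxy nextcloud:80
        }

    And a sketch of the Compose side, assuming the official caddy image and a shared network so the reverse proxy can resolve the container names above:

        # Hypothetical docker-compose.yml excerpt
        services:
          caddy:
            image: caddy:2
            ports:
              - "80:80"     # HTTP
              - "443:443"   # HTTPS
            volumes:
              - ./Caddyfile:/etc/caddy/Caddyfile:ro
            networks: [proxy]
          mastodon:
            image: ghcr.io/mastodon/mastodon  # hypothetical, use the image you actually deploy
            networks: [proxy]
        networks:
          proxy: {}

    A nice side effect of declaring each subdomain explicitly is that Caddy refuses requests for hostnames it doesn’t know about, so a bogus subdomain under the wildcard never reaches an unexpected container.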


  • There were multiple causes of its demise.

    The big one was security (or the lack thereof): attackers would abuse plug-ins through NPAPI, and I remember a time when every month brought new 0-days exploiting Flash vulnerabilities.

    The second, in my opinion, was the desire to standardize features in the browser. For example, playing DRM-protected content required Silverlight, which wasn’t supported on Linux, and most interactive games and some websites required Flash, which had terrible performance issues. So it felt natural to provide these features directly in the browser, without lock-in.

    Which leads to your second question: I don’t think we will ever see a return to NPAPI or anything similar. The browser ecosystem is vibrant, and the W3C is keen to standardize newly needed features. The first example that comes to mind is WebAuthn: it was integrated directly into browsers, whereas 10 years ago it would have been shipped as an NPAPI plug-in.