Raspberry Pi 4 Docker:- gluetun (qBittorrent, Prowlarr, FlareSolverr), Tailscale (Jellyfin, Jellyseerr, Mealie), Radarr/Readarr/Sonarr, Pi-hole, Unbound, Portainer, Watchtower.

Raspberry Pi 3 Docker:- Pi-hole, Unbound, Portainer.
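For anyone curious how the gluetun grouping above hangs together, a minimal hypothetical compose sketch is below: the torrent client shares gluetun’s network namespace, so its traffic can only leave through the VPN. The image tags, provider, and ports are illustrative assumptions, not my actual config.

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun:v3            # illustrative; pin the exact tag you pull
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun
    environment:
      - VPN_SERVICE_PROVIDER=mullvad    # placeholder provider
    ports:
      - "8080:8080"                     # qBittorrent WebUI, published via gluetun

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:5.0.3   # illustrative tag
    network_mode: "service:gluetun"     # share gluetun's network namespace
    depends_on:
      - gluetun
```

The same `network_mode: "service:gluetun"` line is how Prowlarr and FlareSolverr would ride along; if gluetun is down, nothing behind it can reach the internet.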

  • 1 Post
  • 57 Comments
Joined 1 year ago
Cake day: June 26th, 2023


  • Oh, routing. I remember watching an “off-site backup” video where they set up iptables, or IP forwarding, or some such, so that when their parents tried to access Jellyfin locally it was routed over Tailscale. Maybe I’m misremembering, though. I’m not confident enough to start thinking about it seriously, so I logged it as “that’s possible” and moved on.

    That way I just have to keep one instance of Jellyfin/Immich/etc. up to date. It’s all a bit beyond my ken currently, but it’s the way I’m trying to head. At least until I learn a better way.

    Ideally, I give someone a Pi all set up. They plug it in, go to service.domain.xyz, and it routes to me. Or even IP:port would be fine; I’ll write them down and stick them to their fridge.

    My parents and I run each other’s off-site backup (Tailscale + Syncthing), but their photo and media services are independent from mine. I just back up their important data and they return the favour, but we can’t access or share anything. (A sketch of how that pairing can look is below this comment.)

    Guides like yours are great for showing what’s possible. I often find myself not knowing what I don’t know, so I don’t really know where to start learning what I need to learn.
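    A minimal hypothetical compose sketch of that Tailscale + Syncthing pairing; the hostname, auth key, tags, and paths are placeholders, not our actual setup:

```yaml
services:
  tailscale:
    image: tailscale/tailscale:latest   # illustrative; pin a tag in practice
    hostname: backup-pi                 # becomes the MagicDNS name on the tailnet
    environment:
      - TS_AUTHKEY=tskey-auth-XXXX      # placeholder pre-auth key
      - TS_STATE_DIR=/var/lib/tailscale
    volumes:
      - ./ts-state:/var/lib/tailscale
    devices:
      - /dev/net/tun
    cap_add:
      - NET_ADMIN

  syncthing:
    image: syncthing/syncthing:1.27     # illustrative tag
    network_mode: "service:tailscale"   # only reachable over the tailnet
    volumes:
      - ./sync:/var/syncthing           # the folder the other household syncs to
```

Each household runs something like this; Syncthing then pairs over the Tailscale IPs (or MagicDNS names), so neither side has to open any ports to the internet.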


  • What a write up, thank you for documenting this.

    I understand a lot of people in this hobby do it professionally too, so a lot is assumed to be common knowledge that we outsiders just don’t have.

    While my system of using Tailscale’s MagicDNS to hit lxc:port works fine for my fiancée and me, expanding this to a family-wide system would prove challenging.

    So this guide is the next step. I could send my fiancée to <home.domain.xyz> and it’ll take her to Homarr, or to <jellyseerr.domain.xyz>.

    The ultimate dream would be to give family members a Pi Zero and a <home.domain.xyz>, and then run a family Jellyfin/Immich.


  • Just because you didn’t see value in the product doesn’t mean others don’t. It saved space for me because I don’t need a slow cooker, rice cooker, pressure cooker, yogurt maker, etc. They’re all gone, replaced with a one-stop shop of “if it’s wet, it goes in the IP” (Instant Pot).

    It simplified processes and made them amazingly repeatable too. Stocks are a breeze: set, forget, come back when it beeps. I don’t nurse temperatures or times, and I don’t stress about things boiling over, boiling dry, getting too hot or not hot enough.

    Sterilisation for brewing: come back when it beeps. Yogurt making: come back when it beeps. Dough fermenting: come back when it beeps. Soup: come back when it beeps. My fiancée wouldn’t touch pressure cooking because she was anxious it would explode; now she comes back when it beeps.

    It doesn’t do anything as well as a dedicated device, true enough, but it’s good enough not to buy those things and just use the IP. I’d have to eat a lot of rice to justify a rice cooker as well as an IP.


  • A good general suggestion. The WAF (wife acceptance factor) criteria I follow are ‘reasonable’ expense, reasonable form factor, and a physical investment. I floated the idea of a VPS, and that’s when I learned of the third criterion. It is what it is.

    I just started on this 8 TB HDD so it isn’t very full right now; I could raise the ratio limits. But I worry about filling the HDD, and part of me worries about hundreds of torrents on an N100 that’s doing other things. So I’m keeping the habit from my Pi 4 + 1 TB days of deleting media once we’ve watched it and keeping the torrent count low.

    I justify it as self-managing, though: popular ISOs are on and then off my hard drive fairly quickly, but the ones that need me will sit and wait until they hit the ratio of 3, however long that takes. I would like to do “3 + (get that last leecher to 100%)”, but I don’t know how, or if, it’s possible to automate that through Prowlarr.


  • Personally running an Argon Neo on the Pi 4, zero complaints. The Flirc is better looking by half (imho), but the Neo outperforms it thermally (with the cover off, at least; that’s what the articles I read claimed when I was looking).

    I’m running it as a Pi-hole/Jellyfin and Servarr box, passively cooled, with zero problems.

    Edit, one complaint: I sometimes regret not setting up NVMe support; instead I have the OS on a USB SSD. That, a USB HDD, an Ethernet cable, and a USB keyboard/mouse make the I/O a little crowded.


  • I guessed it was a “once bitten, twice shy” kind of thing. This is all a hobby to me, so the cost-benefit, I think, is vastly different; nothing in my setup is critical. Keeping all those records, staying up to date on what version everything is on, when updates are available, what those updates do, and… it sounds like a whole lot of effort when my efforts can currently be better spent in other areas.

    In my arrogance I just installed Watchtower and accepted that it can all come crashing down. When that happens I’ll probably realise it’s not so much effort after all.

    That said, I’m currently learning, so if something is going to break my stuff, it’s probably going to be me and not an update. Not to discredit your comment; it was informative and useful.


  • Fedegenerate@lemmynsfw.com to Selfhosted@lemmy.world · What’s the deal with Docker? (edited 8 months ago)

    When I asked this question, this was the answer I got:

    So there are many reasons, and this is something I nowadays almost always do. But keep in mind that some of us have used Docker for our applications at work for over half a decade now. Some of these points might be relevant to you; others might seem, or be, unimportant.

    • The first and most important thing you gain is a declarative way to describe the environment (OS, dependencies, environment variables, configuration).
    • Then there is the packaging format. Containers are a way to package an application with its dependencies and distribute it easily through Docker Hub (or other registries). Redeploying is a matter of running a script and specifying the image and its tag (never use latest). You will never ask yourself again, “What did I need to do to install this again? Run some random install.sh script off a GitHub URL?”
    • Networking with Docker is a bit hit and miss, but the big thing is that you can have whatever software running on any port inside the container and expose it on another port on the host. E.g. two apps both run on port :8080 natively, and one of them will fail to start because the port is taken; with Docker you can keep them on their preferred ports but expose one on 18080 and the other on 19080 instead (see the sketch after this list).
    • You keep your host simple and empty of installed software and packages. This is less of a problem with apps that come packaged as native executables, but there are languages out there that require you to install a runtime to be able to start the app. Think .NET or Java, but there is also Python, which requires you to install it on the host and keep the versions compatible (there are virtual environments for that, but I’m going into too much detail already).
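    A minimal hypothetical sketch of that port point in compose; the images and tags are made-up placeholders:

```yaml
services:
  app-one:
    image: example/app-one:1.2.3   # hypothetical image, pinned tag (not :latest)
    ports:
      - "18080:8080"               # host 18080 -> container port 8080

  app-two:
    image: example/app-two:4.5.6   # hypothetical image
    ports:
      - "19080:8080"               # host 19080 -> container port 8080
```

    Both apps keep listening on :8080 inside their containers, the host sees them on 18080 and 19080, and with the file on disk redeploying is just `docker compose up -d`.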

    I am also new to self-hosting, check my bio and post history for a giggle at how new I am, but I have taken advantage of all these points. I do use “latest”, though; looking forward to seeing how that burns me later on.

    But to add one more:- my system is robust, in that I can really break my containers (and I do), and recovery is a couple of clicks in Portainer. Then I can try again, no harm done.