I’m thinking about getting started using Docker and an older Raspberry Pi. I’m already hosting a Grafana service on it, so it can’t be fully dedicated to HA. So curious what everyone is using.
I got a second-hand HP EliteDesk mini from eBay. They are small and quite affordable.
I run way too much stuff on it.
Old desktops that the IT guy at a previous job showed me how to request.
@ohlaph Home Assistant OS on a Dell Micro with an i5-6500T in it and 16 GB of RAM.
Runs extremely well, just slow for ESPHome builds, so I don’t use the add-on anymore. Also, while TTS is plenty fast, I couldn’t use anything larger than tiny-int8 or base-int8 for faster-whisper. I offloaded that to my server with my old RTX 2070 in it, which can run the turbo model for speech to text.
But no Ollama or similar, fuck using those. I’ve only ever gotten uselessness out of them and I ain’t paying someone else to use theirs to do the same thing just with slightly fewer incidents of “I didn’t find a device called <the thing you said but slightly out of order and now the exact same as it’s actually called>”.
Home Assistant Yellow
Home Assistant is in a VM in Proxmox on a Dell Micro i5 8th gen, with the Z-Wave adapter passed through.
An “old” PC with an i7-4790T and 32 GB RAM.
I also have some Odroid devices based on 32-bit ARM.
But 32-bit ARM has the problem that many container images no longer support that architecture.
So if your Pi is already 64-bit ARM, it should be fine.
Otherwise the selection of available prebuilt container images may be smaller.
Pi 4 w/ SSD.
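If you’re unsure whether a given image still ships a 32-bit ARM variant, you can check its manifest before pulling (the image name here is just an example):

```shell
# List the platforms a multi-arch image is published for
docker manifest inspect ghcr.io/home-assistant/home-assistant:stable \
  | grep -E '"(architecture|variant)"'

# And on the Pi itself, check which userland you're actually running:
uname -m   # aarch64 = 64-bit ARM, armv7l/armv6l = 32-bit
```

Note that `uname -m` reports the kernel architecture; a 64-bit kernel with a 32-bit userland (as on some older Raspberry Pi OS installs) can still trip up Docker’s image selection.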
A Supermicro 5018A-FNT4 with 16 GB RAM. HA takes up about 25%; the rest is InfluxDB and Grafana.
N100. Cheap enough (last year) and plenty of power to run things like Jellyfin on it as well. LinuxServer.io makes great Docker images.
A Xeon E5-2650v4 on a Supermicro X10DRL-i and like a million dollars worth (128GB) of DDR4.
I host on a Raspberry Pi 4 in a Docker container. I’ve added an SSD to the Pi for longevity!
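For anyone going the container route on a Pi, here’s a minimal docker-compose sketch along the lines of the official container install docs (the config path is a placeholder you’d change):

```yaml
services:
  homeassistant:
    container_name: homeassistant
    image: ghcr.io/home-assistant/home-assistant:stable
    volumes:
      - /srv/homeassistant/config:/config   # placeholder path for HA's config dir
      - /etc/localtime:/etc/localtime:ro    # keep the container clock in sync with the host
    privileged: true
    network_mode: host      # lets mDNS/multicast device discovery work
    restart: unless-stopped
```

`network_mode: host` is what lets integrations that rely on multicast discovery find devices; without it you’d have to map ports manually and lose some auto-discovery.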
Custom SFF PC. Ryzen 2700X, Gigabyte B450i, Intel A380, and some WD red plus drives.
In Docker on a Synology DS1522+, works well. I used to use the Synology HA app, but it was always some versions behind, and I was pleasantly surprised that backup and restore was easy to move it to Docker. I’d say that if you change your mind about how you host it in future that it will be fairly easy to change.
I use a dedicated Raspberry Pi (5, previously had on a 4).
I host everything else on a different server, the HA one is dedicated. Pretty nice because then it can run HAOS and basically manages everything itself.
One factor in keeping it separate was I wanted it to be resilient. I don’t want stuff to stop working if I restart my server or if the server dies for some reason. My messing around on my server is isolated from my smart home.
I also have a separate Pi (a 4, previously a 1B) that runs Pi-hole on its own for the same reason: if it stops working, or even pauses for a moment, the internet stops working.
Yeah I run HAOS in a VM but I keep a backup on an SD card that I can pop into a raspi if for whatever reason the server is down.
People throw a lot of shade at the Pi but I love having dedicated hardware for some more critical projects.
Up until a couple of weeks ago I was running it on a dedicated Pi 4. It’s now running as a VM in Proxmox on a pair of Lenovo M710q mini PCs I got off eBay for £40 each.
I did load them up with RAM, upgrade the CPUs and add a second NIC, so they probably came in at more like the cost of a 16 GB Pi 5. Each. The RAM was the pricey part. I’ve measured the power usage and they each use about a third more power than the Pi did, which I’m happy with. Given that, plus the added flexibility of running Proxmox and how quiet they are, I’m super happy with the setup.
Oh, and I used to run Pi-hole on another Pi. That’s gone now, replaced with Technitium DNS running as a pair of VMs too. That was surprisingly easy to do.
It’s now running as a VM in Proxmox on a pair of Lenovo M710q mini PCs
So, have you got High Availability setup? If so, I’d like to know more about that part…
So my plan had been to set up a pair of Proxmox hosts, use Ceph for shared storage, and use HA so VMs could magically move around if a host died. However, I discovered Ceph and HA need a minimum of 3 hosts. HA can be done if you set up a Pi or some other 3rd host to act as the 3rd vote in the event of a failure, but as I didn’t have Ceph I’ve not bothered trying.
I’ve read Ceph can work on 2 but not well or reliably.
I might set up a 3rd host some day, but it seems a bit of a waste as I just don’t need that amount of resources for what I’m running.
And I should have known really, I’ve a bit of a background in VMware, albeit at the enterprise level so I’ve never had to even think about 2 or 3 node clusters.
You can do HA in Proxmox with ZFS replication instead of Ceph, with something else as the third device, as you said. It’s what I’m doing.
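For anyone wanting to try this, a rough sketch of the Proxmox CLI side, assuming a two-node cluster with VM 100 on ZFS and a Pi (or similar) as the quorum device — node names and the IP are placeholders:

```shell
# On the third box (e.g. a Pi): install the external vote daemon
apt install corosync-qnetd

# On one cluster node: register the Pi as a QDevice so the cluster
# keeps quorum when one node dies
pvecm qdevice setup 192.168.1.50

# Replicate VM 100's ZFS disks to the other node every 15 minutes
pvesr create-local-job 100-0 pve2 --schedule "*/15"

# Let the HA manager restart the VM on the surviving node after a failure
ha-manager add vm:100 --state started
```

With replication (rather than shared storage like Ceph) a failover can lose up to one replication interval of changes, which is usually acceptable for a Home Assistant VM.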
Thanks, I’ll look into it.