Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?
KISS
The more complicated the machine, the more chances for failure.
Remote management plus bare metal just works, it’s very simple, and you get the maximum out of the hardware.
Depending on your use case, that could be very important.
Here’s my homelab journey: https://bower.sh/homelab
Basically, containers and GPUs are annoying to deal with, and GPU passthrough to a VM is even more annoying. Most modern hobbyist GPUs also do not support splitting the GPU between guests. At the end of the day, it’s a bunch of tinkering, which is valuable if that’s your goal. I learned what I wanted; now I’m back to Arch running everything with systemd and Quadlet.
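For reference, a Quadlet unit is just an INI-style file that Podman’s systemd generator turns into a service. A minimal sketch, with a placeholder image, port, and volume path (not anyone’s actual setup):

$ cat ~/.config/containers/systemd/web.container
[Unit]
Description=Example web container managed by Quadlet

[Container]
# placeholder image; swap in whatever you actually run
Image=docker.io/library/nginx:alpine
PublishPort=8080:80
Volume=/srv/example-site:/usr/share/nginx/html:ro

[Install]
WantedBy=default.target

$ systemctl --user daemon-reload
$ systemctl --user start web.service

From there it behaves like any other systemd unit: journalctl for logs, systemctl for restarts.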
This reminds me of a question I saw a couple years ago. It was basically: why would you stick with bare metal over running Proxmox with a single VM?
It kinda stuck with me, and since then I’ve reimaged some of my bare metal servers with exactly that. It just makes backup and restore/snapshots so much easier. It’s also really convenient to have a web interface to manage the computer.
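For what it’s worth, the snapshot/backup side really is a one-liner on the Proxmox host. Something along these lines, assuming a VM with ID 100 and a storage named local (both placeholders):

# qm snapshot 100 pre-upgrade                   # instant snapshot before touching anything
# qm rollback 100 pre-upgrade                   # roll back if the change goes sideways
# vzdump 100 --storage local --mode snapshot    # full backup you can restore on another node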
Probably doesn’t work for everyone but it works for me
Containers run on “bare metal” in exactly the same way other processes on your system do. You can even see them in your process list FFS. They’re just running in different cgroups that limit access to resources.
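You can check this yourself: a container’s main process is just a PID on the host, parked in its own cgroup. Roughly like this (the container name, PID, and cgroup path are made up, and the exact path depends on your cgroup driver):

$ docker inspect -f '{{.State.Pid}}' myapp      # myapp = whatever your container is called
12345
$ cat /proc/12345/cgroup                        # just a cgroup under the docker scope
0::/system.slice/docker-3f2a….scope
$ ps -o pid,cmd -p 12345                        # and it shows up in ps like anything else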
Yes, I’ll die on this hill.
But, but, Docker, Kubernetes, hyper-scale convergence and other buzzwords from the 2010s! These fancy words can’t just mean resource and namespace isolation!
In all seriousness, the isolation provided by containers is significant enough that administration of containers is different from running everything in the same OS. That’s different in a good way though, I don’t miss the bad old days of everything on a single server in the same space. Anyone else remember the joys of Windows Small Business Server? Let’s run Active Directory, Exchange and MSSQL on the same box. No way that will lead to prob… oh shit, the RAM is on fire.
kubernetes
Kubernetes isn’t just resource isolation; it encourages splitting services across hardware in a cluster. So you’ll get more latency than VMs, but you get to scale the hardware much more easily.
Those terms do mean something, but they’re a lot simpler than execs claim they are.
I love using it at work. It’s a great tool to get everything up and running, kinda like Ansible. Paired with containerization, it can make applications more “standard” and easy to spin back up.
That being said, for a home server, it feels like overkill. I don’t need my resources spread out so far. I don’t want to keep updating my Kubernetes and container setup with each new iteration. It’s just not fun (to me).
…oh shit, the RAM is on fire.
The RAM. The RAM. The 🐏 is on fire. We don’t need no water let the mothefuxker burn.
Burn mothercucker, burn.
(Thanks phone for the spelling mistakes that I’m leaving).
Oh for sure - containers are fantastic. Even if you’re just using them as glorified chroot jails they provide a ton of benefit.
Learning this fact is what got me to finally dockerize my setup
Move over, bud. That’s my hill to die on, too.
Speak English, doctor! But really, is this a fancy way of saying it’s OK to docker all the things?
“What is stopping you from” <- this is a loaded question.
We’ve been hosting stuff since long before Docker existed. Docker isn’t necessary. It is helpful sometimes, and even useful in some cases, but it is not a requirement.
I had no problems with dependencies, config, etc because I am familiar with just running stuff on servers across multiple OSs. I am used to the workflow. I am also used to docker and k8s, mind you - I’ve even worked at a company that made k8s controllers + operators, etc. I believe in the right tool for the right job, where “right” varies on a case-by-case basis.
tl;dr Docker is not an absolute necessity, and your phrasing makes it seem like it’s the only way of self-hosting you are comfy with. People are and have been comfy with a ton of other things for a long time.
Question is totally on purpose, so that you’ll fill in what it means to you. The intention is to get responses from people who are not using containers, that is all. Thank you for responding!
What is stopping you from running HP-UX for all your workloads? The question is totally on purpose so that you’ll fill in what it means to you.
Honest response - respect.
I’ve been self-hosting since the '90s. I used to have an NT 3.51 server in my house. I had a dial-in BBS that worked because of an extensive collection of .bat files that would echo AT commands to my COM ports to reset the modems between calls. I remember when we had to compile the Slackware kernel from source to get peripherals to work.
But in this last year I took the time to seriously learn docker/podman, and now I’m never going back to running stuff directly on the host OS.
I love it because I can deploy instantly… Oftentimes in a single command line. Docker compose allows for quickly nuking and rebuilding, oftentimes saving your entire config to one or two files.
And if you need to slap in a traefik, or a postgres, or some other service into your group of containers, now it can be done in seconds completely abstracted from any kind of local dependencies. Even more useful, if you need to move them from one VPS to another, or upgrade/downgrade core hardware, it’s now a process that takes minutes. Absolutely beautiful.
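If anyone hasn’t seen it, the “entire config in one or two files” bit looks roughly like this - a made-up app plus a Postgres next to it (service names, image, and ports are placeholders):

$ cat docker-compose.yml
services:
  app:
    image: ghcr.io/example/app:latest    # placeholder image
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:

$ docker compose up -d    # bring the whole stack up
$ docker compose down     # tear it down again; add -v to drop the volumes too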
Hey, you made my post for me, though I’ve been using Docker for a few years now. Never. Looking. Back.
Have done it both ways. Will never go back to bare metal. Dependency hell forced multiple clean installs, down to the bootloader.
The only constant is change.
My NAS will stay on bare metal forever. Complications there are something I really don’t want. Passthrough of drives/PCIe devices works fine for most things, but I won’t use it for ZFS.
As for services, I really hate using Docker images with a burning passion. I’m not trusting anyone else to make sure the container images are secure - I want the security updates directly from my distribution’s repositories, I want them fully automated, and I want that inside any containers too. Having NixOS build and launch containers with systemd-nspawn solves some of it. The actual Docker daemon isn’t getting anywhere near my systems, but I do have one or two OCI images running. Will probably migrate to small VMs per service once I get new hardware up and running.
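For anyone who hasn’t used systemd-nspawn: outside of the NixOS-specific tooling, the generic flow looks roughly like this (Debian-based example, machine name is a placeholder). The point is that what runs inside is a normal distro pulling normal security updates from its own repos:

# debootstrap stable /var/lib/machines/myservice http://deb.debian.org/debian
# systemd-nspawn -D /var/lib/machines/myservice   # chroot-like shell to set things up
# machinectl start myservice                      # boot it as a machine managed by systemd
# machinectl shell myservice                      # get a shell inside it later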
Additionally, I never found a source of container images I feel like I can trust long term. When I grab a package from Debian or RHEL, I know that package will keep working without any major changes to functionality or config until I upgrade to the next major. A container? How long will it get updates? How frequently? Will the config format or environment variables or mount points change? Will a threat actor assume control of the image? (Oh look, all the distros actually enforce GPG signatures in their repos!)
So, what keeps me on bare metal? Keeping my ZFS pools safe. And then just keeping away from the OCI ecosystem in general, the grass is far greener inside the normal package repositories.
A NAS as bare metal makes sense. It can then correctly interact with the raw disks. You could pass an entire HBA card through to a VM, but I feel like it should be horses for courses. Let a storage device be a storage device, and let a hypervisor be a hypervisor.

I feel like this too. I do not feel comfortable using docker containers that I didn’t make myself. And for many people, that defeats the purpose.
Why would I want to add overhead and complexity to my system when I don’t need to? I can totally see legitimate use cases for Docker, and for work purposes I use VMs constantly. I just don’t see a benefit to doing so at home.
Main benefit of Docker for home is Docker Compose, IMO. Makes it so easy to reuse your configuration.
Then check out IaC, for example Terraform or Ansible.
Why, if I already need to know Docker for work, but not the others?
I’ve used Kubernetes but not Ansible lol
All my services run on bare metal because it’s easy, and the backups work. It heavily simplifies the work, and I don’t have to worry about things like a virtual router, or using more CPU just to keep the container… contained and running. Plus a VERY tiny system can run:
- Peertube
- GoToSocial + client
- RSS
- search engine
- A number of custom sites
- backups
- Matrix server/client
- and a whole lot more
Without a single Docker container. It’s using around 10-20% of the RAM, and doing a dd once in a while keeps everything as is. It’s been 4 years-ish and has been working great. I used to over-complicate everything with Docker + Docker Compose, but I would have to keep up with the underlying changes ALL THE TIME. It sucked, and it’s not something I care about on my weekends.
I use Docker, Kubernetes, etc. etc. all at work. And it’s great when you have the resources + coworkers that keep things up to date. But I just want to relax when I get home. And it’s not the end of the world if any of them go down.
Oh, so the other 80% of your RAM can sit there and do nothing? My RAM is always around 80% or so, as it’s caching stuff like it’s supposed to.
Hahaha, that’s funny. I hope you’re not serious.
Unused RAM is wasted RAM
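That’s the page cache doing its job - free reports it under buff/cache, and the kernel hands it back to applications whenever they ask for it. Illustrative numbers only:

$ free -h
               total        used        free      shared  buff/cache   available
Mem:            31Gi       4.1Gi       1.2Gi       0.3Gi        26Gi        26Gi
Swap:          8.0Gi          0B       8.0Gi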
Welp, OP did ask how we set it up. And for a family instance it’s good enough. The RAM was extra that came with the computer. I have other things to do than optimize my family home server. There’s no latency at all already.
It spikes when PeerTube videos are uploaded and transcoded, + Matrix sometimes. Have a good night!
Do you use any tools for management, such as Ansible or similar?
Couple of custom bash scripts for the backups. I’ve used Ansible at work. It’s awesome, but my own stuff doesn’t require that kind of robustness.
What do you run for RSS?
Also, I hope you are not doing backups by dd’ing an in-use filesystem.
FreshRSS. Sips resources.
The dd happens when I want. I have a script I tested a while back. The machine won’t be on, yeah. It’s just a small image with the software.
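For context, that kind of offline image backup is basically a single command from a live USB, with the server’s disk not mounted (device and target paths here are placeholders):

$ sudo dd if=/dev/nvme0n1 of=/mnt/usb/server.img bs=4M status=progress conv=fsync
$ sudo dd if=/mnt/usb/server.img of=/dev/nvme0n1 bs=4M status=progress    # restore is the same in reverse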
Assuming you run Synapse, which uses more than 1.5 GB of RAM just idling, your system has at the very least 16 GB of RAM… Hardly what I’d call “very tiny”.
…OK, so I’m lying about my system for… some reason?
Synapse looks like it’s using 200 MB right now. It jumps to 1 GB when being heavily used, but I only use it for PieFed and a couple of other local rooms. Honestly, it’s not doing much for us, so we were thinking of getting rid of it. It’s irritating to keep having to set up new devices, and no one is really using it.
PeerTube is much bigger, running around 500 MB just doing its thing.
It’s a single-family instance.
# ps -eo user,pid,ppid,cmd,pmem,rss --no-headers --sort=-rss | awk '{if ($2 ~ /^[0-9]+$/ && $6/1024 >= 1) {printf "PID: %s, PPID: %s, Memory consumed (RSS): %.2f MB, Command: ", $2, $3, $6/1024; for (i=4; i<=NF; i++) printf "%s ", $i; printf "\n"}}'
PID: 2231, PPID: 1, Memory consumed (RSS): 576.67 MB, Command: peertube 3.6 590508
PID: 2228, PPID: 1, Memory consumed (RSS): 378.87 MB, Command: /var/www/gotosocial/gotosoc 2.3 387964
PID: 2394, PPID: 1, Memory consumed (RSS): 189.16 MB, Command: /var/www/synapse/venv/bin/p 1.1 193704
PID: 678, PPID: 1, Memory consumed (RSS): 52.15 MB, Command: /var/www/synapse/livekit/li 0.3 53404
PID: 1917, PPID: 645, Memory consumed (RSS): 45.59 MB, Command: /var/www/fastapi/venv/bin/p 0.2 46680
Every time I have tried it, it just introduces a layer of complexity I can’t tolerate. I have struggled to learn everything required to run a simple Debian server. I don’t care what anyone says, Docker is not simpler or easier. Maybe it is when everything runs perfectly, but things never do, so you have to consider the eventual difficulty of troubleshooting. And that would be made all the more cumbersome if I do not yet understand the fundamentals of a Linux system.
However, I do keep a list of packages I want to use that are Docker-only. So if one day I feel up to it, I’ll be ready to go.
Did you try compose scripts as opposed to docker run?
I don’t know. Both? Probably? I tried a couple of things here and there. It was plain that bringing in Docker would add a layer of obfuscation to my system that I am not equipped to deal with. So I rinsed it from my mind.
If you think it’s likely that I followed some “how to get started with docker” tutorial that had completely wrong information in it, that just demonstrates the point I am making.
Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?
If it aint broke, don’t fix it 🤷
pff, you call using an operating system bare metal? I run my apps as unikernels on a grid of Elbrus chips I bought off a dockworker in Kamchatka.
and even that’s overkill. I prefer synthesizing my web apps into VHDL and running them directly on FPGAs.
until my ASIC shuttle arrives from Taipei, naturally, then I bond them directly onto Ethernet sockets.
/uj not really but that’d be sick as hell.
I just imagine what the output of any program would be. Follow me, set yourself free!
I consider them unnecessary layers of abstraction. Why do I need to fiddle with Docker Compose to install Immich, Vaultwarden, etc.? Wouldn’t it be simpler if I could just run sudo apt install immich vaultwarden, just like I can do sudo apt install qbittorrent-nox today? I don’t think there’s anything that prohibits them from running on the same bare metal; actually, I think they’d both run as well as in Docker (if not better, because of the lack of overhead)!

Both your examples actually include their own bloat to accomplish the same thing that Docker would. They both bundle the libraries they depend on as part of the build.
It’s not just libraries in a Docker container.
True, Docker does it better because any executables also have redundant copies. Running two different Node applications on bare metal, they can still disagree about the Node version, etc.
The actual old-school bloat-free way to do it is shared libraries of course. And that shit sucks.
Idk about Immich, but Vaultwarden is just a Cargo project, no? Cargo statically links crates by default, but I think it can be configured to do dynamic linking too. The Rust ecosystem seems to favour static linking in general just by convention.
Yes, that was my point, you (generally) link statically in Rust because that resolves dependency issues between the different applications you need to run. Cost is a slightly bigger, bloatier binary, but generally it’s a very good tradeoff because a slightly bigger binary isn’t an inconvenience these days.
Docker achieves the same for everything, including dynamically linked projects that default to using shared libraries which can have dependency nightmares, other binaries that are being called, etc. It doesn’t virtualize an entire OS unless you’re using it on macOS or Windows, so the performance overhead is not as big as people seem to think (disk space overhead, though… can get slightly bigger). It’s also great for dev environments because you can have different devs using whatever the fuck they prefer as their main OS and Docker will make everyone’s environment the same.
I generally wouldn’t put a Rust/Cargo project in docker by default since it’s pretty rare to run into external dependency issues with those, but might still do it for the tooling (docker compose, mainly).
That I’ve yet to see a containerization engine that actually makes things easier, especially once a service does fail or needs any amount of customization. I have two main services in Docker, Piped and WebODM, both because I don’t have the time (read: am too lazy) to write a PKGBUILD. Yet Docker steals more time than maintaining a PKGBUILD, with random crashes (undebuggable, as the docker command just hangs when I try to start one specific container), containers that don’t start properly after being updated/restarted by Watchtower, and debugging any problem with Piped being a chore, as logging in Docker is the most random thing imaginable.

With systemd, it’s in journalctl, or in /var/log if explicitly specified or obviously useful (e.g. in multi-host nginx setups). With Docker, it could be a logfile on the host, on the guest, or stdout. Or nothing, because why log at all when everything “just works”? (Yes, that’s a problem created by container maintainers, but one you can’t escape when using Docker. Or rather, in the time you have, you could more easily properly(!) install it on bare metal.)

Also, if you want to use unix sockets to more closely manage permissions and avoid roleplaying a DHCP and DNS server for ports (by remembering which ports are used by which of the 25 or so services), you’ll either need to customize the container, or just use/write a PKGBUILD or similar for bare metal stuff.
Also, I need to host a Python 2.7 / Django 2.x or so webapp (yes, I’m rewriting it), which I do in a Debian 13 VM with Debian 9 and Debian 9 LTS repos, as that most closely resembles the original environment, and it is the largest security risk in my setups while being a public website. So into QEMU it goes.
And, as I mentioned, either stuff is officially packaged by Arch, is in the AUR or I put it into the AUR.
Personally I have seen the opposite from many services. Take Jitsi Meet for example. Without containers, it’s like 4 different services, with logs and configurations all over the system. It is a pain to get running, as none of the services work without everything else being up. In containers, Jitsi Meet is managed in one place, and one place only. (When using docker compose,) all logs are available with docker compose logs, and all config is contained in one directory.

It is more a case-by-case thing whether an application is easier to set up and maintain with or without Docker.
For logs, Dozzle is also fantastic, and you can do “agents” if you have multiple Docker nodes and connect them together.
Do you host on more than one machine? Containerization / virtualization begins to shine most brightly when you need to scale / migrate across multiple servers. If you’re only running one server, I definitely see how bare metal is more straightforward.
This is a big part of why I don’t use VMs or containers at home. All of those abstractions only start showing their worth once you scale them out.
Hm, I don’t know about that either. While scale is their primary purpose, another core tenet of containerization is reproducibility. For example:
- If you are developing any sort of software, containers are a great way to ensure that the environment of your builds remains consistent.
- If you are frequently rebuilding a server/application for any reason, containers provide a good way to ensure everything is configured exactly as it was before, and when used with Git, changes are easy to track. There are also other tools that excel at this (like Ansible).
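For the Ansible flavour of that, a playbook kept in Git gets you a similar “rebuild it exactly as before” property without containers. A minimal sketch - the host group, package list, and file names are placeholders:

$ cat site.yml
- hosts: homeserver                  # placeholder inventory group
  become: true
  tasks:
    - name: Install packages from the distro repos
      ansible.builtin.apt:
        name: [nginx, php-fpm]       # placeholder package list
        state: present
    - name: Deploy the nginx vhost tracked in the repo
      ansible.builtin.copy:
        src: files/example.conf
        dest: /etc/nginx/conf.d/example.conf
      notify: reload nginx
  handlers:
    - name: reload nginx
      ansible.builtin.service:
        name: nginx
        state: reloaded

$ ansible-playbook -i inventory.ini site.yml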
That to me still feels like a variety of “scale”. All of these tools (Ansible is a great example) are of dubious benefit when your scale of systems is small. If you only have a single dev machine or server, having an infrastructure-as-code system or containerized abstraction layer, just feels to me like unnecessary added mental overhead. If this post had been in a community about FOSS development or general programming, I’d feel differently as all of these things can be of great use there. Maybe my idea of selfhosting just isn’t as grandiose as some of the people in here. If you have a room full of server racks in your house, that’s a whole other ballgame.
One main server, with backup servers being very easy to get up and running, either by full-restoring the backup, or installing and restoring specific services. As everything’s backed up to a Hetzner Storage Box, I can always restore it (if I have my USB sticks with the keyfiles).
I don’t really see the need for multiple running hosts, apart from:
- Router
- Workstation, which has a 1070 in it, if I need a GPU for something. My 1U server only has space for a low-profile, single-slot GPU/HPC processor, and one of those would cost way more than its value over my old 1070 would be.
especially once a service does fail or needs any amount of customization.
A failed service gets killed and restarted. It should then work correctly.
If it fails to recover after being killed, then it’s not a service that’s fully ready for containerisation.
So, either build your recovery process to account for this… or fix it so it can recover.
It’s often why databases are run separately from the service. Databases can recover from this, and the services are stateless - doesn’t matter how many you run or restart.

As for customisation, if it isn’t exposed via env vars then it can’t be altered. If you need something beyond the env vars, then you use that container as a starting point and make your customisation a part of your container build processes via a Dockerfile (or equivalent). It’s a bit like saying “chisels are great. But as soon as you need to cut a fillet steak, you need to sharpen a side of the chisel instead of the tip of the chisel”. It’s using a chisel incorrectly.

Exactly. Therefore, Docker is not useful for those purposes to me, as using Arch packages (or similar) is easier to fulfill my needs.
You can customize and debug pretty easily, I’ve found. You can create your own Dockerfile based on one you’re using and add customizations there, and docker exec will get you into the container.
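A quick sketch of both ideas, with the image and config file as placeholders:

$ cat Dockerfile
FROM nginx:alpine                                 # the image you would otherwise run as-is
COPY custom.conf /etc/nginx/conf.d/custom.conf    # your customization layered on top
$ docker build -t nginx-custom .
$ docker run -d --name web nginx-custom
$ docker exec -it web sh                          # drop into the running container to poke around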