That is true, but I would define standard practice like this:
ls -l
= ll
And
ls -la
= la
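In other words, the conventional definitions would live in your ~/.bashrc as:

```shell
# Conventional shorthand aliases for ls (as described above)
alias ll='ls -l'    # long listing
alias la='ls -la'   # long listing, including hidden files
```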
IoT Enterprise LTSC fully works for running Windows games; it just doesn't have a lot of the bloatware. I've tried it, and I'm dual-booting with Arch.
If it is just meant as a Steam machine, I recommend looking at Nobara for an Nvidia GPU and Bazzite for an AMD GPU. I will admit that I haven't tested VR games yet.
Personally, I'm maining Arch, and it plays most games in HDR at 4K 120 Hz. My Windows install is just so I have access to Microsoft Office.
So, since I understand this is LAN only, I will leave out NextCloud.
I would personally say Ceph. It is a storage solution meant to be spread across a bunch of different hosts. Basically, it supports both erasure coding (RAID 5-style parity) AND replicated storage.
Personal setup: a single host with 12× 10 TB HDDs.
To start, it generates parity data for the erasure-coded storage pool. On top of that, I am running a 2× replicated pool. Since I am running a single host, that data is replicated among OSDs (read: HDDs), but in a multi-host cluster it would be replicated among hosts instead.
One of the benefits of an array like this is that other types of services are easy to layer on top. NFS support is pretty good overall, and you can set it up through the UI or the command line. I understand that Samba is not your favorite, but that is also possible. Personally, I am using RADOS block devices to back my Apache CloudStack hypervisor.
I will admit it is not the easiest to set up, but using Docker containers to manage storage is an interesting concept. On top of that, you can assign different HDDs to different pools; perhaps you want your solid-state storage shared separately. Ceph is also capable of monitoring your HDDs with smartctl.
A proper installation gives you a web UI to manage it, if someone of your skill even needs it. ;)
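As a rough sketch of what that layout looks like from the command line (pool names are made up, and exact flags vary by release, so treat this as illustrative rather than a recipe):

```shell
# Erasure-coded pool: parity spread across OSDs, RAID 5-style
ceph osd pool create bulk-ec erasure

# Replicated pool with 2 copies; on a single host the CRUSH rule
# needs "osd" as the failure domain rather than "host"
ceph osd pool create bulk-rep replicated
ceph osd pool set bulk-rep size 2

# Ceph's built-in SMART-based device monitoring
ceph device ls
```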
Forgive me, but doesn’t yay -S neofetch
do the thing still?
Hypervisor? Gotta say, I personally like a rather niche product: I love Apache CloudStack.
Apache CloudStack is actually meant for companies providing VMs and K8s clusters to other companies. However, I've set it up for myself in my lab, accessible only over VPN.
What I like best about it is that it is meant to be deployed via Terraform and cloud-init. Since I'm actively pushing myself into that area and seeking a role in DevOps, it fits me quite well.
Standing up a K8s cluster on it is incredibly easy: it is all done with cloud-init, and the process is quite automated. In fact, it took me 15 minutes to stand up a 25-node cluster with 5 control nodes and 20 worker nodes.
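Under the hood, that deployment is just the createKubernetesCluster API call; via the CloudMonkey CLI it looks roughly like this (the UUIDs are placeholders, and parameter names may differ slightly between CloudStack releases, so check `cmk` tab completion against your own deployment):

```shell
# Stand up a CKS cluster: 5 control nodes + 20 workers
# (zone, offering, and Kubernetes version UUIDs come from your own deployment)
cmk create kubernetescluster name=lab-k8s \
    zoneid=<zone-uuid> \
    serviceofferingid=<offering-uuid> \
    kubernetesversionid=<version-uuid> \
    controlnodes=5 size=20
```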
Let's compare it to other hypervisors, though. CloudStack is meant to handle global operations: it is typically split into regions, then zones, then pods, then clusters, and finally hosts. Let's just say it gets very, very large if you need it to, only it's free. Basically, if you have your own hardware, it is more similar to Azure or AWS than to VMware, and none of it costs any licensing.
Technically speaking, CloudStack Management can handle a number of different hypervisors if you would like it to. I believe that includes VMware, KVM, Hyper-V, OVM, LXC, and XenServer. I think it is interesting because even if you prefer another hypervisor, it will still work. That support is mostly meant to ease a transition to KVM, but it should work on its own, though I haven't tested it.
I have, however, tested it with Ceph for storage, and it does work, though doing that is perhaps slightly more annoying than with Proxmox. You can actually create a number of different storage types if you want to take the cloud-provider route, e.g. HDD vs. SSD tiers.
Overall, I like it because it works well for IaaS. I have 2000 VLANs primed for use with its virtual networking. I have 1 host currently joined and a second host in line for setup.
Here is the article I used to get it initially set up, though I will admit I personally used a different VLAN for the management IP than for the public IP VLAN: http://rohityadav.cloud/blog/cloudstack-kvm/
Here is a more or less automated installer for root on ZFS. You need at least three HDDs, preferably on an HBA, and the resulting pool can withstand the loss of at least one drive.
https://github.com/Reddimes/ubuntu-zfsraid10/tree/debian-raidz1
There is also an Ubuntu ZFS RAID 10 branch.
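The heart of what the script builds is a RAIDZ1 vdev; done manually, it would look something like this (pool and device names are illustrative, the repo scripts the full root-on-ZFS install around it):

```shell
# RAIDZ1 across three disks: single parity, survives the loss of any one drive
zpool create -o ashift=12 rpool raidz1 \
    /dev/disk/by-id/ata-disk1 \
    /dev/disk/by-id/ata-disk2 \
    /dev/disk/by-id/ata-disk3
```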
I gotta give LibreELEC a 👍. If your TV can do HDMI-CEC, then using the TV remote to control Kodi works by default.
The Jellyfin plugin works incredibly easily too. YouTube, not so much; the Google API support is not up to spec.
I would like to add something I overheard about the Pi 5: apparently it can run Android TV, which would solve all your requirements.
Heads up, this is going to be an incredibly detailed comment, sorry. So, at the time I stood up that cluster, it was not on Ceph. I had set up the host to run Ubuntu 24.04 with root on ZFS, and the host was simply connected to itself via NFS.
The GitHub repo I created for the root-on-ZFS installation is linked below. I'm not sure if you are familiar with ZFS, but it is an incredibly feature-rich filesystem. Similar to Btrfs, you can take snapshots of the server, so if your host goes down you at least have a point to roll back to. On top of that, you get L2ARC caching: an NVMe SSD acts as a second-level read cache, so the most frequently used data doesn't have to be read from an HDD every time (a separate log device can similarly absorb synchronous writes in the background). I will admit that ZFS does use a lot of memory, but the L2ARC kind of saved me from that on this server.
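For illustration, the snapshot and caching features map to commands like these (pool, dataset, and partition names are hypothetical):

```shell
# Point-in-time snapshot of the root dataset, and a rollback if things break
zfs snapshot rpool/ROOT/ubuntu@known-good
zfs rollback rpool/ROOT/ubuntu@known-good

# Attach NVMe partitions as L2ARC (second-level read cache) and SLOG (sync writes)
zpool add rpool cache /dev/nvme0n1p4
zpool add rpool log   /dev/nvme0n1p5
```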
Ultimately that cluster was not connected to Ceph, but simply to NFS. Still, I created a GitHub repository that is basically just one command to get Ubuntu 24.04 installed with root on ZFS: https://github.com/Reddimes/ubuntu-zfsraid10. It's not perfect; if it seems frozen, just hit Enter a couple of times. I don't know where it gets hung up, and I'm too lazy to figure it out. After that, I followed this guide for turning it into a CloudStack host: https://rohityadav.cloud/blog/cloudstack-kvm/.
That was my initial setup, but now I have it set up significantly differently. I rebuilt my host and installed Ubuntu 24.04 to my NVMe drive this time, then did some fairly basic setup with cephadm to deploy the OSDs. After the OSDs were deployed, I followed this guide for getting it set up with CloudStack: https://www.shapeblue.com/ceph-and-cloudstack-part-1/. The only other issue is that you do need a secondary storage server as well; I've personally decided to use NFS for that, similar to my original setup. Now, Ceph does use a LOT of memory. It is currently the only thing running on my host, and I've attached a screenshot.
77 GB!!! OoooWeee… a bit high. Admittedly, this is likely because I am not running just the RADOS image store but also an *arr stack on CephFS. And though I have 12 HDDs, some of them have SMART power-on times exceeding 7 years, so ignore the scrubbing, please.
I do potentially see some issues: with Ceph, the data is supposed to be redundant, but for the moment I've only provided one IP for it, until I figure out the issues I'm having with my other server. That is some exploration I've not done yet.
*Finally takes a breath.* Anyways, the reason I chose CloudStack was to delve into the DevOps space a little bit, except home-built and self-hosted. It is meant to be quite large and to be used by actual cloud providers. In fact, it is meant to have actual public IP addresses, which get assigned to the CentOS firewall VMs it creates for each network. In a homelab, I had to get a little creative and set up a "public" network on a VLAN controlled by my hardware firewall. This does mean that if I actually want something to be public, I need to forward it from my hardware firewall, but otherwise, no issue. Going back to the DevOps learning path: not only can you set up Linux servers with cloud-init user data, but Terraform works out of the box, and the workflow feels quite similar to using Terraform with AWS.
The interesting thing about K8s deployments is that they are just the click of a single button. Sure, first you have to download the ISO (or build your own with the built-in script), but CloudStack manipulates the cloud-init user data of each node in the cluster to set it up automatically, whether it is a control node or a worker node. After that, you do need to update the virtual machines yourself. I'm sure there is a proper way to do it with Ansible, but I ran into a couple of issues with it and did it manually via SSH.
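The manual pass amounted to a loop like this (node names are hypothetical; Ansible would be the cleaner way to do the same thing):

```shell
# Update every node in the cluster over SSH, one at a time
for node in k8s-control-1 k8s-worker-1 k8s-worker-2; do
    ssh "ubuntu@${node}" 'sudo apt-get update && sudo apt-get -y upgrade'
done
```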
Edit: Yes, those nodes were all VMs.
Yeah, my storage was beefed up at the time: ZFS RAID 10. But I've since changed to Ceph for shared, redundant storage.
I'm a little curious what you are using for a hypervisor. I'm using Apache CloudStack, which has a lot of the same features as AWS and Azure. Basically, I have 1000 VLANs prepared for standing up virtual networking. CloudStack uses CentOS images to stand up virtual firewalls for the networks in use. These firewalls not only handle firewall rules but can also do load balancing, which I use for K8s. You can also make a network HA just by checking a box when you stand it up; this runs a second firewall that only kicks in if the main one stops responding. The very reason I went with CloudStack was how easy it is to set up a K8s cluster. The biggest cluster I've stood up is 2 control nodes and 25 worker nodes, and it took 12 minutes to deploy.
Yeah, those data questions are really loaded. I don't host for privacy or whatnot; it's because of a learning objective: to study, experiment, and run automated stock-trading algorithms. I don't exactly have anything to hide from private companies.
Then what are your hands doing under the table?
…
WHAT ARE YOUR HANDS DOING UNDER THE TABLE??!!
I would argue for Apache Cloudstack personally.
Though I have used and like Proxmox as well.
What this cat said.
Interesting. One other option is to use an Orange Pi for the server. The Orange Pi supports HDMI ARC, and that would count as an input.
I did choose the WiSA surround-sound system linked. I'll cannibalize it later to make better speakers. I like it because it carries audio at 24-bit/96 kHz, and it just uses HDMI ARC.
Radio signal (I'm a comm/nav aircraft mechanic, I had to know):
Not if you need custom error bars on a scatter plot in Excel.