• 2 Posts
  • 59 Comments
Joined 1 year ago
Cake day: March 26th, 2024



  • Heads up, this is going to be an incredibly detailed comment, sorry. So, at the time I stood up that cluster, it was not on Ceph. I had set up the host to run Ubuntu 24.04 with Root on ZFS, and the host was simply connected to itself via NFS.

    The GitHub repo I created for the Root on ZFS installation is linked below. I’m not sure if you are familiar with ZFS, but it is an incredibly feature-rich filesystem. Similar to Btrfs, you can take snapshots of the server, so if your host goes down you at least have a backup. On top of that, you get L2ARC caching: any time the pool is read from or written to, my NVMe SSD handles that in the background, and it also caches the most frequently used files so it doesn’t have to hit an HDD every time. I will admit that ZFS does use a lot of memory, but the L2ARC kinda saved me from that on this server.
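    If you’re curious how much the ARC/L2ARC is actually doing for you, here’s a rough Python sketch that reads the stats OpenZFS exposes on Linux at /proc/spl/kstat/zfs/arcstats and prints hit rates. The path and field names are from memory, so treat it as a starting point, not gospel:

    ```python
    # Rough sketch: print ARC and L2ARC hit rates on an OpenZFS-on-Linux box.
    # Assumes /proc/spl/kstat/zfs/arcstats exists (it does on stock OpenZFS installs).

    def read_arcstats(path="/proc/spl/kstat/zfs/arcstats"):
        stats = {}
        with open(path) as f:
            for line in f:
                parts = line.split()
                # Data rows look like: "<name> <type> <value>"
                if len(parts) == 3 and parts[2].isdigit():
                    stats[parts[0]] = int(parts[2])
        return stats

    def hit_rate(hits, misses):
        total = hits + misses
        return 100.0 * hits / total if total else 0.0

    if __name__ == "__main__":
        s = read_arcstats()
        print(f"ARC size:       {s.get('size', 0) / 2**30:.1f} GiB")
        print(f"ARC hit rate:   {hit_rate(s.get('hits', 0), s.get('misses', 0)):.1f}%")
        print(f"L2ARC hit rate: {hit_rate(s.get('l2_hits', 0), s.get('l2_misses', 0)):.1f}%")
    ```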

    Ultimately that cluster was not connected to Ceph, but simply NFS. Still, I created a GitHub repository that is basically just one command to get Ubuntu 24.04 installed with Root on ZFS: https://github.com/Reddimes/ubuntu-zfsraid10. It’s not perfect; if it seems like it is frozen, just hit Enter a couple of times. I don’t know where it gets hung up and I’m too lazy to figure it out. After that, I followed this guide for turning it into a CloudStack host: https://rohityadav.cloud/blog/cloudstack-kvm/.

    That was my initial setup, but now I have it set up significantly differently. I rebuilt my host and installed Ubuntu 24.04 on my NVMe drive this time, then did some fairly basic setup with cephadm to deploy the OSDs. After the OSDs were deployed, I followed this guide for getting it set up with CloudStack: https://www.shapeblue.com/ceph-and-cloudstack-part-1/. The only other issue is that you do need a secondary storage server as well; I’ve personally decided to use NFS for that, similar to my original setup. Now, Ceph does use a LOT of memory. It is currently the only thing running on my host and I’ve attached a screenshot. 77GB!!! OoooWeee… A bit high. Admittedly, this is likely because I am not running just the RBD image store, but also an *arr stack on CephFS. And though I have 12 HDDs, some of them have SMART power-on time exceeding 7 years. So ignore the scrubbing, please.
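    For reference, the cephadm part really was only a handful of commands. Here’s a rough Python wrapper around the flow I roughly followed; the monitor IP and the memory target are placeholders, and I’m going from memory, so double-check the commands against the Ceph docs before running them:

    ```python
    # Sketch of the cephadm flow I roughly followed (run as root on the host).
    # The monitor IP and memory target below are placeholders/assumptions.
    import subprocess

    def run(cmd):
        print("+", cmd)
        subprocess.run(cmd, shell=True, check=True)

    # Bootstrap a single-host cluster; cephadm pulls the Ceph containers itself.
    run("cephadm bootstrap --mon-ip 192.168.1.10")

    # See which disks Ceph considers usable, then turn them all into OSDs.
    run("cephadm shell -- ceph orch device ls")
    run("cephadm shell -- ceph orch apply osd --all-available-devices")

    # Optional: lower the per-OSD memory target if RAM gets tight. The default
    # is roughly 4 GiB per OSD, which adds up quickly with 12 HDDs.
    run("cephadm shell -- ceph config set osd osd_memory_target 2147483648")  # 2 GiB
    ```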

    I do potentially see some issues: with Ceph, the data is supposed to be redundant, but I’ve only provided one IP for it for the moment, until I figure out the issues I’m having with my other server. That is some exploration that I’ve not done yet.

    *Finally takes a breath.* Anyway, the reason I chose CloudStack was to delve into the DevOps space a little bit, except home-built and self-hosted. It is meant to be quite large and to be used by actual cloud providers. In fact, it is meant to have actual public IP addresses which get assigned to the CentOS firewalls that it creates for each network. In a homelab, I had to get a little creative and set up a “public” network on a VLAN controlled by my hardware firewall. This does mean that if I actually want something to be public, I need to forward it from my hardware firewall, but otherwise, no issue. Going back to the DevOps learning path, not only can you set up Linux servers with cloud-init user data, but Terraform works with it out of the box, and the workflow feels quite similar to Terraform with AWS.
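    To give a feel for the “acts like AWS” point, here’s a rough sketch that deploys a VM with cloud-init user data through the CloudStack API using the third-party cs Python client (pip install cs). The endpoint, keys, and every ID below are placeholders for whatever your zone/template/offering actually is, and I’m going from memory on the client, so treat it as a sketch:

    ```python
    # Sketch: deploy a VM with cloud-init user data via the CloudStack API,
    # using the third-party "cs" client (pip install cs). The endpoint, keys,
    # and every ID below are placeholders; look yours up with listZones,
    # listTemplates, and listServiceOfferings first.
    import base64
    from cs import CloudStack

    api = CloudStack(
        endpoint="https://cloudstack.example.lan/client/api",
        key="YOUR_API_KEY",
        secret="YOUR_SECRET_KEY",
    )

    # cloud-init user data, base64-encoded as the API expects.
    user_data = "#cloud-config\npackages:\n  - qemu-guest-agent\n"

    vm = api.deployVirtualMachine(
        zoneid="ZONE_ID",
        templateid="TEMPLATE_ID",
        serviceofferingid="SERVICE_OFFERING_ID",
        networkids="NETWORK_ID",
        name="test-vm",
        userdata=base64.b64encode(user_data.encode()).decode(),
    )
    print(vm)  # async call; the response includes a job you can poll
    ```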

    The thing that is interesting about the K8s deployments is that they are just the click of a single button. Sure, first you have to download the ISO, or build your own with the built-in script, but CloudStack manipulates the cloud-init user data of each node in the cluster to set it up automatically, whether it is a control node or a worker node. After that, you do need to update the virtual machines running it. I’m sure there is a proper way to use Ansible, but I’ve run into a couple of issues with it, so I did it manually via SSH.
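    And by “manually via SSH” I mean roughly this, just scripted; the node IPs and the SSH user below are placeholders, so use whatever your K8s node template actually ships with:

    ```python
    # Sketch: the "manual" node update pass, just scripted. The node IPs and the
    # SSH user are placeholders; use whatever your K8s node template ships with.
    import subprocess

    NODES = ["10.1.1.11", "10.1.1.12", "10.1.1.13"]  # control + worker nodes
    USER = "ubuntu"  # depends on the template

    for node in NODES:
        print(f"--- updating {node} ---")
        subprocess.run(
            ["ssh", f"{USER}@{node}",
             "sudo apt-get update && sudo apt-get -y dist-upgrade"],
            check=True,
        )
    ```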

    Edit: Yes, those nodes were all VMs.



  • I’m a little curious what you are using for a hypervisor. I’m using Apache CloudStack, which has a lot of the same features as AWS and Azure. Basically, I have 1000 VLANs prepared to stand up virtual networking, and CloudStack uses CentOS to stand up virtual firewalls for the VLANs in use. These firewalls not only handle firewall rules, but can also do load balancing, which I use for k8s. You can also make the networks HA just by checking a box when you stand one up; this runs a second firewall that only kicks in if the main one stops responding. The very reason I used CloudStack was how easy it is to set up a k8s cluster. The biggest cluster I’ve stood up is 2 control nodes and 25 worker nodes, and it took 12 minutes to deploy.
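    A rough sketch of the load-balancing bit, using the third-party cs Python client: it creates a rule on the virtual router for the k8s API server and assigns the control nodes to it. The endpoint, keys, and IDs are all placeholders and I’m going from memory, so verify the calls against the CloudStack API docs before using them.

    ```python
    # Sketch: create a load-balancer rule on the virtual router and assign
    # the control nodes to it, using the third-party "cs" client (pip install cs).
    # The endpoint, keys, and IDs are all placeholders.
    from cs import CloudStack

    api = CloudStack(
        endpoint="https://cloudstack.example.lan/client/api",
        key="YOUR_API_KEY",
        secret="YOUR_SECRET_KEY",
    )

    # Balance the k8s API server (port 6443) across the control nodes.
    rule = api.createLoadBalancerRule(
        name="k8s-apiserver",
        algorithm="roundrobin",
        publicport=6443,
        privateport=6443,
        publicipid="PUBLIC_IP_ID",
    )
    print(rule)  # async call; grab the rule ID from the response/job

    api.assignToLoadBalancerRule(
        id="LB_RULE_ID",  # the rule ID from above
        virtualmachineids="CONTROL_NODE_1_ID,CONTROL_NODE_2_ID",
    )
    ```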







  • Interesting. One other option is to use an OrangePi for the server. OrangePi has ARC over HDMI, and that would count as an input.

    I did choose the WiSA surround sound system linked. I’ll cannibalize it later to make better speakers. I like it because it delivers 24-bit/96 kHz audio, and it also just uses HDMI ARC.

    Radio signal (I’m a comm/nav aircraft mechanic, I had to know):

    • 5 GHz spectrum
    • Fixed latency of 2.6 ms





  • I’m curious where you are from and what hardware you have for self-hosting. I also want to know what you are interested in self-hosting or learning.

    My home lab started with networking; yours doesn’t have to. I had already made it to system administration and was working to become a network engineer. Where are you on your path? In truth, starting with the network is not the best option: mine required dedicated equipment, a firewall (UDM), switching (Ubiquiti), and access points. That is expensive, so perhaps not the best place to start.

    I would say that a good place to start is with virtualization and a hypervisor. A hypervisor is intended to run virtual machines, and I think starting with one is a good idea because once you have a hypervisor, you can experiment with just about anything you want: Windows, Linux, Docker, wherever your exploration takes you.

    Now, I would say the cheapest way to do this kinda depends on you. Do you have a .edu email address? If so, you should be able to receive free licensing for Windows Server through Microsoft Imagine (previously called DreamSpark). If not, do you have Windows 10/11 Pro edition? Windows Server may require dedicated hardware, but if you are already running Windows Pro, then your daily-driver PC will be capable of running Hyper-V.

    If you have an old spare computer, you can make it a dedicated hypervisor with either the Windows Server option or, in my opinion preferably, Proxmox. Proxmox may take a little time to get acclimated to since it involves the Linux command line, but you already have experience with that on the Pi-hole.

    Those are my recommended next steps, though there is plenty more that you can do. As others have said, Docker is a cool way to make some of this happen. I personally hate Docker on Windows (it’s weird, and I just want the command line, not a UI). But you should easily be able to spin up Windows Subsystem for Linux, install Docker and Docker Compose, and get started there without needing any additional hardware. You could also do the same using Hyper-V if you prefer and have a Pro license.

    Regardless of what direction you choose to go, you can go far, you can succeed, and you can thrive. And if you run into any issues, post them here. Selfhosted has your back, and we are all rooting for you.

    Side note: Hyper-V used to only be available on Windows Pro, but if someone knows for sure that it is available on Home, please let me know and I will update my post.



  • oOooo… Quite interesting.

    If you are intending to use it, I have some thoughts about how you should get it set up and running.

    The first thing I would look into is getting the iDRAC reset and working. iDRAC is intended to let you view the server’s display without connecting a monitor, through a simple web interface. It also allows you to power the server on and off remotely, even if it is frozen or powered down.
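    If the iDRAC is recent enough to speak Redfish (iDRAC9 does, and iDRAC7/8 do on newer firmware), you can even script the power control part. Here’s a rough Python sketch; the IP and credentials are placeholders, and whether Redfish is enabled depends on your firmware, so verify before relying on it:

    ```python
    # Sketch: power control through the iDRAC's Redfish API. Assumes Redfish is
    # enabled on the iDRAC; the IP and credentials are placeholders (root/calvin
    # is just the Dell factory default, change it!).
    import requests

    IDRAC = "https://192.168.1.120"  # iDRAC IP, placeholder
    AUTH = ("root", "calvin")

    resp = requests.post(
        f"{IDRAC}/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset",
        json={"ResetType": "On"},  # "ForceOff" and "GracefulShutdown" also exist
        auth=AUTH,
        verify=False,  # iDRACs ship with self-signed certs
    )
    print(resp.status_code, resp.text)
    ```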

    After that, I have some questions about your intentions for this server. If you are intending to use it as a hypervisor, I would like to take just a moment to shill for Apache CloudStack. I recently set up a server running it and it is going absolutely wonderfully. The reason I chose it is that it is more open to DevOps workloads: it is compatible with Terraform by default and takes literally 5 minutes to set up an entire Kubernetes cluster. However, the networking behind it is a bit more advanced, and if you want more detail just ask me. For now, suffice it to say that it is capable of running 201 VLANs protected by virtual routers.

    If that is too much to bite off for a hypervisor at one time, then Proxmox is the way to go. You can find a few videos from Linus Tech Tips involving that software. It has much simpler networking and can get you up and running in no time.

    Finally, if you are intending to learn something a little more professionally viable, then I would talk to your boss about utilizing an unused VMware license, or perhaps working with Hyper-V (my least favorite option).

    If you do intend to run a hypervisor, then I would highly recommend setting up a RAID. Now, the type of RAID depends highly on what you want. RAID 5 will probably work for a homelab, but I would still recommend RAID 10. RAID 5 gives you more storage space, but I like the performance benefits of RAID 10, which I think are very important when multiple virtual machines are sharing the same storage. You can read more about the various RAID levels here: https://www.prepressure.com/library/technology/raid
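    If it helps with the decision, here’s a quick back-of-the-napkin Python comparison of usable capacity and fault tolerance for RAID 5 vs RAID 10 (the disk count and size are just example numbers):

    ```python
    # Back-of-the-napkin RAID comparison; disk count and size are example numbers.
    DISKS = 8
    SIZE_TB = 4  # per-disk capacity

    raid5_usable = (DISKS - 1) * SIZE_TB    # one disk's worth of capacity goes to parity
    raid10_usable = (DISKS // 2) * SIZE_TB  # every disk is mirrored

    print(f"RAID 5 : {raid5_usable} TB usable, survives 1 disk failure,")
    print("         but every small write pays a parity read-modify-write penalty.")
    print(f"RAID 10: {raid10_usable} TB usable, survives 1 failure per mirror pair,")
    print("         with much better random-write performance for VMs.")
    ```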