• 0 Posts
  • 22 Comments
Joined 3 years ago
Cake day: June 13th, 2023

  • It’s tricky to make a recommendation, as pretty much all the home lab stuff that people typically run can be done on a potato, which is why RPis are so popular.

    An N100 would definitely be a step up from the Pis and meet your stated needs. They are super popular and come in a multitude of form factors, so you should be able to find something you like. But you may get the itch to upgrade further if you expect to expand or experiment extensively. Like any hobby, it’s generally easy to justify to yourself that you need that “next cool/better/faster/prettier thing,” so such an itch may be unavoidable no matter what you get.

    Instead of worrying about performance, since pretty much any modern mini PC should outclass a Pi, take a look at the specific form factors that are available. Do they have the expansion and networking you need? Can you stick the thing somewhere out of the way without worrying about it taking up too much space or making too much noise? Are you comfortable with their level of support/warranty? Expect garbage or non-existent support from most of the mini PC specialty brands out there, which includes Minisforum, which I recommended in another comment. If you outgrow it, are you comfortable with it becoming e-waste, or do you have a means of repurposing it?


  • While the N100 is great for what it is, especially at a $200 budget, it can be limiting with its fairly small core/thread count if you expand beyond a handful of applications.

    OP mentioned tinkering with multiple Linux flavors. A higher-end CPU, with more cores and threads, would let them virtualize multiple instances on top of whatever other workloads they have and potentially not break a sweat where the N100 could struggle. While such an upgrade would be more expensive, price for performance will likely be significantly better if you can make use of it.
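    To make the core-count argument concrete, here’s a rough back-of-the-envelope sketch. The host specs and the 2x overcommit ratio are illustrative assumptions, not benchmarks; the N100’s 4 cores / 4 threads are its published spec, and the 16-thread host stands in for a generic higher-end mobile part:

    ```python
    # Rough sketch: how many small VMs fit on a host at a given
    # vCPU overcommit ratio. Specs and ratio are illustrative only.

    def max_vms(host_threads: int, vcpus_per_vm: int, overcommit: float = 2.0) -> int:
        """How many guests fit if we allow `overcommit` vCPUs per host thread."""
        return int(host_threads * overcommit) // vcpus_per_vm

    n100 = max_vms(host_threads=4, vcpus_per_vm=2)    # N100: 4 cores / 4 threads
    bigger = max_vms(host_threads=16, vcpus_per_vm=2) # hypothetical 8c/16t CPU

    print(n100, bigger)  # 4 vs 16 two-vCPU guests at 2x overcommit
    ```

    Overcommit works because idle guests rarely use their full vCPU allocation, which is exactly why a handful of distro-tinkering VMs is fine on a small host but sustained workloads are not.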


  • I’ve had a good experience with the Minisforum MS-01. While it’s more than the $200 you mentioned, it’s been worth every penny: plenty of power for most homelabs and lots of nice features for future-proofing (10 Gb Ethernet, plenty of storage options, a small but still usable PCIe expansion slot) in a small form factor.

    I’ve pretty much retired all my RPis at this point and my old Synology NAS is now just storage only with the MS-01 doing all the actual work.

    I really don’t have a reason to migrate away from it for many years, unless it dies. Even then, you can trivially create a Proxmox cluster with a few of them to provide some redundancy.

    They also have the MS-A1 and MS-A2 options with AMD CPUs, but the A1 doesn’t have the same feature set and the A2 is pretty expensive if you don’t need the extra power.









  • It’s cheaper if you don’t have constant load, as you’re only paying for resources you’re actively using. Once you have constant load, you’re paying a premium for flexibility you don’t need.

    For example, I did a cost estimate of porting one of our high-volume, high-compute services to an event-driven, serverless architecture, and it came out to literally millions of dollars a month versus tens of thousands a month rolling our own solution on EC2 or ECS.

    Of course, self-hosting in our own data center is even cheaper: we can buy new hardware and run it for years at a fraction of the cost of even the most cost-effective cloud solutions, as long as we have the people to maintain it.


  • I have, and I use Calibre with LazyLibrarian instead; it still requires a lot of hand-holding and manual grooming to get a clean library.

    My big issue with Readarr is that it had a hard time fetching data for various popular and/or prolific authors. So if I wanted to fetch all the books for a particular author, there was a high likelihood it wouldn’t actually fetch the necessary book data to do so.


  • brandon@lemmy.worldtoSelfhosted@lemmy.worldJellyseer for ebooks?

    I prefer LazyLibrarian over Readarr, but it still leaves a lot to be desired for end-user usability. One of the big issues with ebooks is that the data is a mess, with each book having a billion different editions and spotty metadata support, which makes it hard to tell what is what.

    Goodreads seems like it was a decent source of data for these types of projects, but they shut off new API access a couple of years ago and legacy access can go away at any moment. Hardcover seems like a promising API alternative, but I’m not sure if anyone has started integrating with it yet. Manga and comics seem to be in a better state, with a more rabid fanbase maintaining the data, but still nowhere near what’s available for movies and TV.




  • Fox kept getting into a loop of making films just to maintain the rights, which invariably got rushed and subsequently bombed. No one wants to be associated with the pre-existing trash, so each attempt needs a reboot and a fresh start. The rights became far more valuable than the films over time, as Marvel went from near bankruptcy (when it sold the rights for basically nothing) to a multibillion-dollar brand. Eventually Fantastic Four, along with X-Men, basically just became a bargaining chip to extract as much money as possible when Fox was eventually bought out by Disney.

    Now that the rights are in Marvel/Disney’s hands, it shouldn’t need to go through this looping cycle of trash every few years.


  • Unlikely to be feasible for gaming, as you will run into latency and overhead issues. If you want 60 fps, you have 16–17 ms to render each frame.

    At a bare minimum, you are probably going to lose a couple of milliseconds to network latency, even with the best home networking setups.

    Then there is the extra overhead of maintaining state in real time between multiple systems, as well as coordinating what work each system can actually do in parallel. A full set of textures and other data will almost certainly need to be on both machines, since a shared memory pool across the network would be unfeasible. As a result, you will most likely face the same memory constraints, especially on the GPU, on each machine as you would using a single machine.
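    The frame-budget math above is easy to check. The latency and sync-overhead numbers below are illustrative guesses (2 ms RTT on a good wired LAN, 3 ms for state synchronization), not measurements:

    ```python
    # Frame-time budget at a target FPS, minus assumed per-frame
    # network round trips. Latency figures are illustrative only.

    def frame_budget_ms(fps: int) -> float:
        """Milliseconds available to render one frame at a given FPS."""
        return 1000.0 / fps

    def remaining_ms(fps: int, network_rtt_ms: float, sync_overhead_ms: float) -> float:
        """Budget left for actual rendering after networking costs."""
        return frame_budget_ms(fps) - network_rtt_ms - sync_overhead_ms

    budget = frame_budget_ms(60)  # ~16.7 ms per frame at 60 fps
    left = remaining_ms(60, network_rtt_ms=2.0, sync_overhead_ms=3.0)
    print(round(budget, 1), round(left, 1))
    ```

    Even with these optimistic assumptions, roughly a third of the 60 fps budget disappears before any rendering happens, and at 120 fps (an 8.3 ms budget) the same fixed costs would consume well over half of it.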