ZFS doesn’t require more RAM (or at least not meaningfully more); it just uses it if you have it. The ARC size can be capped in the configuration if you don’t want it to do that. I think on Linux other filesystems just use the kernel’s page cache instead(?), so it’s basically the same thing as the ARC, just less visible in memory stats? Also, doesn’t ZFS have RAIDZ expansion now? Honestly, a lot of this article smells funny… probably because the author just happens to know BTRFS better. Doesn’t BTRFS still have the RAID5/6 write hole? I wonder what sort of setup they’re using if they’re running it on a NAS.
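For what it’s worth, capping the ARC is a couple of lines on OpenZFS for Linux. Rough sketch below; the 4 GiB figure is just an example, size it to your own box:

    # Cap the ARC at 4 GiB right now (value is in bytes, applies at runtime)
    echo 4294967296 | sudo tee /sys/module/zfs/parameters/zfs_arc_max

    # Make it stick across reboots via a module option
    echo "options zfs zfs_arc_max=4294967296" | sudo tee /etc/modprobe.d/zfs.conf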
Yep, RAIDZ expansion has been in for a year now, and it’s surprisingly easy to do.
Can confirm. I upgraded from a 6-bay to an 8-bay enclosure and added two more drives without issue.
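For anyone curious, the expansion itself is roughly this, assuming OpenZFS 2.3 or newer; the pool name, vdev name, and device below are made up:

    # Find the raidz vdev name (e.g. raidz2-0) in the pool layout
    zpool status tank

    # Attach the new disk to that raidz vdev; the expansion runs in the background
    sudo zpool attach tank raidz2-0 /dev/sdX

    # Re-run status to watch the expansion progress
    zpool status tank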
Yeah, the btrfs maintainers still recommend against RAID5/6. You can sort of get around that by building the RAID layer in LVM and formatting the resulting volume with btrfs, but I’d rather do it with native filesystem support. Fewer moving parts, as it were.
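If anyone does want the layered route anyway, it looks roughly like this with LVM’s raid5 target under btrfs; the volume group, size, and names are placeholders:

    # Build a RAID5 logical volume across the physical volumes in vg0 (3 data stripes + parity)
    sudo lvcreate --type raid5 --stripes 3 --size 1T --name data vg0

    # Put btrfs on top as a single-device filesystem (btrfs itself does no RAID here)
    sudo mkfs.btrfs /dev/vg0/data
    sudo mount /dev/vg0/data /mnt/data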
My own decision tree for these sorts of things is simple: are all the drives in the array the same? If so, zfs; if not, btrfs. Energy efficiency would come into play for spinning rust or arrays of sufficient size, but the “identical or not” question has served me well for years.
He didn’t claim it didn’t.
“Even basic tasks like adding drives and changing pool size take a bit of tinkering.”
His claim is that it’s harder to do.
I was looking at point #3 from the article, which is more misleading in this area than point #5.
Leaving the RAM cache to be managed by the kernel has some benefits, especially on low-end devices, which is what the article is talking about.