• TimeSquirrel@kbin.social
    link
    fedilink
    arrow-up
    43
    ·
    7 months ago

    Is this the equivalent of a PC maker in 2024 going “yeah, I don’t think we are going to put a floppy drive in anymore…”?

    • homura1650@lemm.ee
      link
      fedilink
      arrow-up
      62
      ·
      7 months ago

      No. It is the equivalent of a PC maker going “yeah. I don’t think we are going to put in a CD drive anymore because the DVD drive we have been including for years can do CDs as well”

      • LeFantome@programming.dev
        link
        fedilink
        arrow-up
        3
        ·
        7 months ago

        That is a great analogy.

        Linux can support ext2 two ways today: explicitly and as a side effect of ext4 support. All this change does is remove the explicit support.

        We can remove the explicit CD support provided by a dedicated drive because the DVD drive will provide it as a side-effect.

      • Markaos@lemmy.one
        link
        fedilink
        arrow-up
        22
        ·
        7 months ago

        I don’t think that’s a similar situation - the Linux kernel lost some functionality there, but in this case Ext2 filesystems are still fully supported by the Ext4 driver, so there’s no difference in “hardware” support.

        The separate Ext2 driver was being kept around for embedded devices with extreme memory or storage limitations, where saving some kilobytes by leaving out all the newer Ext3/4 features was useful. But when you can afford the extra memory, there’s no reason not to just use the Ext4 driver for all Ext2/3/4 filesystems.
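        For context, this has long been a build-time choice: a kernel could either compile the small standalone ext2 driver, or tell the ext4 driver to claim ext2 filesystems. A kernel .config fragment sketching the two approaches (option names as they appear in kernels of that era; treat this as illustrative, not a complete config):

```
# Tiny embedded build: standalone ext2 driver only
CONFIG_EXT2_FS=y
# CONFIG_EXT4_FS is not set

# Typical desktop/server build: the ext4 driver handles ext2 as well
# CONFIG_EXT2_FS is not set
CONFIG_EXT4_FS=y
CONFIG_EXT4_USE_FOR_EXT2=y
```

Removing the standalone driver effectively makes the second configuration the only one.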

    • lemmyreader@lemmy.mlOP
      link
      fedilink
      English
      arrow-up
      18
      ·
      7 months ago

      That reminds me that some howtos I’ve seen in the past recommended to use ext2 for a separate /boot partition.

      • NaN@lemmy.sdf.org
        link
        fedilink
        English
        arrow-up
        8
        ·
        edit-2
        7 months ago

        Now we use FAT on the ESP. Ext2 for /boot was pretty common in the past: journaling wasn’t really needed there, and it would work with whichever bootloader you used. At the time your other partitions might use who knows what, and bootloader support for that filesystem wasn’t guaranteed.
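        Concretely, the difference shows up in /etc/fstab. A sketch of the two layouts (device names and options are illustrative, not from any real system):

```
# Old-style BIOS setup: small dedicated ext2 /boot any bootloader could read
/dev/sda1  /boot      ext2  defaults    0  2

# Modern UEFI setup: FAT on the EFI System Partition instead
/dev/sda1  /boot/efi  vfat  umask=0077  0  1
```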

        • toastal@lemmy.ml
          link
          fedilink
          arrow-up
          1
          ·
          7 months ago

          Depends how you read that. FAT32 is basically required for /boot/EFI, but on some setups you still see /boot as a separate, old, stable filesystem. Usually it’s just a bit easier/less hassle to set the whole thing up as FAT32, but you don’t have to.

      • lemmyvore@feddit.nl
        link
        fedilink
        English
        arrow-up
        1
        ·
        7 months ago

        Perhaps for LILO compatibility? But that would make it a pretty old howto (10 years or more).

    • nyan@sh.itjust.works
      link
      fedilink
      arrow-up
      18
      arrow-down
      1
      ·
      7 months ago

      If I recall correctly, ext3 is ext2 with journalling on top, so they can’t really get rid of ext2 without also ditching ext3.

      • CosmicTurtle0@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        8
        ·
        7 months ago

        I love ZFS but support for it on Ubuntu seems haphazard. It works fine for non-root drives.

        I’ve tried running it as my root partition and just gave up after it fucked up my bpool dataset too many times.

          • CosmicTurtle0@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            3
            ·
            7 months ago

            Yup. It booted fine but after a few reboots, bpool somehow got corrupted and refused to boot. It happened repeatedly after several reinstalls.

            • qprimed@lemmy.ml
              link
              fedilink
              English
              arrow-up
              2
              ·
              7 months ago

              ZFS hits memory hard and can sometimes bring out latent deficiencies in that hardware. On non-optimal hardware it’s a bit of a hardware torture test in its own right.

              Having said that, EXT4 and XFS are wonderful unless you need ZFS/Btrfs.

            • Avid Amoeba@lemmy.ca
              link
              fedilink
              arrow-up
              1
              ·
              edit-2
              7 months ago

              Yeah, the current implementation in the installer never got beyond the experimental stage it launched with. I saw there’s a new “guided setup” in the 24.04 release notes; no idea what it entails yet. I think I’ve also seen a page for setting it up for / in OpenZFS’s docs. I might try it at some point.

      • subtext@lemmy.world
        link
        fedilink
        arrow-up
        6
        ·
        7 months ago

        Same here lol, just read through ext{2…4} as well as Btrfs and Bcachefs (and B Trees of course). What a wonderful unplanned deep dive.

      • exscape@kbin.social
        link
        fedilink
        arrow-up
        6
        ·
        edit-2
        7 months ago

        ZFS is really nice. I started experimenting with it when it was being introduced to FreeBSD, around 2007-2008, but only truly started using it last year, for two NASes (on Linux).

        It’s complex for a filesystem, but considering all it can do, that’s not surprising.

        • Peasley@lemmy.world
          link
          fedilink
          arrow-up
          1
          ·
          7 months ago

          Not recommended for single-disk root partitions; this is a mistake I’ve made myself. Recovery tools are non-existent for ZFS, so non-parity setups are inherently risky. If you have root set up on at least raidz1 with at least 2 disks, you are fine.

          • Avid Amoeba@lemmy.ca
            link
            fedilink
            arrow-up
            1
            ·
            edit-2
            7 months ago

            Personally I wouldn’t consider recovery an option at all, because it could easily be unavailable if the SSD failed. Instead, I tend to add a mirror drive, and/or keep frequent backups where that’s not possible. From that perspective ZFS is equivalent to Ext4, which I currently use. I’d prefer ZFS over it for its data verification, snapshotting and dataset features.
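            To make the data-verification point concrete: checksumming filesystems like ZFS and Btrfs store a checksum alongside every block and verify it on read, so silent bit rot becomes a detectable I/O error rather than bad data quietly handed to the application (ext4 checksums metadata, not file contents). A toy Python sketch of the idea, nothing like real filesystem code:

```python
# Toy model of per-block checksumming as done (far more elaborately)
# by ZFS and Btrfs.
import zlib

class ChecksummedStore:
    """Block store that keeps a CRC32 next to each block's data."""
    def __init__(self):
        self.blocks = {}  # block number -> (data, checksum)

    def write(self, blkno, data):
        self.blocks[blkno] = (data, zlib.crc32(data))

    def read(self, blkno):
        data, stored = self.blocks[blkno]
        if zlib.crc32(data) != stored:  # verification on every read
            raise IOError(f"checksum mismatch on block {blkno}")
        return data

store = ChecksummedStore()
store.write(0, b"important data")
assert store.read(0) == b"important data"

# Simulate silent on-disk corruption: flip one bit, keep the old checksum.
data, crc = store.blocks[0]
store.blocks[0] = (bytes([data[0] ^ 0x01]) + data[1:], crc)
try:
    store.read(0)
except IOError as e:
    print(e)  # the corruption is detected instead of returned as valid data
```

A non-checksumming store would simply return the flipped bytes here.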

            • Peasley@lemmy.world
              link
              fedilink
              arrow-up
              1
              ·
              edit-2
              7 months ago

              I’ve successfully recovered data from ext4 on a broken drive on one occasion. I agree it would have been better to have backups, so lesson learned, I suppose. Still, if I’d been on ZFS root with no mirror, I’d have been even more SOL.

    • ozymandias117@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      7 months ago

      The ext4 driver can read ext2/ext3 partitions while also handling the year-2038 timestamp issue.

      The only change here is which driver mounts the filesystem.

      Ext3 support is already only available through the ext4 driver.
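      For anyone unfamiliar with the 2038 issue: ext2/ext3 store timestamps as signed 32-bit seconds since the Unix epoch, which run out in January 2038, while ext4’s larger inodes carry extra bits that extend the range. A quick Python illustration of where the 32-bit counter ends:

```python
# Where a signed 32-bit Unix timestamp, as used by ext2/ext3, runs out.
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1  # largest value a signed 32-bit time_t can hold

last_moment = datetime.fromtimestamp(INT32_MAX, tz=timezone.utc)
print(last_moment.isoformat())  # 2038-01-19T03:14:07+00:00

# One second later the counter wraps around to December 1901.
wrapped = (INT32_MAX + 1) - 2**32
print(datetime.fromtimestamp(wrapped, tz=timezone.utc).isoformat())
```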

  • LeFantome@programming.dev
    link
    fedilink
    arrow-up
    5
    ·
    7 months ago

    You will still be able to mount an ext2 file system with the kernel until ext4 support is removed. That is still going to be a long, long time.

    • KISSmyOSFeddit@lemmy.world
      link
      fedilink
      arrow-up
      25
      arrow-down
      1
      ·
      7 months ago

      Sure, as soon as there’s a stable replacement available.
      I wouldn’t put my mission-critical file server on BTRFS.

      • TCB13@lemmy.world
        link
        fedilink
        English
        arrow-up
        14
        arrow-down
        7
        ·
        7 months ago

        I wouldn’t put my mission-critical file server on BTRFS.

        Oh, but I and a lot of other people do, and it is way more reliable than the ext* filesystems ever were. Maybe ZFS or XFS is more your style then? Ext4 is very, very prone to total failure and complete data loss at the slightest hardware issue. I’m not saying you should ever rely on any filesystem alone; backups are important and should be there. The thing is that recovering from backups takes time, and the amount of recovery that ext forced me into over the years just isn’t acceptable.

        • pimeys@lemmy.nauk.io
          link
          fedilink
          arrow-up
          26
          arrow-down
          1
          ·
          7 months ago

          ZFS is still the de facto standard for a reliable filesystem. It’s super stable, and annoyingly strict about what you can do with it. Its RAID5 and RAID6 support is the only software RAID at those levels that is guaranteed not to eat your data. I’ve run a TrueNAS server with RAID6 for years now, with absolutely no issues and tens of terabytes of data.

          But these copy-on-write filesystems, such as ZFS or btrfs, are not great for all purposes. For example, running a Postgres server on any CoW filesystem will require a lot of tweaking to get reasonable database speeds. It’s doable, but there are a lot of settings to change.

          On the code quality of Linux filesystems, Kent Overstreet, the author of the next new CoW filesystem, bcachefs, has a good write-up of the ups and downs:

          • ext4, which works - mostly - but is showing its age. The codebase terrifies most filesystem developers who have had to work on it, and heavy users still run into terrifying performance and data corruption bugs with frightening regularity. The general opinion of filesystem developers is that it’s a miracle it works as well as it does, and ext4’s best feature is its fsck (which does indeed work miracles).
          • xfs, which is reliable and robust but still fundamentally a classical design - it’s designed around update in place, not copy on write (COW). As someone who’s both read and written quite a bit of filesystem code, the xfs developers (and Dave Chinner in particular) routinely impress me with just how rigorous their code is - the quality of the xfs code is genuinely head and shoulders above any other upstream filesystem. Unfortunately, there is a long list of very desirable features that are not really possible in a non COW filesystem, and it is generally recognized that xfs will not be the vehicle for those features.
          • btrfs, which was supposed to be Linux’s next generation COW filesystem - Linux’s answer to zfs. Unfortunately, too much code was written too quickly without focusing on getting the core design correct first, and now it has too many design mistakes baked into the on disk format and an enormous, messy codebase - bigger than xfs. It’s taken far too long to stabilize as well - poisoning the well for future filesystems because too many people were burned on btrfs, repeatedly (e.g. Fedora’s tried to switch to btrfs multiple times and had to switch at the last minute, and server vendors who years ago hoped to one day roll out btrfs are now quietly migrating to xfs instead).
          • zfs, to which we all owe a debt for showing us what could be done in a COW filesystem, but is never going to be a first class citizen on Linux. Also, they made certain design compromises that I can’t fault them for - but it’s possible to do better. (Primarily, zfs is block based, not extent based, whereas all other modern filesystems have been extent based for years: the reason they did this is that extents plus snapshots are really hard).

          I started evaluating bcachefs on my main workstation when it arrived in the stable kernels. It can do pretty good RAID1 with encryption and compression, a combination that isn’t really available integrated into the filesystem anywhere else but ZFS. And ZFS doesn’t work with all kernels, which prevents updating to the latest and greatest. It is already a pretty usable system, and in a few years it will probably take the crown as the default filesystem in mainstream distros.
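          A toy illustration of the block-based vs extent-based distinction from the quote: for a contiguous file, a block-based layout (zfs-style) records one pointer per block, while an extent-based layout (ext4/xfs/btrfs-style) records a single (start, length) run. Illustrative Python, not real on-disk structures:

```python
# Mapping metadata needed for a 1 MiB file stored contiguously on disk.
BLOCK = 4096
FILE_SIZE = 1024 * 1024
nblocks = FILE_SIZE // BLOCK  # 256 blocks of 4 KiB

# Block-based (zfs-style): one logical->physical pointer per block.
block_map = [(i, 1000 + i) for i in range(nblocks)]

# Extent-based (ext4/xfs/btrfs-style): one record per contiguous run,
# as (logical start, physical start, length).
extent_map = [(0, 1000, nblocks)]

print(len(block_map), "block pointers vs", len(extent_map), "extent record")
```

The gap widens with file size, which is part of why extents won out for most modern designs.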

          • TCB13@lemmy.world
            link
            fedilink
            English
            arrow-up
            4
            arrow-down
            1
            ·
            7 months ago

            ZFS is still the de-facto standard of a reliable filesystem. It’s super stable, and annoyingly strict on what you can do with it.

            Yes and that’s the reason why I usually pick BTRFS for less complex things.

            • pimeys@lemmy.nauk.io
              link
              fedilink
              arrow-up
              5
              arrow-down
              1
              ·
              7 months ago

              Yeah. I would not, for example, install ZFS on a laptop. It’s just not great there: it doesn’t like things such as sudden power failure, and it uses kind of a lot of memory…

              • TCB13@lemmy.world
                link
                fedilink
                English
                arrow-up
                4
                ·
                7 months ago

                Meanwhile BTRFS provides me with snapshots and rollbacks that are useful when I’m messing with the system. And subvolumes bring a lot of flexibility for containers and general management.
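                Those cheap snapshots and rollbacks fall out of copy-on-write: a snapshot just shares the live tree’s blocks, and later writes go to new locations, so the snapshot keeps seeing the old data. A toy Python sketch of the mechanism (nothing like real Btrfs internals):

```python
# Copy-on-write snapshots in miniature: a snapshot copies only the block
# mapping; data is shared until one side writes.
class CowVolume:
    def __init__(self, blocks=None):
        self.blocks = dict(blocks or {})  # block number -> data

    def snapshot(self):
        # Cheap: duplicate the mapping, share the underlying data.
        return CowVolume(self.blocks)

    def write(self, blkno, data):
        # New data goes to a "new location"; the snapshot's mapping
        # still points at the old contents.
        self.blocks[blkno] = data

vol = CowVolume({0: "config-v1"})
snap = vol.snapshot()       # take a snapshot before messing with the system
vol.write(0, "config-v2")   # the live volume moves on...
print(vol.blocks[0], "/", snap.blocks[0])  # config-v2 / config-v1
```

Rolling back is then just making the snapshot’s mapping the live one again.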

                • pimeys@lemmy.nauk.io
                  link
                  fedilink
                  arrow-up
                  1
                  ·
                  7 months ago

                  For sure. I would say that if you run a distro like Arch, using it without a CoW filesystem and snapshots is not a good idea… You can even integrate snapshots with pacman and the bootloader.

                  I’ve been running NixOS for so long that I don’t really need snapshots. You can always boot into the previous state if needed.

                  If you write software and run tests against a database, I’d avoid putting the Docker volumes on a btrfs pool. The performance is not great.

        • JetpackJackson@feddit.de
          link
          fedilink
          arrow-up
          11
          ·
          7 months ago

          Do you have a source for the ext4 failure stuff? I use ext4 currently and want to see if there’s something I need to do now other than frequent backups

          • atzanteol@sh.itjust.works
            link
            fedilink
            English
            arrow-up
            22
            ·
            edit-2
            7 months ago

            They don’t. Ext4 has been the primary production filesystem for over 15 years, and it’s basically a modified ext3, so the format has been around even longer.

            It’s very stable. It’s still the default for many distros even.

          • kurushimi@lemmyonline.com
            link
            fedilink
            English
            arrow-up
            5
            ·
            edit-2
            7 months ago

            I used ext4 extensively in an HPC setting a few jobs ago (many petabytes). Some of the server clusters were in areas with very unreliable power grids, like Indonesia. Using fsck.ext4 had become our bread and butter, but it was also nerve-wracking, because in the worst failures involving power loss or failed RAID cards we sometimes didn’t get clean fscks. Most often this resulted in loss of file metadata, which was a pain to recover from. To its credit, as another quote in this thread mentions, fsck.ext4 has a very high success rate, but honestly you shouldn’t need to intervene manually as a filesystem admin in an ideal world. That’s the sort of thing next-gen filesystems attempt to provide.

          • sep@lemmy.world
            link
            fedilink
            arrow-up
            4
            ·
            7 months ago

            Not seen an fs corruption yet, but I have only run ext4 on around 350 production servers since 2010-ish.
            Have of course seen plenty of hardware failures. But if a disk is doing the clicky, it is not another filesystem that saves you.

            Have regularly tested backups!

          • nyan@sh.itjust.works
            link
            fedilink
            arrow-up
            2
            ·
            7 months ago

            ext4 is still solid for most use cases (I also use it). It’s not innovative, and possibly not as performant as the newer file systems, but if you’re okay with that there’s nothing wrong with using it. I intend to look into xfs and btrfs the next time I spin up a new drive or a new machine, but there’s no hurry (and I may not switch even then).

            There’s an unfortunate tendency for people who like to have the newest and greatest software to assume that the old code their new-shiny is supposed to replace is broken. That’s seldom actually the case: if the old software has been performing correctly all this time, it’s usually still good for its original use case and within the scope of its original limitations and environment. It only becomes truly broken when the appropriate environment can’t be easily reproduced or one of the limitations becomes a significant security hole.

            That doesn’t mean that shiny new software with new features is bad, or that there isn’t some old software that has never quite performed properly, just that if it ain’t broke, it’s okay to set a conservative upgrade schedule.

          • TCB13@lemmy.world
            link
            fedilink
            English
            arrow-up
            0
            ·
            7 months ago

            Well, a few years ago I actually did some research into that, but didn’t find much. What I said was my personal experience, but now we also have companies like Synology pushing Btrfs for home and business customers, and they surely have analytics on that… since they’re trying to move everything…

        • lemmyreader@lemmy.mlOP
          link
          fedilink
          English
          arrow-up
          5
          ·
          7 months ago

          For years I used to associate btrfs with the word “unreliable”, based on what I’ve read here and there, while ext4 appears to be rock solid. Pointing to sources for this is not easy, though. Here’s a start.

          See Features and Caveats here for Btrfs : https://wiki.gentoo.org/wiki/Btrfs#Features

          For Ext4 https://wiki.gentoo.org/wiki/Ext4

          ext4 (fourth extended file system) is an open source disk filesystem and most recent version of the extended series of filesystems. It is the primary file system in use by many Linux systems rendering it to be arguably the most stable and well tested file system supported in Linux.

          • TCB13@lemmy.world
            link
            fedilink
            English
            arrow-up
            10
            arrow-down
            2
            ·
            edit-2
            7 months ago

            The “Caveats” section for BTRFS is trash; it is all about an ENOSPC issue that requires you to mess with the thing at a low level, or to run the fs for years under constant writes without any kind of maintenance (and with automatic defragmentation explicitly disabled). Frankly, off the top of my head I can point at real issues they aren’t speaking about: RAID56 (everything?) and RAID10 (reading performance could be improved with more parallelization).

            If we take subvolumes, snapshots, deduplication, CoW, checksums and compression into consideration, then there’s no reason to ever use ext4; it is just… archaic. Synology is pushing Btrfs for home and business, so they must have analytics backing that as well.