It’s becoming easy to see why Linus didn’t merge anything from bcachefs for 6.17. And Kent isn’t gaining himself any supporters by tearing down other filesystems in his tantrum.

  • moonpiedumplings@programming.dev

    I’ll say it again and again. The problem is neither Linus nor Kent, but the lack of resources for independent developers to do the kind of testing that is expected of the big corporations.

    Like, one of the issues that Linus yelled at Kent about was that bcachefs would fail on big endian machines. You could spend your limited time and energy setting up an emulator of the PowerPC architecture, or you could buy real hardware at pretty absurd prices: I checked eBay, and it was $2000 for 8 GB of RAM…

    But the big corpos are different. They have these massive CI/CD systems, which automatically build and test Linux on every architecture under the sun. Then they have an extra, internal review process for these patches. And then they push.

    But Linux isn’t like that for independent developers. What they do is just compile the software on their own machine, boot into the kernel, and if it works, it works. This is how some of the Asahi developers would do it: they would just boot into their new kernel on their Macs, and it’s how I’m assuming Overstreet does it. Maybe there is some minimal testing involved.

    So Overstreet gets confused when he’s yelled at for not having tested on big endian architectures, because where is he supposed to get a big endian machine that he can afford and that can actually compile the Linux kernel in less than 10 years? And even if you do buy or emulate a big endian CPU, you’ll just get hit with “yeah, your patch has issues on machines with 2 terabytes or more of RAM”, and so on.

    One option is to drop standards. The Asahi developers were allowed to merge code without being subjected to the scrutiny that Overstreet has faced. This was partly because their work was in Rust, under the Rust subsystem, which gave them a lot more control over the parts of Linux they could merge into. The other reason was that their code was specific to MacBooks: there is no point testing MacBook-specific patches on non-Mac CPUs.

    But a better option is to make the testing resources that these corporations use available to everybody. I think the Linux Foundation should spin up a CI/CD service, so that people like Kent Overstreet can test their patches on architectures and setups they don’t have at home, and get them reviewed before they are dumped onto the mailing list, exactly like what happens at the corporations that contribute to the Linux kernel.

    • FuckBigTech347@lemmygrad.ml

      You could spend your limited time and energy setting up an emulator of the PowerPC architecture, or you could buy real hardware at pretty absurd prices: I checked eBay, and it was $2000 for 8 GB of RAM…

      You’re acting as if setting up a ppc64 VM requires insane amounts of effort, when in reality it’s trivial. It took me like a weekend to figure out how to set up a PowerPC QEMU VM and install FreeBSD in it, and I’m not at all an expert when it comes to VMs, QEMU, or PowerPC. I still use it to test software for big endian machines:

      start.sh
      #!/usr/bin/env sh
      
      if [ "$(id -u)" -ne 0 ]; then
          printf "Must be run as root.\n"
          exit 1
      fi
      
      # Note: The "-netdev" parameter forwards the guest's port 22 to port 10022
      # on the host. This lets you reach the VM by SSHing to the host on port 10022.
      qemu-system-ppc64 \
          -cpu power9 \
          -smp 8 \
          -m 3G \
          -device e1000,netdev=net0 \
          -netdev user,id=net0,hostfwd=tcp::10022-:22 \
          -nographic \
          -hda /path/to/disk_image.img
          # For the initial install, boot from the installer ISO by appending
          # to the command above:  -cdrom /path/to/installation_image.iso -boot d
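
      For completeness, here is roughly how the first boot and later access look. This is a sketch: the disk path stays a placeholder, and the guest user name ("user") and a running sshd in the guest are assumptions, not part of the original script:

      # Create the guest's disk image once, before the first (installer) boot.
      qemu-img create -f qcow2 /path/to/disk_image.img 30G

      # After installation, the hostfwd rule above maps host port 10022 to the
      # guest's port 22:
      ssh -p 10022 user@localhost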
      

      Also, you don’t usually compile stuff inside VMs (unless there is no other way). You use cross-compilation toolchains, which are just as fast as native toolchains except that they emit machine code for the architecture you’re compiling for. Testing on real hardware is only really necessary if you’re developing something like a device driver, or if the hardware has quirks that VMs simply don’t reproduce.
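
      As a concrete sketch of that workflow (assuming a Debian-style cross toolchain and QEMU's user-mode emulator; package names vary by distro):

      # Install a big endian cross-compiler plus QEMU user-mode emulation.
      sudo apt install gcc-powerpc64-linux-gnu qemu-user

      # A tiny endianness probe to cross-compile.
      cat > endian.c <<'EOF'
      #include <stdio.h>
      int main(void) {
          unsigned int x = 1;
          /* The first byte of x is 0 only on big endian machines. */
          printf("%s endian\n", *(unsigned char *)&x ? "little" : "big");
          return 0;
      }
      EOF

      # Build a static ppc64 (big endian) binary and smoke-test it, no VM needed.
      powerpc64-linux-gnu-gcc -static -o endian endian.c
      qemu-ppc64 ./endian    # prints "big endian"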

    • fruitcantfly@programming.dev

      Like, one of the issues that Linus yelled at Kent about was that bcachefs would fail on big endian machines. You could spend your limited time and energy setting up an emulator of the PowerPC architecture, or you could buy real hardware at pretty absurd prices: I checked eBay, and it was $2000 for 8 GB of RAM…

      It’s not that BCacheFS would fail on big endian machines, it’s that it would fail to even compile, and therefore impacted everyone who had it enabled in their build. And you don’t need actual big endian hardware to compile something for that arch: just now it took me a few minutes to figure out what tools to install for cross-compilation, download the latest kernel, and compile it for a big endian arch with BCacheFS enabled. Surely a more talented developer than I could easily do the same, and save everyone else the trouble of broken builds.
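
      For reference, a compile-test along those lines can be sketched like this, assuming a Debian-style cross toolchain and the usual kernel build dependencies are installed (CONFIG_BCACHEFS_FS is the mainline Kconfig symbol for BCacheFS):

      # From a kernel source tree: configure for 64-bit big endian PowerPC.
      make ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- ppc64_defconfig
      ./scripts/config --enable BCACHEFS_FS
      # olddefconfig resolves any dependencies the new option pulls in.
      make ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- olddefconfig

      # Building just fs/bcachefs/ is enough to catch compile breakage there.
      make ARCH=powerpc CROSS_COMPILE=powerpc64-linux-gnu- -j"$(nproc)" fs/bcachefs/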

      ETA: And as pointed out in the email thread, Overstreet had bypassed linux-next, which would have allowed other people to test his code before it got pulled into the mainline tree. So he had multiple options that did not necessitate the purchase of expensive hardware.

      One option is to drop standards. The Asahi developers were allowed to merge code without being subjected to the scrutiny that Overstreet has faced. This was partly because their work was in Rust, under the Rust subsystem, which gave them a lot more control over the parts of Linux they could merge into. The other reason was that their code was specific to MacBooks: there is no point testing MacBook-specific patches on non-Mac CPUs.

      It does not sound to me like standards were dropped for Asahi, nor that their use of Rust had any influence on the standards that were applied to them. It is simply as you said: what’s the point of testing code on architectures that it explicitly does not and cannot support? As long as changes that touch generic code are tested, there is no problem, and such changes are probably a minority of those introduced by the Asahi developers.