• Tolookah@discuss.tchncs.de · 36 points · 2 days ago

    I use bit masks, suck it! (Really though, on an embedded CPU this might be reasonable to do, depending on the situation, but on a PC, trying not to waste bits wastes time.)
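
    A minimal sketch of what the mask approach looks like in C (the flag names here are made up):

    #include <stdbool.h>
    #include <stdint.h>

    // hypothetical status flags packed into a single byte
    #define FLAG_READY (1u << 0)
    #define FLAG_ERROR (1u << 1)
    #define FLAG_DONE  (1u << 2)

    static uint8_t flags = 0;

    int main(void) {
      flags |= FLAG_READY;                   // set a bit
      flags &= (uint8_t)~FLAG_ERROR;         // clear a bit
      bool done = (flags & FLAG_DONE) != 0;  // test a bit
      return done ? 1 : 0;                   // exit code just reuses the tested bit
    }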

    • Binette@lemmy.ml · 12 points · 2 days ago

      Exactly! It’s more costly for your PC’s CPU to check a single bit inside a byte than to just read the whole byte, because addresses only point to bytes.

    • jsomae@lemmy.ml · 5 points · 2 days ago

      Unlikely. Most of the time on modern hardware, you’re going to be cache-limited, not cycle-limited. Checking one bit in a register is insanely fast.

      • ByteSorcerer@beehaw.org · 1 point · 11 hours ago

        x86 has bit-manipulation instructions for any bit. If you have a bool stored in bit 5, it doesn’t need to do any masking; it can just directly check the state of bit 5. If you do masking in a low-level programming language to access individual bits, the compiler’s optimizer will almost always turn it into the corresponding bit-manipulation instructions.

        So there’s not even a performance impact if you’re cycle-limited. If you have to operate on a large number of bools, packing 8 of them into each byte can sometimes actually improve performance, since you use the cache more efficiently. Though unless you’re working with thousands of bools in a fast-running loop, you’re unlikely to really notice the difference.
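
        As a rough sketch of that packed layout (the helper names are made up), eight bools share each byte of a plain array:

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdint.h>

        // bit i lives in byte i/8, at position i%8
        static inline bool bits_get(const uint8_t *bits, size_t i) {
          return (bits[i / 8] >> (i % 8)) & 1u;
        }

        static inline void bits_set(uint8_t *bits, size_t i, bool value) {
          if (value)
            bits[i / 8] |= (uint8_t)(1u << (i % 8));
          else
            bits[i / 8] &= (uint8_t)~(1u << (i % 8));
        }

        Compared to an array of one-byte bools, this fits eight times as many flags into each cache line, which is where the potential speedup comes from.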

        But most bool implementations still end up wasting 7 out of 8 bits (or sometimes even 15 out of 16 or 31 out of 32, to align to the word size of the device), simply because that generally produces the most readable code. Programming languages are not designed only for computers but also for the humans who have to work on and maintain the code, and wasting bits in a bool happens to be the better trade-off for keeping code readable and maintainable.

        • jsomae@lemmy.ml · 1 point · 10 hours ago

          That bools are stored in 8 bits rather than 1 is a compiler detail. I don’t really see how this improves readability, unless you mean that of the compiled binary.

    • mindbleach@sh.itjust.works · 6 points · 2 days ago

      Even on 6502, the BIT command is useless 99% of the time, and AND ~which_bit is the right answer.

      Interestingly the Intel MCS-51 ISA did have several bit-addressable bytes. Like a weird zero page.

      • jsomae@lemmy.ml · 3 points · 2 days ago

        Only because the 6502 has no BIT immediate – only BIT zero page and BIT absolute. But the contemporary Z80 and the Game Boy CPU do have dedicated bit instructions, e.g. BIT 6,C (tests bit 6 of register C and sets the Z flag accordingly).

        • mindbleach@sh.itjust.works · 3 points · 2 days ago

          I think it’s intended for checking the same bit in multiple bytes. You load the mask instead of the data.

          So much 6502 ASM involves turning your brain inside-out… despite being simple, clever, and friendly. Like how you can’t do a strided array sensibly because there are no address registers. There is no “next byte.” Naively, you’d keep varied data at the same index in separate arrays. Buuut because each read address is absolute, you can do *(&array+1)[n] for free.

          What I really miss on NES versus Game Boy is SWAP.

  • ch00f@lemmy.world · 27 points (1 down) · 2 days ago

    I’ve been working on disassembling some 8-bit code from the 90s. Fuckers returned bits from functions using the overflow bit. Nuts.

  • jsomae@lemmy.ml · 25 points · 2 days ago (edited)

    Use bit-fields:

    #include <stdbool.h>

    struct flags {  // the tag name is arbitrary
      bool a : 1;
      bool b : 1;
      bool c : 1;
      //...
    };
    

    Edit: careful not to use a 1-bit signed int bit-field, since its only values are 0 and -1, not 0 and 1. This tripped me up once.
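
    To illustrate that edit (the struct and field names are made up): a 1-bit signed bit-field can only represent 0 and -1, so storing 1 typically reads back as -1.

    #include <stdio.h>

    struct gotcha {
      signed int flag : 1;  // value range is {-1, 0}, not {0, 1}
    };

    int main(void) {
      struct gotcha g = {0};
      g.flag = 1;  // out of range for the field; on common compilers it reads back as -1
      printf("flag = %d\n", g.flag);           // typically prints -1
      printf("flag == 1? %d\n", g.flag == 1);  // typically prints 0
      return 0;
    }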

    • Strawberry@lemmy.blahaj.zone · 2 points · 2 days ago

      In a world where a bigger memory chip only costs a few cents more, even in the situations where this would be most useful, is this feature still relevant?

      • kora@sh.itjust.works · 5 points · 2 days ago (edited)

        Yes, firmware running on bare metal requires good resource management. My current development board’s processor has 512 KB of SRAM. That’s about half the size of an average PDF.

      • jsomae@lemmy.ml · 2 points · 2 days ago (edited)

        Yes, because cache optimization is still important. It’s also useful for keeping packet sizes down, reducing the size of file formats, and anywhere you use hundreds of thousands of instances of the struct.
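
        For a sense of the savings (the struct names are made up), eight 1-bit bit-fields usually fit in one byte where eight plain bools take eight:

        #include <stdbool.h>
        #include <stdio.h>

        struct plain  { bool a, b, c, d, e, f, g, h; };  // typically 8 bytes
        struct packed { bool a : 1, b : 1, c : 1, d : 1,
                        e : 1, f : 1, g : 1, h : 1; };   // typically 1 byte

        int main(void) {
          // exact sizes are implementation-defined, but the ratio is usually 8:1
          printf("plain: %zu, packed: %zu\n", sizeof(struct plain), sizeof(struct packed));
          return 0;
        }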

        • Strawberry@lemmy.blahaj.zone · 1 point · 15 hours ago

          For the packet-size and file-format cases, it seems like this language feature would be less reliable than bit shifting or masking, given that different implementations may store the bits in a different order, or not compactly.
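
          A sketch of that explicit approach (the field layout here is invented): pack with shifts and masks so the bit positions are fixed by the code rather than by the compiler’s bit-field layout.

          #include <stdbool.h>
          #include <stdint.h>

          // hypothetical header byte: bit 0 = ack, bit 1 = retry, bits 2-4 = version
          static uint8_t pack_header(bool ack, bool retry, uint8_t version) {
            return (uint8_t)((ack ? 1u : 0u)
                          | ((retry ? 1u : 0u) << 1)
                          | ((version & 0x7u) << 2));
          }

          static void unpack_header(uint8_t byte, bool *ack, bool *retry, uint8_t *version) {
            *ack = (byte & 0x01u) != 0;
            *retry = (byte & 0x02u) != 0;
            *version = (uint8_t)((byte >> 2) & 0x7u);
          }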

  • lorty@lemmy.ml · 16 points · 2 days ago

    If you want to optimize to this point, do some embedded development. It’s somewhat fun to work at such a low level (testing tends to be annoying though)