I use bit masks, suck it! (Really though, on an embedded CPU it might be reasonable to do this, depending on the situation, but on a PC, trying not to waste bits wastes time)
exactly! it is more costly for your PC CPU to check for a bit inside a byte than to just get the byte itself, because addresses only point to bytes
Store 8 bits in the same byte then 👌
Wrong direction!
Store only one bit per word-length int (32 bits in most modern architectures), and program everything to do math using arrays of 32 of these int-bits as numbers!
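Joking aside, a minimal C sketch of the packing the thread is actually about – 32 flags kept in one word-length int and reached with shifts and masks (the helper names here are made up for illustration):

    #include <stdint.h>

    /* Hypothetical helpers: 32 flags stored in a single word-length int,
       set, cleared and tested with shifts and masks. */
    static inline void flag_set(uint32_t *flags, unsigned bit)   { *flags |=  (UINT32_C(1) << bit); }
    static inline void flag_clear(uint32_t *flags, unsigned bit) { *flags &= ~(UINT32_C(1) << bit); }
    static inline int  flag_test(uint32_t flags, unsigned bit)   { return (int)((flags >> bit) & 1u); }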
Oh man! That took me down memory lane!
I once had to reverse engineer a database to make an invoice integration. They had an int named flags. It contained all status booleans in the entire system. Took me a while to figure that one out.
We’ve all been there, friend. The bit arrays can’t hurt you now.
Unlikely. Most of the time on modern hardware, you’re going to be cache-limited, not cycle-limited. Checking one bit in a register is insanely fast.
x86 has bit manipulation instructions for any bit. If you have a bool stored in bit 5 it doesn’t need to do any masking, it can just directly check the state of bit 5. If you do masking in a low-level programming language to access individual bits then the compiler’s optimizer will almost always change it to the corresponding bit manipulation instruction.
So there’s not even a performance impact if you’re cycle-limited. If you have to operate on a large number of bools then packing 8 of them into a byte can sometimes actually improve performance, as you can then use the cache more efficiently. Though unless you’re working with thousands of bools in a fast-running loop you’re likely not going to notice the difference.
But most bool implementations still end up wasting 7 out of 8 bits (or sometimes even 15 out of 16 or 31 out of 32 to align to the word size of the device), simply because that generally produces the most readable code. Programming languages are designed not only for computers but also for the humans who have to work on and maintain the code, and wasting bits on a bool happens to be the better trade-off for keeping code readable and maintainable.
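A tiny C sketch of that point – checking a single flag through a mask; an optimizing compiler will typically lower this to one test or bit-test instruction rather than doing any visible unpacking (the flag name is hypothetical):

    #include <stdint.h>

    #define FLAG_VISIBLE (UINT32_C(1) << 5)   /* hypothetical flag living in bit 5 */

    /* Masking off one bit; compilers typically turn this into a single
       test or bit-test instruction, so the packing costs no extra cycles. */
    int is_visible(uint32_t flags)
    {
        return (flags & FLAG_VISIBLE) != 0;
    }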
That bools are stored in 8 bits rather than 1 is a compiler detail. I don’t really see how this improves readability, unless you mean that of the compiled binary.
Even on 6502, the BIT command is useless 99% of the time, and AND ~which_bit is the right answer.
Interestingly the Intel MCS-51 ISA did have several bit-addressable bytes. Like a weird zero page.
Only because the 6502 has no BIT immediate – only BIT zero page and BIT absolute. But the contemporary Z80 and the Game Boy CPU do have dedicated bit instructions, e.g. BIT 6,C (sets the Z flag according to bit 6 of register C).
I think it’s intended for checking the same bit in multiple bytes. You load the mask instead of the data.
So much 6502 ASM involves turning your brain inside-out… despite being simple, clever, and friendly. Like how you can’t do a strided array sensibly because there’s no address register(s). There is no “next byte.” Naively, you’d want to put varied data that shares an index into separate arrays. Buuut because each read address is absolute, you can do *(&array+1)[n], for free.
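A rough C-flavoured sketch of why that pushes you towards one array per field (a hypothetical table of 64 two-byte records): with an array of structs the index has to be scaled by the record size, which the 6502 has no cheap addressing mode for, while separate arrays let every field be reached with the same plain index against its own absolute base address.

    #include <stdint.h>

    /* Array-of-structs: records_aos[n].y needs a strided index (n * 2 + 1). */
    struct record { uint8_t x, y; };
    struct record records_aos[64];

    /* Struct-of-arrays: each field has its own absolute base address, so
       record n is reached with the plain index n in both arrays
       (on the 6502, an "LDA base,X"-style indexed load). */
    uint8_t record_x[64];
    uint8_t record_y[64];

    uint8_t get_y(uint8_t n) { return record_y[n]; }  /* one indexed load */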
What I really miss on NES versus Game Boy is SWAP.