I use bit masks, suck it! (Really though, on an embedded CPU it might be reasonable to do this, depending on the situation, but on a PC, trying not to waste bits wastes time.)
exactly! it's more costly for your pc cpu to check a single bit inside a byte than to just get the whole byte, because addresses only point to bytes
Store 8 bits in the same byte then 👌
Wrong direction!
Store each bit in its own word-length int (32 bits on most modern architectures), and program everything to do its math on arrays of 32 of those int-bits standing in for numbers!
Oh man! That took me down memory lane!
I once had to reverse engineer a database to make an invoice integration. They had an int named flags. It contained all status booleans in the entire system. Took me a while to figure that one out.
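For anyone who hasn't run into a column like that, here's a minimal C sketch of the pattern. The flag names are invented for illustration, since the whole point of the story is that the real bit meanings had to be reverse engineered:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical bit assignments for a packed "flags" column; these
   names are made up for illustration. */
enum {
    FLAG_PAID      = 1u << 0,
    FLAG_SHIPPED   = 1u << 1,
    FLAG_CANCELLED = 1u << 2,
    FLAG_ARCHIVED  = 1u << 3,
};

/* Test one status bit. */
static bool has_flag(uint32_t flags, uint32_t mask) {
    return (flags & mask) != 0;
}

/* Turn one status bit on. */
static uint32_t set_flag(uint32_t flags, uint32_t mask) {
    return flags | mask;
}
```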
We’ve all been there, friend. The bit arrays can’t hurt you now.
Unlikely. Most of the time on modern hardware, you’re going to be cache-limited, not cycle-limited. Checking one bit in a register is insanely fast.
x86 has bit-manipulation instructions for any bit. If you have a bool stored in bit 5, it doesn't need to do any masking; it can directly check the state of bit 5. And if you do write masking code in a low-level language to access individual bits, the compiler's optimizer will almost always turn it into the corresponding bit-manipulation instructions (see the sketch below).
So there's not even a performance impact if you're cycle-limited. If you have to operate on a large number of bools, packing 8 of them into each byte can sometimes actually improve performance, since you use the cache more efficiently. Though unless you're working with thousands of bools in a hot loop, you're unlikely to really notice the difference.
But most bool implementations still end up wasting 7 out of 8 bits (or sometimes even 15 out of 16 or 31 out of 32, to align to the word size of the device), simply because that generally produces the most readable code. Programming languages are designed not only for computers but also for the humans who work on and maintain the code, and wasting bits in a bool happens to be the better trade-off for keeping code readable and maintainable.
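To make that concrete, here's a minimal C sketch of a packed bit array (the function names are made up): eight bools per byte, written as plain shift-and-mask code that an optimizing compiler will usually lower to single bit-manipulation instructions on x86:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Packed bit array: eight bools per byte, accessed with plain
   shift-and-mask code. An optimizing compiler will usually lower
   these to single bit-test/bit-set style instructions. */
static bool get_bit(const uint8_t *bits, size_t i) {
    return (bits[i / 8] >> (i % 8)) & 1u;
}

static void set_bit(uint8_t *bits, size_t i, bool value) {
    if (value)
        bits[i / 8] |= (uint8_t)(1u << (i % 8));
    else
        bits[i / 8] &= (uint8_t)~(1u << (i % 8));
}
```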
That bools are stored in 8 bits rather than 1 is a compiler detail. I don’t really see how this improves readability, unless you mean that of the compiled binary.
Even on 6502, the BIT command is useless 99% of the time, and AND ~which_bit is the right answer.
Interestingly the Intel MCS-51 ISA did have several bit-addressable bytes. Like a weird zero page.
Only because the 6502 has no BIT immediate – only BIT zero page and BIT absolute. But the contemporary Z80 and the Game Boy CPU do have dedicated bit instructions, e.g. BIT 6, C (test bit 6 of register C and set the Z flag accordingly).
I think it’s intended for checking the same bit in multiple bytes. You load the mask instead of the data.
So much 6502 ASM involves turning your brain inside-out… despite the chip being simple, clever, and friendly. Like how you can't do a strided array sensibly, because there are no address registers. There is no "next byte." Naively, you'd want to keep varied data that shares an index in separate arrays. Buuut because each read address is absolute, you can do *(&array+1)[n] for free.
What I really miss on NES versus Game Boy is SWAP.
Solution? Store 8 booleans in 1 byte.
If you put them in the right order, you can store 10 bools in a byte.
kompreshun.
Odd of you to be using base 7.
I’ve been working on disassembling some 8-bit code from the 90s. Fuckers returned bits from functions using the overflow bit. Nuts.
What era was that device? Some old NES games had to use all kinds of quirks like this to overcome hardware limitations.
It’s in an AlphaSmart. I’m working through disassembling the ROM to add some new features.
Where do you go to talk about such things? Could be fun to have a retro reversing community.
Nice.
Use bit-fields:
```c
struct {
    bool a : 1;
    bool b : 1;
    bool c : 1;
    // ...
};
```
Edit: careful not to use a 1-bit signed int, since the only values are 0 and -1, not 0 and 1. This tripped me up once.
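A minimal sketch of that gotcha. Note that whether a plain int bit-field is signed is implementation-defined in C, so it pays to write signed or unsigned explicitly:

```c
#include <stdio.h>

/* The 1-bit signed bit-field gotcha. Whether a plain "int x : 1" is
   signed or unsigned is implementation-defined, so spell it out. */
struct flags {
    signed int   s : 1;   /* representable values: 0 and -1 */
    unsigned int u : 1;   /* representable values: 0 and 1  */
};

int main(void) {
    struct flags f = { .s = 1, .u = 1 };   /* 1 doesn't fit in s; it typically wraps to -1 */
    printf("s = %d, u = %d\n", f.s, (int)f.u);  /* commonly prints: s = -1, u = 1 */
    return 0;
}
```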
This is both the right and wrong answer
In a world where a bigger memory chip is only a few cents more expensive, even in the situations where this would be most useful, is this feature still relevant?
Yes, firmware running on bare metal requires careful resource management. My current development board's processor has 512 KB of SRAM; that's about half the size of an average PDF.
Yes, because cache optimization is still important. Also useful to keep the size of packets down, to reduce the size of file formats, and anywhere that you use hundreds of thousands of instances of the struct.
For the packet-size and file-format cases, it seems like this language feature would be less reliable than explicit bit shifting and masking, given that different implementations may store the bits in a different order, or not compactly at all.
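That's the usual argument: the C standard doesn't pin down bit-field ordering or padding, so for anything that crosses a wire or hits disk, explicit shifts and masks keep the layout in your hands. A rough sketch with a made-up one-byte header:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical one-byte header packed by hand, so the bit layout is
   fixed by this code rather than by whatever the compiler decides to
   do with a bit-field. */
static uint8_t pack_header(bool ack, bool retry, uint8_t channel /* 0..63 */) {
    return (uint8_t)((ack   ? 0x80u : 0u) |   /* bit 7    */
                     (retry ? 0x40u : 0u) |   /* bit 6    */
                     (channel & 0x3Fu));      /* bits 5..0 */
}

static void unpack_header(uint8_t b, bool *ack, bool *retry, uint8_t *channel) {
    *ack     = (b & 0x80u) != 0;
    *retry   = (b & 0x40u) != 0;
    *channel = (uint8_t)(b & 0x3Fu);
}
```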
If you want to optimize to this point, do some embedded development. It’s somewhat fun to work at such a low level (testing tends to be annoying though)
Embedded SW dev here; don’t listen to this, fly you fools!