Edit: this post and my questions within were poorly formulated, mostly because I assumed there is a correlation between common word sizes in CPU architectures and the fact that I couldn’t find decimal to signed binary converters online that let me set the “word size”/number of bits I want to work with.
I am a complete beginner in the field of computers.
I am reading Code: The Hidden Language of Computer Hardware and Software by Charles Petzold (2009) and I just learned how we electronically express the logic of subtraction without using a minus sign or an extra bit to indicate positive/negative: we use two’s complement (yes, I realize that the most significant bit incidentally acts as the sign bit, but we don’t need an extra bit). Anyway, I experimented with converting both decimal and binary values into their signed counterparts, just as an exercise. To be sure I wasn’t doing anything wrong, I wanted to double-check my calculations with some “decimal to signed binary calculators” on the Internet.
I was trying to express -255 in signed binary using 10 bits. I wanted to use only 10 bits because I wanted to save on resources: to cover 1000 possible values, say -500 through 499, 10 bits are enough, since unsigned they span 0 through 1023 (10-bit two’s complement actually spans -512 through 511). I calculated -255 to be 1100000001 in 10-bit signed binary (because 255 is 0011111111, which you invert to get the one’s complement, and then you add 1).
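To double-check conversions at any bit width, here is a small Python sketch I put together (the function names are just my own, for illustration):

```python
def to_twos_complement(value, bits):
    """Return the two's complement bit string of value at the given width."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    if not lo <= value <= hi:
        raise ValueError(f"{value} does not fit in {bits} signed bits")
    # Masking with 2^bits - 1 wraps negatives around, which is exactly
    # "invert the bits and add 1".
    return format(value & ((1 << bits) - 1), f"0{bits}b")

def from_twos_complement(bit_string):
    """Interpret a bit string as a signed two's complement number."""
    bits = len(bit_string)
    raw = int(bit_string, 2)
    # A set most significant bit means the value is negative.
    return raw - (1 << bits) if raw >= (1 << (bits - 1)) else raw

print(to_twos_complement(-255, 10))        # 1100000001
print(from_twos_complement("1100000001"))  # -255
```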
I couldn’t find any converters on the Internet that allow me to set the maximum value/length, in this case 10 bits. I found a few that are 8-bit and a few that are 16-bit, which made me think of our gaming consoles, which to my knowledge evolved in increments of 8, 16, 32, 64.
I understand that we use binary to express Boolean logic and arithmetic in electronics because regulating voltage so that transistors settle into one of two states matches the true/false values of Boolean logic, and because of the technical difficulty of maintaining stable voltages in ternary and higher bases.
But why didn’t I find any converters online that allow me to set the bit length? Why did the gaming consoles’ maximum bit length evolve in those specific increments? Are there no processor architectures with word sizes other than these?


The reason is that if people want to use the same machine but store information of different sizes (e.g. one person wants to store 5 bits, while another wants to store 24), power-of-two (2^n) sizes are the cheapest way to fetch that information quickly.
For example, let’s say you have a memory with as many sets as you want, but with only one bit in each set. I store information that is 2 bits in size. I can split the 2-bit information in two and store each bit in a set one index apart. So if I wanted to read the information, I’d just read addresses 0 and 1, 10 and 11, 100 and 101. This follows a rather simple pattern: the bits from the leftmost down to the one before the rightmost form the index of each information packet, and the rightmost bit just signals whether it’s the first or the last bit of the packet.
For example, if I have 11 01, the memory would look a bit like this:
00: 1
01: 1
10: 0
11: 1
If I want to get the first packet, I just have to ask: which addresses have their leftmost bit set to 0? We can add as many more packets as we want, and the pattern still holds.
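As a rough Python sketch of that addressing rule (the names are mine, just for illustration):

```python
# Each packet is 2 bits; each memory "set" holds a single bit.
memory = [1, 1, 0, 1]  # the 11 01 example above

def read_packet(memory, packet_index):
    # The packet's first bit lives at packet_index followed by a 0,
    # i.e. packet_index shifted left by one; the second bit is right after.
    base = packet_index << 1
    return memory[base], memory[base | 1]

print(read_packet(memory, 0))  # (1, 1) -> packet 11
print(read_packet(memory, 1))  # (0, 1) -> packet 01
```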
If you were to send information that is 3 bits in size, or any size that isn’t a power of 2, you wouldn’t get an easy addressing pattern. If I were to send, for example, 101 110, I would get something like this:
000: 1
001: 0
010: 1
011: 1
100: 1
101: 0
There is no pattern I can extract from the memory’s indexing to access the information. Whereas with 2-bit information I can take all but the rightmost bit of the address as the packet index, I can’t do that for sizes that aren’t a power of 2 (3, 5, 10, etc.); finding where packet k starts now takes a real multiplication, 3 * k, instead of a shift.
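To make the contrast concrete, here is a tiny sketch of the start-address computation (my own illustration, not how any particular CPU spells it):

```python
def start_address_pow2(k, size_log2):
    return k << size_log2  # 2-bit packets: k << 1; 4-bit packets: k << 2

def start_address_other(k, size):
    return k * size        # sizes like 3 or 5 need a genuine multiplication

print(bin(start_address_pow2(2, 1)))   # 0b100: packet 2 of the 2-bit packets
print(bin(start_address_other(2, 3)))  # 0b110: packet 2 of the 3-bit packets
```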
The solution, of course, would be to have the memory sets be 3 bits in size, but we’d run into the same “problem” whenever the incoming information isn’t a multiple of 3 bits. Heck, we’d run into another similar problem, one hidden in the sets themselves rather than in the indexing.
Let’s say we want to put information that is 1 bit in length into a memory with infinitely many sets that are 3 bits in length, and I put in 1 0 1 1:
0: 101
1: 001
I can’t easily find a pattern here either. If I want to get the second piece of information (index 1), I have to compute 1 / 3 to check which memory address it goes in, then 1 % 3 to check which position it’s at. If I wanted the 4th piece of information (index 3), for example, I would get 3 / 3 = 1, then 3 % 3 = 0, so: second set, position 0. Granted, both results come out of a single division operation, but it’s still slower than just shifting bits.
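In sketch form, that lookup would be something like:

```python
def locate(i):
    # One division produces both the set index and the position inside it.
    set_index, position = divmod(i, 3)
    return set_index, position

print(locate(1))  # (0, 1): first set, position 1
print(locate(3))  # (1, 0): second set, position 0
```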
One could also just skip one bit per set when storing 1-bit information in 3-bit sets. The memory would then look like this:
0: 001
1: 101
You could then access the nth piece of information by taking all but the rightmost bit of its index as the set’s address, and using the rightmost bit to decide whether to take the first or the third bit of the set (counting from the right). For example, say I want the 4th piece of information (index 3, binary 11). The left bit, 1, is the index of the set, and the right bit, 1, says we need to take the 3rd bit.
This is better, but then we’d need to calculate how much space is wasted for different sizes of information: 4-bit items waste 2 bits, 5-bit items waste 1, and 1-bit items waste 1 (as above, where two items share a set). The formula comes out to 3 - (n % 3) padding bits, which needs yet another modulo; so while accessing the data is less of a problem, determining how much space it needs requires another awkward computation.
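A small sketch of that skip-one-bit access (same layout and right-to-left positions as the example above):

```python
# Two 1-bit items per 3-bit set, middle bit unused, positions counted
# from the right: set 0 is 001, set 1 is 101, as in the example.
memory = [0b001, 0b101]

def read_item(memory, i):
    set_index = i >> 1   # all but the rightmost bit of the index
    shift = (i & 1) * 2  # rightmost bit: 0 -> 1st bit, 1 -> 3rd bit
    return (memory[set_index] >> shift) & 1

print([read_item(memory, i) for i in range(4)])  # [1, 0, 1, 1]
```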
A final example: putting 1-bit information in 2-bit sets gives us this (with the same input as before):
0: 01
1: 11
The third bit (index 10) can be accessed by taking set 1 (from the leftmost bit of the index) at position 0 (from the rightmost bit of the index), which does give us the bit 1, the third bit of 1011.
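The same kind of access in sketch form, now nothing but shifts and masks:

```python
memory = [0b01, 0b11]  # the 1 0 1 1 example, two items per 2-bit set

def read_item(memory, i):
    # Set index and bit position both fall out of the index by shift/mask.
    return (memory[i >> 1] >> (i & 1)) & 1

print([read_item(memory, i) for i in range(4)])  # [1, 0, 1, 1]
```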
Now, if we were to store information of size 3 in these 2-bit sets, we’d have to use the same technique as when storing 2 bits in 3 bits: adding padding.
Let’s say we want to store 011 101 in 2 bit sized sets:
00: 01
01: 10
10: 10
11: 10
To determine the number of padding bits, we have to compute 3 % 2, but % 2 is very easy for computers, since you just take the last bit (the rule generalizes to % 2^n: you take the last n bits). Next, if I want to access the second piece of information (index 1), I just multiply the index by 2 (easy for computers, since it’s just a bit shift), then take that block and the block right after it. So 10 and 11, which, once the trailing padding bit is dropped, give me 101.
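A sketch of that padded layout and access (assuming, as in the table above, that the last bit of each 4-bit group is the padding):

```python
# 011 and 101 padded to 0110 and 1010, each spread over two 2-bit sets.
memory = [0b01, 0b10, 0b10, 0b10]

def read_packet(memory, k):
    base = k << 1                                  # multiply by 2 as a shift
    word = (memory[base] << 2) | memory[base + 1]  # rejoin the two sets
    return word >> 1                               # drop the trailing pad bit

print(format(read_packet(memory, 0), "03b"))  # 011
print(format(read_packet(memory, 1), "03b"))  # 101
```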
Keep in mind, this is only for machines that are meant to handle, as efficiently as possible, information of any bit size. If all your information is 5 bits in size, there is no reason to stick to a 2^n size, as you figured.
This explanation was also me just pulling out counter-examples on the fly, and I’m not in the best of states, so if there are passages that seem a bit weird or don’t explain things very well, please let me know.
Edit: formatting