> Umm, 6 bits is perfect for BCD, look at IBM's 1620: 4 bits BCD, 1 bit
> sign flag/length flag, 1 bit parity.
Very inefficient. I hope you are not serious.
Back in the 1960s, most data was numeric, due to banking. It may still be
the dominant form (with maybe porn mpgs a close second). Each field of a
database might have 10, 12, perhaps 16 BCD characters. Why on Earth would
you want a sign bit associated with each one? If the field even needed a
sign, only one would be needed. Parity? That is the job of the memory
controller - having the processor figure out parity is just a waste of
CPU.
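To put rough numbers on it, here is a quick back-of-the-envelope sketch in C
(my own illustration, not anything out of a 1620 or 360 manual) comparing a
12-digit field stored as 6-bit characters against S/360-style packed decimal,
two digits per byte with a single sign nibble for the whole field:

#include <stdio.h>

/* Back-of-the-envelope only - the 12-digit field is an assumed example.
 * 6-bit characters: 4 BCD bits + sign/flag bit + parity bit per digit.
 * Packed decimal (S/360 layout): two 4-bit digits per byte, one sign
 * nibble at the end of the field.                                      */
int main(void)
{
    int digits = 12;

    int sixbit_bits  = digits * 6;       /* flag and parity repeated in every digit */
    int packed_bytes = (digits + 2) / 2; /* ceil((digits + 1) / 2) bytes            */
    int packed_bits  = packed_bytes * 8; /* one sign nibble serves the whole field  */

    printf("6-bit characters: %d bits\n", sixbit_bits);
    printf("packed decimal:   %d bits (%d%% of the 6-bit size)\n",
           packed_bits, 100 * packed_bits / sixbit_bits);
    return 0;
}

That works out to 72 bits against 56 for the same field.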
You need to realize that back in the 1960s, each bit was counted with a
price tag. A medium-sized mainframe might only have 64K with a few tens of
megs on disk. A batch might take all night to run, with no time for
fooling around with extra bits.
Basically, sixbit died when it should have. The DEC 36-bit line suffered
from really bad timing (the S/360 was being planned, unknown to DEC, when
the PDP-6 was being wheeled out; the S/360 made the world 8 bits and
signed the PDP-6/10's death certificate).
> The IBM 360 and I think marketing ... bytes give you 4x bigger memory
> size, 1/4 the cost and 1/9 real $ savings over 36 bit words.
I don't understand this - clarify?
William Donzelli
aw288 at osfn.org