Well, the 1620 was a variable-length machine ... A sign/flag bit made
more sense at the time since
you only had as many BCD digits as you needed.
It is still very inefficient, with lots of wasted bits. It would not
matter with a small machine like a 1620, but it does when the system gets
larger. Even a small S/360 dwarfs a 1620. All those wasted bits add up.
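To put a rough number on "all those wasted bits" (my own illustration, not from the original posts): packed BCD spends four bits per decimal digit, while plain binary only needs about 3.32 bits per digit, so the overhead grows with the word size.

```python
def bcd_bits(n: int) -> int:
    """Bits to store n in packed BCD: 4 bits per decimal digit."""
    return 4 * len(str(n))

def binary_bits(n: int) -> int:
    """Bits to store n in plain binary."""
    return max(1, n.bit_length())

# The gap widens as values grow: a six-digit number wastes 4 bits,
# a ten-digit number wastes 6.
for n in (9, 999_999, 9_999_999_999):
    print(f"{n}: BCD {bcd_bits(n)} bits, binary {binary_bits(n)} bits")
```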
The extra bits were hidden, but parity was the price
you paid at the time
for error checking
core memory.
Parity checking is the job of the memory controller, not the processor. In
fact, I am having a hard time thinking of a processor that did its own
parity checking in software (yes, I know any processor could do it, but
did any really do it?). Even if the parity checking is a lowly 74180, like
in a microcomputer, it is still not bogging down the processor. The
processor really doesn't need to know about parity, unless things go bad.
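For what a chip like the 74180 actually does, here is a minimal sketch (my own illustration, in Python rather than TTL): generate an even-parity bit on write, recompute and compare on read, and only raise an error when they disagree.

```python
def parity_bit(byte: int) -> int:
    """Even-parity bit over 8 data bits: 1 when the count of 1-bits
    is odd, so data + parity together always hold an even count."""
    return bin(byte & 0xFF).count("1") & 1

def parity_ok(byte: int, stored_parity: int) -> bool:
    """Memory-controller-style check: recompute and compare.
    The processor never sees any of this unless it fails."""
    return parity_bit(byte) == stored_parity

word = 0b1011_0010            # four 1-bits, so the parity bit is 0
p = parity_bit(word)
print(parity_ok(word, p))               # clean read passes
print(parity_ok(word ^ 0b0100, p))      # a single flipped bit is caught
```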
I am not an IBM fan... I support 9-bit bytes (bytes
for lack of a
better name).
9 is even more inefficient.
The PDP-6/10s may have supported them, but other
than the CPU I am building
I can't think of any other computer using them.
Some Univacs.
To clarify about IBM and bytes from a marketing
standpoint: from my viewpoint, it was a way to
mislead potential computer buyers with the new
marketing terms -- byte vs. word, 32 vs. 36 bits -- so that IBM's products
would
look better compared to the seven dwarfs at the time.
Going back a few days to a previous thread about books, I suggest you read
pages 148 and 149 of *IBM's System 360 and Early 370 Computers*. You may
then see that there was no conspiracy against sixbit.
William Donzelli
aw288 at
osfn.org