That depends on whether you trust the datapath between the memory and
the CPU. In the 1620, the memory is a separate cabinet, connected by
cables to the CPU cabinet. A wise designer would run parity on that
interconnect. The same principle still holds today: high-end "system
on a chip" designs have ECC memory AND parity (at least) on the buses
-- even though those buses run entirely inside the chip.
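To make that concrete, here is roughly what a parity check on a bus
amounts to, sketched in C -- parity32 and bus_ok are made-up names,
not any real bus interface, and real hardware does the fold with a
tree of XOR gates rather than shifts:

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    /* XOR-fold all 32 bits of the word down to one bit.
       Returns 1 if the word has an odd number of set bits. */
    static bool parity32(uint32_t w)
    {
        w ^= w >> 16;
        w ^= w >> 8;
        w ^= w >> 4;
        w ^= w >> 2;
        w ^= w >> 1;
        return w & 1;
    }

    /* Receiver side: recompute parity and compare with the bit
       sent alongside the data. A single flipped bit on the
       interconnect makes this check fail. */
    static bool bus_ok(uint32_t data, bool parity_bit)
    {
        return parity32(data) == parity_bit;
    }

    int main(void)
    {
        uint32_t word = 0xDEADBEEF;
        bool p = parity32(word);    /* sender computes and sends p */
        printf("check %s\n", bus_ok(word, p) ? "passes" : "fails");
        return 0;
    }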
Most IBMs do quite a lot of checking on just about every datapath. I think
this all started with the S/360. Many even have duplicate ALUs for
checking.
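The duplicate-ALU trick is just run-it-twice-and-compare. A minimal
sketch in C, with made-up names -- in the real machines the two units
are physically separate pieces of hardware, and a mismatch raises a
machine check rather than aborting a process:

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Stand-ins for two independent ALUs; in silicon the compare
       is a bank of XOR gates across the two result buses. */
    static uint32_t alu_a(uint32_t x, uint32_t y) { return x + y; }
    static uint32_t alu_b(uint32_t x, uint32_t y) { return x + y; }

    /* Run the operation through both units and trap on disagreement. */
    static uint32_t checked_add(uint32_t x, uint32_t y)
    {
        uint32_t a = alu_a(x, y);
        uint32_t b = alu_b(x, y);
        if (a != b)
            abort();    /* machine check in the real hardware */
        return a;
    }

    int main(void)
    {
        printf("%u\n", (unsigned)checked_add(2, 2));  /* 4, if both agree */
        return 0;
    }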
Oh, and of course processors never figure memory parity in software,
only in hardware, so the "waste of CPU" objection can't apply.
As for the sign bit per digit: in the 1620 that bit (the "flag" bit,
in IBM's terms) serves two purposes -- on the least significant digit
it is the sign of the number, and on the most significant digit it is
the "this is the last digit of the field" marker.
But the original argument was that sixbit is great for numeric data.
It is not, no matter how you look at it. How the 1620 popped up in
this thread is still a little odd.
Later IBMs, though, disposed of the sign/marker bit: variable-length
words (BCD or character) were handled with a start address and a stop
address (or a length). One had to be careful, or a large addition
could overwrite itself.
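A minimal sketch in C of that hazard, with illustrative names and
layout -- an in-place, digit-at-a-time decimal add over fields named
by address and length, where a too-long or overlapping result clobbers
its own operands:

    #include <stdio.h>

    /* One decimal digit per byte, low-order digit at the highest
       address of each field. Adds the src field into the dst field. */
    static void decimal_add(unsigned char *mem,
                            int dst, int dst_len,   /* result field */
                            int src, int src_len)   /* addend field */
    {
        int carry = 0;
        for (int i = 0; i < dst_len; i++) {
            int d = mem[dst + dst_len - 1 - i];
            int s = (i < src_len) ? mem[src + src_len - 1 - i] : 0;
            int sum = d + s + carry;
            carry = (sum >= 10);
            mem[dst + dst_len - 1 - i] = sum % 10; /* written back in place */
        }
        /* A carry out of the high-order digit is silently lost here;
           and if dst overlaps src, the writebacks above clobber addend
           digits before they are read -- the self-overwrite hazard. */
    }

    int main(void)
    {
        unsigned char mem[] = { 9, 9, 9, 1 };  /* 999 at 0, 1 at 3 */
        decimal_add(mem, 0, 3, 3, 1);          /* 999 + 1 = 000, carry lost */
        printf("%d%d%d\n", mem[0], mem[1], mem[2]);
        return 0;
    }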
William Donzelli
aw288 at osfn.org