On 11/6/2005 at 10:15 AM Gil Carrick wrote:
Since the decimal digits only took 4 bits and the word mark was a 7th bit,
there were two bits left to carry the sign. This machine was actually designed
to be an off-line card reader/punch/print processor, using tapes to offload
the bigger 7094-type machines, so it was very card oriented. The other two
bits represented the "zone" punches off the card. I can't remember which
combination(s) was considered a negative number, but it was the same for
any machine reading cards.
Not sure if spooling was the designed purpose of the 1401, but there was an
official IBM acronym for the 1401-7090 type combination. There were plenty
of standalone 1401s, though.
The 11 punch would be the negative sign indicator, changing 1, 2, 3, 4,
etc. to J, K, L, M. On the earlier decimal machines, ABCDEFGHI is followed
immediately by JKLMNOPQR. When EBCDIC was introduced, the card code
mapping was preserved, leading to annoying gaps in the alphabetic collating
sequence (e.g. A=C1, B=C2, but J=D1, K=D2, with hex CA-D0 unassigned). IMHO,
this is one of the strong points of ASCII--alphabetics are contiguous in
the code.
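
If anyone wants to see those gaps concretely, here is a rough Python sketch
(mine, not from any IBM reference) that builds the uppercase EBCDIC code
points from the zone groupings and flags where the sequence jumps:

    # Uppercase EBCDIC: A-I at 0xC1-0xC9, J-R at 0xD1-0xD9, S-Z at 0xE2-0xE9,
    # mirroring the 12-, 11-, and 0-zone card punches.
    ebcdic_upper = {
        **{chr(ord('A') + i): 0xC1 + i for i in range(9)},   # A..I
        **{chr(ord('J') + i): 0xD1 + i for i in range(9)},   # J..R
        **{chr(ord('S') + i): 0xE2 + i for i in range(8)},   # S..Z
    }

    alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    for a, b in zip(alphabet, alphabet[1:]):
        if ebcdic_upper[b] - ebcdic_upper[a] != 1:
            print(f"gap after {a}: 0x{ebcdic_upper[a]:02X} -> 0x{ebcdic_upper[b]:02X}")
    # Prints gaps after I (C9 -> D1) and after R (D9 -> E2).
    # In ASCII, ord(b) - ord(a) == 1 for every adjacent pair of letters.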
The 12 punch (the next row toward the top of the card) was used for
ABCDEFGHI in combination with 1-9 numeric punches. The 0 punch (the next
row toward the bottom of the card from the 11 row) was used for STUVWXYZ
(0-2, 0-3, etc.).
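
As a quick illustration of that zone-plus-digit scheme, here is a small
Python sketch of my own (not from the post) using the usual letter
assignments:

    # Map a zone punch plus a digit punch to its letter.
    # Zones: 12 -> A..I, 11 -> J..R, 0 -> S..Z (S starts at digit 2).
    ZONE_BASE = {12: ("A", 1), 11: ("J", 1), 0: ("S", 2)}

    def punches_to_letter(zone, digit):
        base_letter, first_digit = ZONE_BASE[zone]
        return chr(ord(base_letter) + (digit - first_digit))

    # Examples: 12-1 -> A, 11-4 -> M, 0-9 -> Z
    assert punches_to_letter(12, 1) == "A"
    assert punches_to_letter(11, 4) == "M"
    assert punches_to_letter(0, 9) == "Z"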
I suspect that the card code (12 punch positions per column, so a word that
is a multiple of 12 bits holds a whole number of columns read in column
binary) was one of the reasons behind the proliferation of binary machines
with words that were multiples of 12 bits in the 1950s and 1960s.
Cheers,
Chuck