On 27 Jun 2011 at 11:32, Rich Alderson wrote:
> Why would anyone waste 25 bits per word to represent
> ASCII text????
>
> (The hardware includes byte manipulation instructions which work very
> nicely.) And tape frames are 8 bits + parity, which loses a bit per
> 7-bit character so why do that? Let the tape controller handle
> packing a 36-bit word into 5 tape frames, and stop worrying about it.
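The 36-bits-into-5-frames packing mentioned above can be sketched in a few lines. This is a rough illustration of DEC-style "core dump" tape packing as commonly described: four full 8-bit frames followed by a fifth frame carrying the last 4 bits. The frame ordering and the unused high nibble of the fifth frame are assumptions of this sketch, not a controller specification.

```python
# Sketch: one 36-bit word packed into five 8-bit tape frames,
# high-order bits first; the fifth frame uses only its low 4 bits.
# Illustrative only -- frame order and padding are assumptions.

def pack_word(word36):
    """Split a 36-bit word into five 8-bit frames, high bits first."""
    assert 0 <= word36 < 1 << 36
    frames = [(word36 >> shift) & 0xFF for shift in (28, 20, 12, 4)]
    frames.append(word36 & 0x0F)  # low 4 bits in the final frame
    return frames

def unpack_word(frames):
    """Reassemble a 36-bit word from five frames."""
    word = 0
    for f in frames[:4]:
        word = (word << 8) | (f & 0xFF)
    return (word << 4) | (frames[4] & 0x0F)
```

Note that 5 frames carry 40 bits for 36 bits of payload, which is exactly the 10% waste the controller hides from the programmer.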
You misunderstand the convention. Just as there were binary punched
cards and "coded" punched cards, there were also binary tapes and
"coded" tapes. It was often the case that the tape formatter on old
mainframes could perform the necessary bit-twiddling on the fly.
So the common interchange standard for "coded" 9 track tapes was one
8-bit character (meaning a character, not just a collection of bits)
per frame. Any operating system was capable of writing such a tape.
Consider that I had to read the tape on a CDC 6600. I couldn't use
the "coded" standard interchange format (it would have been a simple
copy operation); instead I had to deal with the screwball DEC-10 binary
format. Further, consider that the CDC 6600 used 60-bit words and
was not byte-addressable. So, I had to read 15 frames at a time,
which corresponded to two 60-bit words. However, those same 120 bits
were 3-1/3 36-bit PDP-10 words, each packing five 7-bit ASCII
characters, from which I had to unpack 16-2/3 characters and convert
them to 6-bit display code (the math does work out: 15 x 8 = 120 =
2 x 60 = 3-1/3 x 36).
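The unpacking arithmetic above can be sketched in Python. The sketch below assumes frames arrive high-order byte first and uses the standard PDP-10 convention of five 7-bit characters in the high 35 bits of each 36-bit word (low bit unused); the final conversion to CDC display code is omitted.

```python
# Sketch: a stream of 8-bit tape frames is reassembled into 36-bit
# PDP-10 words, then five 7-bit ASCII characters are pulled out of
# each word.  Frame order is an assumption of this illustration.

def frames_to_words36(frames):
    """Concatenate 8-bit frames into a bit stream, cut into 36-bit words."""
    bits, nbits, words = 0, 0, []
    for f in frames:
        bits = (bits << 8) | (f & 0xFF)
        nbits += 8
        while nbits >= 36:
            nbits -= 36
            words.append((bits >> nbits) & ((1 << 36) - 1))
            bits &= (1 << nbits) - 1
    return words

def word36_to_ascii(word):
    """Unpack five 7-bit ASCII characters from bits 1-35 of a word."""
    return [(word >> s) & 0x7F for s in (29, 22, 15, 8, 1)]
```

Since 36 does not divide 120, word boundaries straddle the two 60-bit CDC words in each read, which is exactly the bookkeeping headache Chuck describes.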
You get the idea. Conversion between mainframe formats could be a
lot of fun.
--Chuck