I think it's also telling that the IETF uses the term "octet" in all of its
specifications to refer to 8-bit data, since "byte" (from older machines) could
be anything and is thus somewhat ambiguous.
It *may* have been the IBM 360 that started the trend of Byte == 8-bits as the 360's
memory (in IBM's terms) was byte addressable and the instructions for accessing
them were "byte" instructions (as opposed to half-word and word instructions).
TTFN - Guy
On Jan 6, 2019, at 10:19 AM, Noel Chiappa via cctalk
<cctalk at classiccmp.org> wrote:
From: Grant Taylor
Is "byte" the correct term for 6-bits? I thought a "byte" had always been
8-bits.
I don't claim wide familiarity with architectural jargon from the early days,
but the PDP-10 at least (I don't know about other prominent 36-bit machines
such as the IBM 7094/etc, and the GE 635/645) supported 'bytes' of any size,
with 'byte pointers' used in a couple of instructions which could extract and
deposit 'bytes' from a word; the pointers specified the starting bit, and the
width of the 'byte'. These were used for both SIXBIT (an early character
encoding), and ASCII (7-bit bytes, 5 per word, with one bit left over).
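
(For illustration, a minimal C sketch of that byte-pointer idea - the names
load_byte, p, and s are mine, not PDP-10 mnemonics, and the 36-bit word lives
in the low bits of a uint64_t. It packs five 7-bit ASCII characters into a
word with one bit left over, then extracts them by position and width, which
is roughly what LDB did given a byte pointer:)

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch only, not PDP-10 code: extract an s-bit 'byte' that sits
       p bits above the low end of the word - the position and width a
       byte pointer carried. */
    static uint64_t load_byte(uint64_t word, unsigned p, unsigned s)
    {
        return (word >> p) & ((1ULL << s) - 1);
    }

    int main(void)
    {
        /* Pack five 7-bit ASCII characters into a 36-bit word, left to
           right, leaving the low bit over - the layout described above. */
        const char *msg = "HELLO";
        uint64_t word = 0;
        for (int i = 0; i < 5; i++)
            word |= (uint64_t)(msg[i] & 0x7F) << (29 - 7 * i);

        /* Pull each one back out: character i starts 29 - 7*i bits up. */
        for (int i = 0; i < 5; i++)
            putchar((int)load_byte(word, 29 - 7 * i, 7));
        putchar('\n');   /* prints HELLO */
        return 0;
    }
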
I would have blindly substituted "word" in place of "byte" except for the
fact that you subsequently say "12-bit words". I don't know if "words" is
parallel on purpose, as in representing a quantity of two 6-bit words.
I think 'word' was usually used to describe the instruction size (although
some machines also supported 'half-word' instructions), and also the
machine's 'ordinary' length - e.g. for the accumulator(s), the quantum of
data transfer to/from memory, etc. Not necessarily memory addresses, mind -
on the PDP-10, those were 18 bits (i.e. half-word) - although the smallest
thing _named_ by a memory address was usually a word.
Noel