On Saturday, December 06, 2014 8:21 PM, Chuck Guzis wrote:
On 12/06/2014 06:55 PM, Noel Chiappa wrote:
> From: Jerome H. Fine
> I would point out that one reason that memory may have stayed with 2 **
> 30 is that memory is usually produced and sold in multiples of 2 ** 30
> these days
>
> Main memory has pretty much _always_ been sold in blocks that were
> exact powers of two, for obvious reasons (at least, powers of two of
> the word size of the machine in question)...
Let's see; IBM 1620--basic memory size=20,000 digits, increments of
20,000 digits up to 60,000. IBM 705, 7080,...
What is curious are the marketing numbers used for the memory size: 65K,
131K, etc. I believe IBM was guilty of this in their S/360 marketing
literature.
The numbers sound bigger.
--Chuck
Welcome to the debate over who is responsible for the confusion in the use
of binary versus decimal prefixes, which has been going on at Wikipedia
since 2001; see:
https://en.wikipedia.org/wiki/Binary_prefix and
https://en.wikipedia.org/wiki/Timeline_of_binary_prefixes
Actually, I think IBM was fairly rigorous in using decimal prefixes, K
meaning 1,000, with the annotation K=1024 when appropriate, as in the Amdahl
article on S/360 architecture. I doubt it had anything to do with
appearance. More importantly, I am pretty sure that into the 1980s (perhaps
later), IBM used decimal digits with no prefixes at all in its product
literature, product specs, and operating system utilities.
IMO the mess really started with Apple's Macintosh, which reported memory
and disk capacity using K in a binary sense without any qualification. I
can't prove a negative, but I think no OS prior to the Mac OS used any
prefixes at all; they simply displayed or printed a decimal number to
however many digits were necessary, sometimes without commas. It will be
interesting to see what this group recalls.
If you think about it, mixing decimal digits with binary prefixes makes
little sense and has probably caused all sorts of problems and confusion,
such as the infamous 1.44 MB FD. I've always wondered why the programmer at
Apple didn't use decimal prefixes and avoid all this nonsense. After all,
there isn't a lot of difference in coding between a binary shift followed by
a decimal conversion, as Apple did it, and a decimal conversion followed by
a decimal shift.
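
To make that concrete, here is a rough sketch in C (purely hypothetical
code, not anything Apple or IBM actually shipped) of the two approaches,
using the 1,474,560-byte floppy as the example:

#include <stdio.h>

/* Hypothetical illustration only: two ways to report a byte count
   with a "K" label. */

/* Binary shift first, then decimal conversion (K = 1024):
   1,474,560 bytes >> 10 = 1440, printed as "1440K". */
static void print_binary_k(unsigned long bytes)
{
    printf("%luK\n", bytes >> 10);      /* divide by 1024 */
}

/* Decimal conversion with a decimal shift (K = 1000):
   1,474,560 bytes / 1000 = 1474, printed as "1474K". */
static void print_decimal_k(unsigned long bytes)
{
    printf("%luK\n", bytes / 1000);     /* divide by 1000 */
}

int main(void)
{
    unsigned long floppy = 1474560UL;   /* 2880 sectors * 512 bytes */

    print_binary_k(floppy);   /* 1440K; and 1440/1000 gives the "1.44 MB"
                                 label, a mix of binary and decimal prefixes */
    print_decimal_k(floppy);  /* 1474K, i.e. about 1.47 MB in pure decimal */
    return 0;
}

Either way the arithmetic is trivial; the only difference is which divisor
you pick and which prefix you print.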
Tom