Another silly mental ramble.
I'm going to show my age by wondering if calling "scientific notation"
"floating point" is just something we've learned to do without thinking too
hard about it. But it seems to me that there's a difference.
Suppose I have a numeric field and it's 10 decimal digits wide (think of a
cheap calculator display). If I position a decimal point anywhere within
the 10 digit field, I can represent some nonzero numbers between
9 999 999 999 and .000 000 000 1 (if my model allows for a sign, I can
also represent the same number of negative values). Note that the number
of magnitudes I can represent is finite: at most 1.1 x 10^11 combinations
of digit pattern and point position, and somewhat fewer distinct values
than that, since many combinations name the same number. (The all-zeros
pattern alone, for instance, can take the point in any of its 11 positions
and still mean positive zero.) This is what used to be meant by "floating
point" in the old days--and I suspect that some modern embedded systems
still use a type of this to save on cycles and space.
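
If it helps to make that counting concrete, here's a little Python sketch
of the same scheme, shrunk to a 3-digit field so it can be brute-forced.
(The rule that the point may sit before, between, or after the digits is
just my reading of "anywhere within the field".)

    # Tiny model of the old-style "floating point" field: W decimal digits,
    # with the decimal point allowed in any of W+1 positions (before the
    # first digit, between any two digits, or after the last one).
    from fractions import Fraction

    W = 3  # field width -- 10 in the calculator example, kept small here

    values = set()
    combinations = 0
    for digits in range(10**W):        # every W-digit pattern, 000..999
        for point in range(W + 1):     # how many digits sit after the point
            combinations += 1
            values.add(Fraction(digits, 10**point))

    print("combinations:", combinations)        # (W+1) * 10**W = 4000
    print("distinct magnitudes:", len(values))  # 3601: duplicates collapse
    print("largest:", float(max(values)))       # 999.0
    print("smallest nonzero:", float(min(v for v in values if v)))  # 0.001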
Scientific notation (where an exponent is kept separately), on the other
hand, can express a much greater number of values, because the exponent
lets the point sit arbitrarily far from the digits instead of being
confined to the 10-digit field. So why do we persist in calling it
"floating point"?
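
For contrast, the same back-of-the-envelope with the exponent kept
separately. (The +/-99 exponent range is just something I picked; any
range wider than the field makes the point.)

    # Same digit budget, but with a separately kept exponent, as in
    # scientific notation.  Even a modest exponent range dwarfs the
    # fixed field above.
    W = 3            # significand digits, as before
    E = 99           # assume the exponent runs from -99 to +99 (my choice)

    field_combinations      = (W + 1) * 10**W    # point confined to the field
    scientific_combinations = (2*E + 1) * 10**W  # one point position per exponent

    print(field_combinations)                 # 4000
    print(scientific_combinations)            # 199000
    print("largest:", 999 * 10.0**E)          # 9.99e+101
    print("smallest nonzero:", 1 * 10.0**-E)  # 1e-99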
Another common mistake I hear is people referring to a number as being
"fixed point" when what's really meant is "integer". True, all integers
can be expressed as fixed-point numbers, but not all fixed-point numbers
can be expressed as integers.
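
In code the distinction is easy to see if you take the common
scaled-integer flavour of fixed point (the scale factor of 100 below is
just an arbitrary choice for illustration):

    # Fixed point as a scaled integer: the stored value is an int, but
    # the quantity it represents has an implied fraction.
    SCALE = 100                    # two implied decimal places (my choice)

    def to_fixed(x):               # e.g. 3.25 -> 325
        return round(x * SCALE)

    def from_fixed(f):             # e.g. 325 -> 3.25
        return f / SCALE

    price = to_fixed(3.25)
    print(price)                   # 325 (stored as an integer...)
    print(from_fixed(price))       # 3.25 (...but it means a non-integer value)
    print(from_fixed(to_fixed(7))) # 7.0 (every integer is also fixed point)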
Please forgive the musing.
Cheers,
Chuck