Chuck Guzis wrote:
Another silly mental ramble.
I'm going to show my age by wondering if calling "scientific notation"
"floating point" is just something we've learned to do without thinking
too hard about it.  But it seems to me that there's a difference.
Suppose I have a numeric field and it's 10 decimal digits wide (think of a
cheap calculator display).  If I position a decimal point anywhere within
the 10-digit field, I can represent some nonzero numbers between
9 999 999 999 and .000 000 000 1 (if my model allows for a sign, I can also
represent the same number of negative values).  Note that the number of
magnitudes I can represent is finite: exactly 10^11, including zero.  (Also
note that there are 11 possible representations of positive zero--one for
each position of the point.)  This is what used to be meant by "floating
point" in the old days--and I suspect that some modern embedded systems
still use a type of this to save on cycles and space.
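For anyone who'd rather check that count than take it on faith, here's a
quick Python sketch (mine, just a brute-force toy, not part of the model
above) that counts the distinct non-negative values of an n-digit field
with the point allowed in any of its n+1 positions.  The closed form works
out to (9n + 10) * 10^(n-1), which happens to be exactly 10^11 at n = 10:

    from fractions import Fraction

    def distinct_values(n):
        """Distinct non-negative values of an n-digit field, point in any of n+1 spots."""
        values = set()
        for mantissa in range(10 ** n):       # every n-digit string, 00...0 through 99...9
            for k in range(n + 1):            # k = number of digits to the right of the point
                values.add(Fraction(mantissa, 10 ** k))
        return len(values)

    for n in range(1, 6):
        print(n, distinct_values(n), (9 * n + 10) * 10 ** (n - 1))
    # the two counts agree: 19, 280, 3700, 46000, 550000, ...
    # at n = 10 the closed form gives 100 * 10^9 = 10^11, as claimed above.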
Scientific notation (where an exponent is kept separately) on the other
hand, can express a much greater number of values because the point is not
restricted to +/-10 digits either side of the units' position. So why do
we persist in calling it "floating point"?
Because the point truly *does* "float".
In your 10 digit calculator example, as numbers get smaller, you
*lose* "resolution" (avoiding terms like "precision").  E.g.,
3.999999999 / 10000000000 = .0000000003 -- nine of the ten
significant digits are simply thrown away.  In a floating point
representation, this would be 3.999999999 (-10) -- no loss of
"resolution" (again, this is really the wrong choice of words
but easier to show).  The point moves TO REMAIN WITH the
significant digits.
(N.B. the number of "different values" that FP can represent is
roughly X times the number of equivalent fixed point values that
can be represented -- where X reflects the range of exponents
supported)
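Here's that division done both ways, sketched with Python's decimal
module (my framing, nothing from any particular calculator): case (a)
forces the result onto the fixed 10^-10 grid of the 10-digit field,
case (b) keeps ten significant digits and lets the exponent move.

    from decimal import Decimal, getcontext, ROUND_DOWN

    getcontext().prec = 10      # ten significant digits, exponent kept separately

    a = Decimal("3.999999999")
    b = Decimal("10000000000")

    # (a) the fixed 10-digit field: every value is a multiple of 10**-10,
    #     so the quotient is truncated onto that grid
    print((a / b).quantize(Decimal("1e-10"), rounding=ROUND_DOWN))   # -> 3E-10

    # (b) the "floating" version: the ten digits stay with the value,
    #     only the exponent changes
    print(a / b)                                                     # -> 3.999999999E-10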
Also, note that in some circles, "scientific notation" is restricted
to specifying exponents that are multiples of 3 (that variant is more
commonly called "engineering notation").
And, "floating point" also deals with issues that can't be
expressed in your calculator *or* scientific notation.
Notably, NaN's, gradual underflow, infinities (as well as
how infinities are treated), etc. Some implementations
support the concept of +0 and -0.
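A few of those extras, poked at with Python floats (IEEE 754 doubles on
any machine you're likely to have handy); nothing here is specific to
Python, it's just a convenient way to watch the behavior:

    import math
    import sys

    inf = float("inf")
    nan = float("nan")

    print(inf + 1, inf - inf)                    # inf nan -- infinities, and how they combine
    print(nan == nan)                            # False   -- a NaN isn't even equal to itself
    print(0.0 == -0.0, math.copysign(1, -0.0))   # True -1.0 -- +0 and -0 exist, but compare equal
    print(sys.float_info.min)                    # ~2.2e-308 -- smallest *normal* double
    print(sys.float_info.min / 2 ** 20)          # still nonzero: gradual underflow into subnormals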
Another common mistake is hearing someone refer to a number as being
"fixed point" when what's really meant is "integer".  True, all integers
can be expressed as fixed-point numbers, but not all fixed-point numbers
can be expressed as integers.
Nor can a particular fixed-point implementation necessarily
represent *any* integers (a purely fractional format, with the
point to the left of every digit, can't represent any integer
other than zero).
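To make that concrete, a toy sketch (the "Q8.8" and "FRAC16" names are
made up for the illustration, not any particular hardware's format): a
16-bit word with 8 fraction bits holds plenty of non-integers and plenty
of integers, while a purely fractional 16-bit word holds no integer but
zero.

    from fractions import Fraction

    Q8_8_STEP   = Fraction(1, 2 ** 8)     # 16-bit word, 8 fraction bits: 0 .. 255.996 in steps of 1/256
    FRAC16_STEP = Fraction(1, 2 ** 16)    # 16-bit purely fractional word: 0 .. 65535/65536

    def representable(x, step, width_bits=16):
        """True if x is a non-negative multiple of `step` whose count fits in `width_bits` bits."""
        q = Fraction(x) / step
        return q.denominator == 1 and 0 <= q.numerator < 2 ** width_bits

    print(representable(Fraction(3, 2), Q8_8_STEP))   # True  -- 1.5: a fixed-point value, not an integer
    print(representable(7, Q8_8_STEP))                # True  -- integers happen to fit this format
    print(representable(7, FRAC16_STEP))              # False -- this format represents no integer but 0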
Please forgive the musing.