Nonetheless, there is a big difference between efficient use of storage and
decimal accuracy. Your last sentence sums it up perfectly: for decimal
numbers, BCD is a more accurate representation.
Bob
At 09:03 PM 4/20/2001 -0700, you wrote:
At 06:58 PM 4/20/01 -0500, Joe Rigdon wrote:
...For
one thing, HPs use BCD arithmetic instead of binary, so they'll be much more
accurate than most other machines unless they use special software.
...
I've heard this claim many times that BCD is more accurate. Am I just
not understanding something?
Unless you are doing financial work where the fractional numbers tend to
be inherently decimal, BCD arithmetic, for a given number of bytes of
storage, is less accurate than binary. As a BCD byte can represent only
100 states vs 256 for binary, you are going to lose more than one bit of
accuracy per byte of storage. Over a typical 13 nibble mantissa, it comes
to more than 8b wasted. Actually, it is worse than that, for a couple of
reasons having to do with normalization of the numbers. Firstly, a binary
representation can scale the mantissa to retain every bit possible,
whereas a BCD representation has a 4b granularity on the shifting, so it
probably wastes two more bits there. Also, IEEE floats have an implied
MSB for normalized numbers, so you get one extra bit there. So now you're
probably up to 11 wasted bits in a double precision (8B) BCD
number. Perhaps you can argue back nearly two bits because the exponent
for a BCD number doesn't need as many bits as for a binary number, since
each count of the exponent shifts the mantissa by roughly four bits. Even
so, overall you are still left with roughly 10 wasted bits.
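(To put a number on the first part of that, here is a quick Python sketch
of the raw information content, using the 13-digit / 52-bit figures above;
the variable names are just mine, not anything from an actual HP format.)

    import math

    bcd_digits = 13                          # a 13-nibble BCD mantissa...
    storage_bits = bcd_digits * 4            # ...occupies 52 bits of storage
    info_bits = bcd_digits * math.log2(10)   # ...but carries only ~43.2 bits of information

    print(storage_bits - info_bits)          # ~8.8 bits wasted, before the
                                             # normalization arguments above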
So, OK, 0.1 (base 10) can't be exactly represented in a binary format, but
0.11111 (base 16) can't be represented exactly in an 8B BCD format either.
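(A quick way to see both halves of that, sketched in Python; the digit
strings in the comments come from the exact binary-double and decimal
conversions, not from any particular BCD machine.)

    from decimal import Decimal

    # 0.1 (base 10) stored as a binary double is not exactly 0.1:
    print(Decimal(0.1))    # 0.1000000000000000055511151231257827...

    # 0.11111 (base 16) is 69905/2**20; written out exactly in decimal it is
    # 0.06666660308837890625, which needs 20 digits -- more than a 13-digit
    # BCD mantissa can hold, so BCD has to round it as well.
    print(Decimal(69905) / Decimal(2**20))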
-----
Jim Battle == frustum(a)pacbell.net