On 18 Oct 2011 at 10:28, David Cantrell wrote:
I wish there were a widely available, widely known general-purpose
programming language which had a "number with precision" type, which
would Do The Right Thing when doing things like multiplying values
together - i.e., decreasing the precision - and would let you specify a
required minimum precision for results or function parameters, with
exceptions being thrown if those can't be achieved. It would be nice
to be able to automate away the problem of "measure with a micrometer,
mark with chalk, cut with an axe".
That should be the first lesson in bonehead numerical methods: an answer
with 20 digits, none of which is significant.
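For illustration, a minimal Python sketch of the kind of "number with
precision" type being wished for might look like the following. The class
name, and the rule that multiplication keeps only the lesser operand's
significant figures, are assumptions for the example, not an existing
library:

    class Measured:
        """A value that carries a count of significant figures (sketch only)."""
        def __init__(self, value, sigfigs):
            self.value = value
            self.sigfigs = sigfigs

        def __mul__(self, other):
            # Multiplying measurements: the result is only as precise
            # as the least precise operand.
            return Measured(self.value * other.value,
                            min(self.sigfigs, other.sigfigs))

        def require(self, min_sigfigs):
            # Throw if the accumulated precision falls below what the caller needs.
            if self.sigfigs < min_sigfigs:
                raise ValueError("result has %d significant figures, %d required"
                                 % (self.sigfigs, min_sigfigs))
            return self

    # Measure with a micrometer (6 figures), mark with chalk (2 figures):
    area = Measured(25.4001, 6) * Measured(3.2, 2)
    area.require(4)   # raises: result has 2 significant figures, 4 required

Twenty digits of output from two digits of input would be caught at the
require() call instead of being reported as if they all meant something.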
Are there any modern hardware platforms that allow the user to
specify the "noise digit" (using an ancient term for lack of a modern
one) in floating-point operations? That is, can the user specify the
bit pattern that is to be used when filling in nonsignificant
positions during the process of normalizing?
It used to be a useful exercise to run the same calculation using, say,
all zeroes as normalizing fill and then to repeat it using all ones
(or zeroes and nines, if you like to count on your fingers).
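A rough way to replay that exercise on modern hardware, which gives you no
control over the fill pattern, is to force the low-order mantissa bits of
each intermediate result to all zeroes and then to all ones and compare the
two runs. The helper below is a hypothetical Python sketch of that idea,
a software stand-in rather than a hardware feature:

    import struct

    def fill_low_bits(x, nbits, fill_ones):
        # Overwrite the low-order nbits of a double's mantissa with all
        # zeroes or all ones - a crude substitute for choosing the
        # normalizing fill pattern.
        (bits,) = struct.unpack("<Q", struct.pack("<d", x))
        mask = (1 << nbits) - 1
        bits = (bits | mask) if fill_ones else (bits & ~mask)
        return struct.unpack("<d", struct.pack("<Q", bits))[0]

    def running_sum(values, nbits, fill_ones):
        total = 0.0
        for v in values:
            total = fill_low_bits(total + v, nbits, fill_ones)
        return total

    data = [0.1] * 10000
    low  = running_sum(data, 24, fill_ones=False)
    high = running_sum(data, 24, fill_ones=True)
    print(low, high, high - low)

The spread between the two sums brackets the answer and shows roughly how
many of the trailing digits are nothing but noise.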
--Chuck