On 18/10/11 5:28 AM, David Cantrell wrote:
> On Mon, Oct 17, 2011 at 09:46:41PM +0100, Tony Duell wrote:
>> "Get a better calculator/computer/floating point co-processor"
>> "Just use 'double precision'"
>> Ah yes... Akin to the 'solution' of throwing ever faster processors at a
>> problem in the hope it'll go away... Of course neither actually solves
>> the underlying problem.
> Yes it does. The underlying problem is almost never "achieve
> perfection" but "get a result accurate enough to let me build this
> bridge and have it not fall down" or "make the code run fast enough".
> The tricky bit, once you've got a good enough approximation for, e.g.,
> sqrt(2), is to bear in mind that the result is an approximation, and
> that if you then perform some other operation on that result with
> another approximation, the errors grow, and grow, and grow, and then
> DOOM.
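A toy illustration of that error growth (mine, not David's), using crude
interval arithmetic in Python: start with sqrt(2) known only to four
decimal places and multiply it by itself a few times.

```python
def mul_interval(a, b):
    """Multiply two intervals (lo, hi) containing the true values."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

# sqrt(2) known only to 4 decimal places:
root2 = (1.4142, 1.4143)

x = root2
for _ in range(5):          # compute root2**6, which should be 8
    x = mul_interval(x, root2)

# x still brackets 8, but the bracket has widened from the original
# 1e-4 uncertainty to several thousandths.
print(x, x[1] - x[0])
```

The interval never lies, but each multiplication widens it; after a
long enough chain the bounds are too loose to be useful.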
> I wish there was a widely available, widely known general purpose
> programming language which had a "number with precision" type, which
> would Do The Right Thing when doing things like multiplying values
> together - ie, decreasing the precision -
Significance arithmetic. Steve Richfield* couldn't get the IEEE-754
reboot committee to consider it, but one can find his interesting Usenet
posts on the topic.
--T
* - iirc, among other things, he did a lot of work on CDC Fortran.
> and would let you specify a
> required minimum precision for results or function parameters, with
> exceptions being thrown if those can't be achieved. It would be nice to
> be able to automate away the problem of "measure with a micrometer, mark
> with chalk, cut with an axe".
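Something along those lines could look like this hypothetical Python
sketch (the class name and API are my invention, not an existing
library): values carry a count of significant digits, multiplication
keeps only the lesser precision, and callers can demand a minimum.

```python
class Imprecise:
    """A value tagged with its number of significant digits."""

    def __init__(self, value, sig_digits):
        self.value = value
        self.sig_digits = sig_digits

    def __mul__(self, other):
        # Significance arithmetic: the product is no more precise
        # than the least precise operand.
        return Imprecise(self.value * other.value,
                         min(self.sig_digits, other.sig_digits))

    def require(self, min_digits):
        # The "exception if it can't be achieved" part.
        if self.sig_digits < min_digits:
            raise ValueError(
                f"only {self.sig_digits} significant digits, "
                f"need {min_digits}")
        return self.value


root2 = Imprecise(1.4142, 5)   # sqrt(2), micrometer-grade
coarse = Imprecise(3.1, 2)     # measured with chalk
area = root2 * coarse
print(area.sig_digits)         # precision decreased to 2

try:
    area.require(4)            # chalk in, no micrometer out
except ValueError as err:
    print(err)
```

Addition would need a different rule (precision set by absolute
uncertainty rather than significant digits), which is part of why
significance arithmetic is harder than it first looks.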