-----Original Message-----
From: cctalk <cctalk-bounces at classiccmp.org> On Behalf Of Jon Elson via cctalk
Sent: 09 January 2019 17:43
To: Paul Koning <paulkoning at comcast.net>; General Discussion: On-Topic and Off-Topic Posts <cctalk at classiccmp.org>
Subject: Re: Teaching Approximations (was Re: Microcode, which is a no-go for
On 01/09/2019 07:49 AM, Paul Koning via cctalk wrote:
Understanding rounding errors is perhaps the most significant part of
"numerical methods", a subdivision of computer science not as widely
known as it should be. I remember learning of a scientist at DEC whose
work was all about this: making the DEC math libraries not only
efficient but accurate to the last bit. Apparently this isn't anywhere
near as common as it should be. And I wonder how many computer models
are used for answering important questions where the answers are
significantly affected by numerical errors. Do the authors of those
models know about these considerations? Maybe. Do the users of those
models know? Probably not.

	paul
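
A quick way to see the sort of error Paul means (my own untested C
sketch, nothing to do with the DEC libraries): summing the same series
forward and backward in single precision gives visibly different
answers, because once the running total is large, most of each small
term gets rounded away.

#include <stdio.h>

/* Sum 1/n for n = 1..N in single precision, forward and backward.
 * Forward: the running sum is already large when the small terms
 * arrive, so their low-order bits are lost.  Backward: the small
 * terms accumulate first, so far less is rounded away. */
int main(void)
{
    const int N = 10000000;
    float fwd = 0.0f, bwd = 0.0f;
    int n;

    for (n = 1; n <= N; n++)
        fwd += 1.0f / (float)n;
    for (n = N; n >= 1; n--)
        bwd += 1.0f / (float)n;

    printf("forward : %.7f\n", fwd);  /* stalls well short of the true sum */
    printf("backward: %.7f\n", bwd);  /* much closer to the double-precision
                                         value of about 16.695 */
    return 0;
}

Neither answer is exact, but the spread between them is the point: how
much error you accumulate depends on how the library or model happens
to order its operations.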
A real problem on the IBM 360 and 370 was their floating point scheme.
They saved exponent bits by making the exponent a power of 16 instead
of 2. This meant that the result of any calculation could end up
normalized with up to 3 most-significant zero bits in the fraction.
That would reduce the precision of the number by up to 3 bits, or a
factor of 8. Some iterative solutions compared small differences in
successive calculations to decide when they had converged sufficiently
to stop. These could either stop early, or run on for a long time
trying to reach convergence.
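
To make the lost bits concrete, here is a rough simulation (my own
sketch, untested, and only an approximation of the real S/360 short
format): it quantizes a double to a 6-hex-digit fraction with a base-16
exponent and prints the relative error. Values whose fraction
normalizes to a leading hex digit of 1 carry about 3 fewer significant
bits than ones with a leading digit of 8-F, and the relative error
shows it.

#include <stdio.h>
#include <math.h>

/* Crude model of S/360 "short" hex floating point: a 24-bit (6 hex
 * digit) fraction in [1/16, 1) times a power of 16, with the excess
 * bits truncated rather than rounded. */
static double hexfloat(double x)
{
    if (x == 0.0)
        return 0.0;

    int e;
    double f = frexp(fabs(x), &e);        /* |x| = f * 2^e, f in [0.5, 1) */

    int h = e / 4;                        /* exponent as a power of 16 */
    if (e > 0 && e % 4 != 0)
        h++;                              /* h = ceil(e / 4) */

    double g = ldexp(f, e - 4 * h);       /* fraction, now in [1/16, 1) */
    double gq = ldexp(floor(ldexp(g, 24)), -24);  /* keep 24 fraction bits */

    return copysign(ldexp(gq, 4 * h), x);
}

int main(void)
{
    /* Just above a power of 16 the fraction starts with hex digit 1
     * (three leading zero bits); just below, with hex digit F. */
    double samples[] = { 16.1, 15.9, 1.0 / 3.0, 2.0 / 3.0 };
    size_t i;

    for (i = 0; i < sizeof samples / sizeof samples[0]; i++) {
        double x = samples[i];
        printf("x = %-10g relative error = %.3g\n",
               x, fabs(x - hexfloat(x)) / x);
    }
    return 0;
}

The effective precision wanders with the leading hex digit, which is
exactly why a fixed convergence tolerance could behave so erratically.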
Early machines had to have the floating point units re-worked, as IBM
found they had to add guard bits to the calculations to get anything
like decent accuracy. In general you needed to use double precision for
decent results. IBM eventually had to offer the IEEE floating point
format on later machines.
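
For a rough sense of scale (figures from memory, so treat them as
approximate):

  S/360 short (hex)   24-bit fraction, 21-24 bits significant    ~6-7 decimal digits
  S/360 long  (hex)   56-bit fraction, 53-56 bits significant    ~15-16 decimal digits
  IEEE 754 single     24 significant bits, rounded               ~7 decimal digits
  IEEE 754 double     53 significant bits, rounded               ~15-16 decimal digits

which is why hex single precision was rarely good enough, and why the
IEEE formats, with a constant number of significant bits and
round-to-nearest, were such an improvement.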
Jon
Dave