Teaching Approximations (was Re: Microcode, which is a no-go for

Jon Elson elson at pico-systems.com
Wed Jan 9 11:43:12 CST 2019


On 01/09/2019 07:49 AM, Paul Koning via cctalk wrote:
>
> Understanding rounding errors is perhaps the most 
> significant part of "numerical methods", a subdivision of 
> computer science not as widely known as it should be. I 
> remember learning of a scientist at DEC whose work was 
> devoted to exactly this: making the DEC math libraries not 
> only efficient but accurate to the last bit. Apparently 
> this isn't anywhere near as common as it should be. And I 
> wonder how many computer models are used for answering 
> important questions where the answers are significantly 
> affected by numerical errors. Do the authors of those 
> models know about these considerations? Maybe. Do the 
> users of those models know? Probably not.
>
> paul
A real problem on the IBM 360 and 370 was their floating 
point scheme.  They saved exponent bits by making the 
exponent a power of 16 instead of a power of 2, so 
normalization shifted the fraction by whole hex digits 
rather than single bits.  As a result, the fraction of a 
normalized result could still begin with up to 3 
most-significant zero bits, reducing the precision of the 
number by up to 3 bits, or a factor of 8.
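A quick way to see the effect is to round values to a model of
the S/360 single-precision format.  Here is a minimal Python
sketch (hexfloat_round is just an illustrative helper, not a
bit-exact S/360 emulation: it ignores the exponent range and
rounds to nearest where the real hardware chopped):

    import math

    def hexfloat_round(x):
        # Round x to a 24-bit fraction f in [1/16, 1) times 16**e,
        # the S/360 single-precision "hex float" layout.
        if x == 0.0:
            return 0.0
        sign = math.copysign(1.0, x)
        x = abs(x)
        e = 0
        while x >= 1.0:            # normalize by whole hex digits
            x /= 16.0
            e += 1
        while x < 1.0 / 16.0:
            x *= 16.0
            e -= 1
        f = round(x * 2**24) / 2**24   # keep 24 fraction bits (6 hex digits)
        return sign * f * 16.0**e

    ulp = 2.0 ** -20        # spacing of representable values in [1, 16)
    for v in (1.0, 8.0):    # leading hex digit 1 vs. 8
        x = v + 0.4 * ulp   # a value that falls between representable points
        err = abs(hexfloat_round(x) - x) / x
        print("near %4.1f: relative rounding error = %.2e" % (v, err))

Near 1.0 the fraction's leading hex digit is 1 (binary 0001), so
three of its 24 bits are leading zeros and the relative error
comes out around 4e-7; near 8.0 the leading digit is 8 (binary
1000), all 24 bits carry information, and the error is 8 times
smaller.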
Some iterative solutions compared the small difference between 
successive iterates to decide when they had converged 
sufficiently to stop.  With this wobbling precision, such 
tests could either fire early or run on for a long time trying 
to reach a tolerance the arithmetic couldn't deliver.
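To make that concrete, here is a small sketch of a Newton
square-root loop with a difference-based stopping test
(quantize21 is a hypothetical helper that keeps 21 significant
bits, the worst case above -- a crude model, not real S/360
arithmetic):

    import math

    def quantize21(x):
        # Keep 21 significant bits: a stand-in for the worst case
        # of the precision wobble described above.
        if x == 0.0:
            return 0.0
        e = math.floor(math.log2(abs(x))) + 1
        return round(x * 2.0 ** (21 - e)) / 2.0 ** (21 - e)

    def newton_sqrt(a, tol, max_iter=50):
        # Newton's method for sqrt(a); stop when successive
        # iterates differ by less than tol.
        x = a
        for i in range(1, max_iter + 1):
            x_new = quantize21(0.5 * (x + a / x))
            if abs(x_new - x) < tol:
                return x_new, i
            x = x_new
        return x, max_iter

    root, iters = newton_sqrt(2.0, tol=1e-9)
    print("iterations: %d, actual error: %.1e"
          % (iters, abs(root - math.sqrt(2.0))))

The difference test fires here with tol=1e-9 (two successive
iterates happen to quantize to the same value), but the result
is only good to about 4e-7 -- the arithmetic's real precision,
not the requested one.  With slightly different rounding
behavior the iterates instead oscillate by one ulp forever and
the loop runs until max_iter.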

IBM eventually had to offer IEEE 754 floating point alongside 
the hex format on later machines (starting with the S/390 G5 
in 1998).

Jon

