On May 21, 2016, at 7:34 PM, ben <bfranchuk at
jetnet.ab.ca> wrote:
[4] Say, a C compiler for an 8088. How big is a pointer? How big of an
object can you point to? How much code is involved with "p++"?
How come INTEL thought that 64 KB segments were ample? I guess they only
used FLOATING point on the large time-shared machines.
Because the 808x was a 16-bit processor with 1MB physical addressing. I
would argue that for the time the 808x was brilliant, in that most other
16-bit micros only allowed for 64KB physical. If people wanted more they
had to add external hardware, and the calling linkage became problematic
(I know because that's what we did on the IBM S/23 Datamaster, which used
an 8085 and allowed for 192KB of ROM and 128KB of RAM).
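To put the segmentation in concrete terms, here is a rough sketch (the
helper name is just illustrative, not from any real compiler): in real
mode a physical address is segment * 16 + offset, so a "far" pointer is a
16-bit segment plus a 16-bit offset, each segment covers at most 64KB,
and a plain 16-bit offset wraps within its segment, which is why "p++"
across anything bigger than 64KB needed extra compiler-generated code to
normalize the segment and offset.

    #include <stdio.h>
    #include <stdint.h>

    /* 8086/8088 real-mode addressing: physical = segment * 16 + offset,
       giving 20 bits (1MB) of address space in 64KB segments. */
    static uint32_t phys_addr(uint16_t segment, uint16_t offset)
    {
        return ((uint32_t)segment << 4) + offset;
    }

    int main(void)
    {
        /* Two different segment:offset pairs can name the same byte. */
        printf("0x%05lX\n", (unsigned long)phys_addr(0x1234, 0x0010)); /* 0x12350 */
        printf("0x%05lX\n", (unsigned long)phys_addr(0x1235, 0x0000)); /* 0x12350 */

        /* A bare 16-bit offset wraps within its 64KB segment, so a
           pointer increment that should cross a segment boundary needs
           the compiler to also adjust the segment part. */
        uint16_t off = 0xFFFF;
        off++;                      /* wraps to 0x0000, same segment */
        printf("0x%04X\n", off);
        return 0;
    }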
Floating point was not common in micros at the time because of the number
of transistors/gates necessary for the implementation. Intel added it as
a "coprocessor" in the 8087. When I was at IBM we continually railed on
Intel to make floating point standard so that we could write code that
assumed floating point was always present. It finally happened with the
80486, but then Intel took it away again (sort of) with the 486-SX, which
was brilliant marketing by Intel: initially it allowed them to sell
"floor swept" 486s with non-functional floating point units, and as their
process improved, more often than not 486-SX systems that had the
floating point coprocessor actually had two fully functional 486
processors!
TTFN - Guy