On 14 Nov 2003 at 5:45, Eric Smith wrote:
> Hans wrote:
> > to me it's the way the memory is handled
> > that makes the 8086 the great CPU it is
> I wrote:
> > What, the 64K segments that alias on paragraph boundaries?
> > Yecch! What a kludge! The PDP-11 had better memory management
> > for a 64KB address space at least seven years earlier.
> Hans wrote:
> > Excuse me? The PDP-11 is a classic example of an external
> > MMU, completely invisible to a user task. Nice if you just
> > want to run old software that expects a 28K address space.
> > But without expensive OS calls, and the MMU's equivalent of
> > bank switching, it just allows access to said 28K (well,
> > 32K since the I/O may not have been mapped in).
> Exactly. That's what's nice about it. You can map eight
> arbitrary 8K byte areas into the process address space without
> the process even needing to know about it. (Sixteen if you have
> separate I&D.) If the code needs access to more than that, it
> can make requests to the OS.
So the MMU is still external, like it could be to _any_ CPU,
and not really an architectural feature.
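
To make that concrete, here is a minimal sketch in plain C of the
mapping idea (a model only, not real PDP-11 code - the real page
address registers work in 64-byte units and carry length and
protection bits, all of which is left out here):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 0x2000u              /* 8 KB per page                */
#define NUM_PAGES 8                    /* 8 pages = 64 KB of user VA   */

/* per-process "active page registers": physical base of each page,
 * set up by the OS; the process itself never sees these values */
static uint32_t apr[NUM_PAGES];

/* translate a 16-bit virtual address to a physical address */
static uint32_t translate(uint16_t va)
{
    unsigned page   = va / PAGE_SIZE;  /* top 3 bits select the page   */
    unsigned offset = va % PAGE_SIZE;  /* low 13 bits are the offset   */
    return apr[page] + offset;
}

int main(void)
{
    /* the OS maps eight arbitrary, scattered 8 KB physical blocks */
    for (unsigned p = 0; p < NUM_PAGES; p++)
        apr[p] = 0x40000u + p * 0x10000u;

    printf("VA 0x2010 -> PA 0x%05X\n", (unsigned)translate(0x2010));
    return 0;
}
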
> With the 8086, if the code wants to address ANY discontiguous
> regions of memory as data space, it HAS to deal with those damn
> segment registers.
Depends what you mean with 'dealing'. If it's loading them with
the proper handle, then yes - think of it as a hardware shortcut
to a set of OS calls. For example, if you have two data blocks
and want to copy data between them, you may either map them in
one after the other via an OS call and do a double move, or map
both in (via OS calls) and then copy ... of course after
recalculating the addresses, since the blocks have now been moved
within the address space ... of course you may always use some
kind of base-and-offset calculation to get an effective address
... all in additional instructions to be programmed - just to get
the same result?
Aw, come on.
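
To illustrate the 'hardware shortcut' idea, here is a small sketch in
portable C that models the 1 MB real-mode space as an array and treats
the segment values purely as handles; the segment values and block
size below are made up for the example:

#include <stdint.h>
#include <string.h>

static uint8_t memory[1 << 20];        /* model of the 1 MB real-mode space */

/* real-mode address formation: segment * 16 + offset
 * (wrap-around above 1 MB is not modelled here) */
static uint8_t *ptr(uint16_t seg, uint16_t off)
{
    return &memory[((uint32_t)seg << 4) + off];
}

/* copy between two non-overlapping blocks identified only by their
 * segment "handles" - no remapping call needed in between */
static void copy_block(uint16_t dst_seg, uint16_t src_seg, uint16_t len)
{
    memcpy(ptr(dst_seg, 0), ptr(src_seg, 0), len);
}

int main(void)
{
    uint16_t src = 0x2000, dst = 0x3000;   /* two hypothetical handles */
    ptr(src, 0)[0] = 42;
    copy_block(dst, src, 0x100);
    return ptr(dst, 0)[0] == 42 ? 0 : 1;
}
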
> > (And BTW, IBM had it before :)
> Huh?
/360 with virtual extension - don't ask me about the model;
the similar Siemens version was the 4004-220.
> > Now, still being a strict 16-bit CPU, this little trick
> > allows a 16-bit user process to access up to 65K of 65K
> > segments.
> Not on the 8086, it doesn't. If you want independent 64K segments,
> you get a maximum of 16 of them. It wasn't until the 286 that
> they finally introduced proper segmentation.
Now you're mixing up design and implementation. The design of
segments operates with 64K possible segments of 64K max
size each, while the implementation on the 8086 only
used a 20-bit address bus and thus only allowed 16 independent
_maxed_out_ segments.
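
Spelled out, the arithmetic behind those 16 segments (a small,
purely illustrative C check):

#include <stdio.h>

int main(void)
{
    unsigned phys = 1u << 20, segsize = 1u << 16;   /* 1 MB bus, 64 KB segment */
    printf("independent maxed-out segments: %u\n", phys / segsize);  /* 16 */

    /* their segment values step by 0x1000 (64 KB / 16-byte paragraphs) */
    for (unsigned s = 0; s < phys / segsize; s++)
        printf("segment 0x%04X -> linear 0x%05X\n", s * 0x1000u, s * 0x10000u);
    return 0;
}
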
> > In fact, memory management on classic Macs was similar,
> Not really. The Mac has a single flat address space. The software
> chooses to carve it up into blocks called segments, but that's
> similar to the x86 segments in name only.
This is why I said Mac, and not 68k: it's the Mac memory
management that uses so-called 'handles' to refer to
relocatable blocks in the heap. While they are in fact just
a pointer to a pointer, no Mac program should (or indeed can)
assume anything from just looking at the handle's contents.
And that is exactly the view a program should maintain of
segment values.
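
A rough sketch of the handle idea in C - this is only a model, not the
real Mac Toolbox API, and all names are made up:

#include <stdlib.h>
#include <string.h>

typedef void **Handle;                 /* a pointer to a master pointer */

/* hypothetical allocator: returns a handle to a relocatable block */
static Handle new_handle(size_t size)
{
    Handle h = malloc(sizeof *h);
    *h = calloc(1, size);
    return h;
}

/* what heap compaction might do: move the block and fix the master
 * pointer - the handle value itself never changes */
static void relocate(Handle h, size_t size)
{
    void *moved = malloc(size);
    memcpy(moved, *h, size);
    free(*h);
    *h = moved;
}

int main(void)
{
    Handle h = new_handle(16);
    strcpy((char *)*h, "hello");       /* dereference at the point of use */
    relocate(h, 16);
    int ok = strcmp((char *)*h, "hello") == 0;  /* still valid via the handle */
    free(*h);
    free(h);
    return ok ? 0 : 1;
}
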
> > Basically that's nothing else than using software to do
> > exactly what the 8086 does in hardware.
> No, there's not even any *similarity* between how the Mac Memory
> Manager works and 8086 segmentation.
See above. Of course, if one always looks at the exact
hardware implementation, and if one always does the
stupid segment x 16 + offset calculation, one will never
get the abstract view.
> > The value of the segment has no meaning - why do people
> > always waste their time in calculating 'real' addresses?
> > I don't care for that on a /370, nor on a PDP-11, so why
> > should I do so on my PC?
> Because sometimes you actually need to know whether two pointers
> point to the same object, even when those pointers may have come
> from different software that plays fast and loose with the segmentation
> rules, which was not uncommon on the 8086.
In an operating system made to fit the 8086, this is an extremely
rare case, needed only within memory management (*1); 99.999% of
all programmers will never have to think about it.
I did a whole lot of DOS and Windows programming, 100% assembly,
and have never ever needed to compare pointers (*2). So, if these
pointer compares were not uncommon, I may have lived in a different
world... an island of happiness (Munich is special, but not that
special :)
Regards
H.
*1 I once wrote a little real-time system under DOS (in fact
it was the other way around: I just used DOS as a loader, then
took over and let DOS run as a task within my system :). Memory
management was the core of it, and the segments were the key.
It was simple, elegant and extremely fast - basically it was
a little application and dial-in server for a mainframe system.
I was able to handle 64 lines at 19,200 Bd, 4 mainframe connections
and a bunch of printer queues simultaneously on an 8 MHz 286 (it
was also a real-mode application, so only 8086 code; none of
the 286 features were used, since the initial development
was done on a 186 machine) and it still sat around idle most
of the time. On a 12 MHz 386 I could even run the app in 100%
trace mode via the integrated self-trace (some of the features
of the 386 are extremely nice) - I was looking for a memory problem
where I suspected a segment problem (yep, even I distrusted them),
but when tracing itself and monitoring each segment-related
operation, the problem did not happen, so I let the production
machines just trace themselves all the time. After weeks of
additional investigation (unofficial, since the problem was closed)
I found out that the timing for a piece of third-party hardware
just randomly failed at 12 MHz ISA bus speed :)
*2 The rule is quite simple: if the segment (aka handle) is not
the same, the pointers are not equal.
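
To spell out both sides with a small, purely hypothetical C sketch:
the simple rule from (*2) works under the discipline that every object
keeps the segment value it was handed; code that invents aliased
seg:off pairs (Eric's "fast and loose" case) instead forces a compare
of the normalized 20-bit linear addresses:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct farptr { uint16_t seg, off; };          /* segment:offset pair */

/* the simple rule: same handle and same offset, nothing else */
static bool equal_disciplined(struct farptr a, struct farptr b)
{
    return a.seg == b.seg && a.off == b.off;
}

/* what aliasing forces on you: compare the linear addresses */
static bool equal_normalized(struct farptr a, struct farptr b)
{
    uint32_t la = ((uint32_t)a.seg << 4) + a.off;
    uint32_t lb = ((uint32_t)b.seg << 4) + b.off;
    return la == lb;
}

int main(void)
{
    struct farptr p = { 0x1234, 0x0005 };
    struct farptr q = { 0x1230, 0x0045 };      /* aliased: same physical byte */

    printf("disciplined: %d, normalized: %d\n",
           equal_disciplined(p, q), equal_normalized(p, q));   /* 0, 1 */
    return 0;
}
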
--
VCF Europa 5.0 on 1-2 May 2004 in Munich
http://www.vcfe.org/