On 14 Nov 2003 12:08, Tom Jennings wrote:
> On Fri, 2003-11-14 at 06:14, Hans Franke wrote:
> > > and that's about it. Don't you remember address normalization and
> > > comparison woes?
> >
> > That's rather a fault of Microsoft than Intel.
> Excuse me if I sound rude,
Naa, I'm German, it takes a lot before I'd consider a response rude :)
> but we're either misunderstanding each other,
> or you don't understand how Intel 8086 segmentation works.
Maybe I don't, maybe I just survived 20+ years of x86 programming
(including doing a BIU design for an embedded 8086 core) by completely
missing the point. So I'd ask you from the deepest of my heart:
don't tell it to the programs I wrote, they may just stop working,
and I'll get sued to the last penny :)
> Address normalization is a hardware problem, not software (though
> I'll gladly blame Microsoft on general principles). Here's a good
> description:
That site I have to bookmark, it's a perfect example of how
to torture a CPU.

As I said before, I can't see why someone would want to interfere
with the way the CPU hardware generates a memory address, except
of course for the memory manager.
Let's just take a step back: the 8086 is a 16-bit CPU, 16 bit,
nothing else. The address size, as seen from a program, is 16 bit.
Thus it is able to address a continuous memory of 64K, not a byte
more. If you need more than 64K, request an additional segment.
The stupidity of memory models comes from assuming a certain
behaviour of the CPU AT USER PROGRAM LEVEL - BTW: that's
exactly the reason for the ever doomed A20 gate: assuming that
segments are mapped onto each other, and that accessing
an address of FFFF:0010 or greater results in addressing 0000:xxxx.
What is so hard in seeing the segments as independent from
real memory? Who cares (on user level) at what real address
your code or data resides? You don't under any VM system,
so why should you under an 8086? All a user process
has to care about is its _own_ address space, which is given
as a set of segment descriptors.
> > > Memory allocation schemes? Small/Medium/Large Model
> > > compiler "options"?
> >
> > Now, that is something to blame on the compiler
> > developers. True, it was initiated by Intel, but I can't blame the
> > CPU or the CPU designers for that. In fact, I never
> > understood what these 'models' are good for, since they
> > just define special cases within the only model the
> > CPU knows.
> Sorry, again incorrect, and if you figure out what segmentation means
> in Intelland, small/medium/large/huge models make unfortunate sense.
Now you're starting to get rude :)

Seriously, there's only one model, and that's segment:offset.
I couldn't find, for example, any hardware for a 'small' model
in all the time I worked with an 8086. The so-called small
model is just a special case where an application owns exactly
one segment and CS=DS=ES is assumed (since all are loaded with
the same value). Show me where there's special hardware to
support that, or enforce it. You can't find it? Cool, so do
you still want to argue that it's a hardware feature? Or isn't
it rather something made up by compiler designers to simplify
their work?
Regards
H.
--
VCF Europa 5.0 on 1/2 May 2004 in Munich
http://www.vcfe.org/