It was thus said that the Great Jecel Assumpcao Jr. via cctalk once stated:
Sean Conner via cctalk wrote on Mon, 10 Apr 2017 17:39:57 -0400:
What about C made it difficult for the 432 to
run?
-spc (Curious here, as some aspects of the 432 made their way to the 286
and we all know what happened to that architecture ... )
C expects memory addresses to look like integers and for it to be easy
to convert between the two. If your architecture uses a pair of numbers
or an even more complicated scheme then you won't be able to have a
proper C but only one or more less than satisfactory approximations.
Just because a ton of C code was written with that assumption doesn't make
it actually true. A lot of C code assumes a byte-addressable, two's
complement architecture, but C (technically Standard C) doesn't require
either and goes out of its way to warn programmers *not* to make such
assumptions.
The C Standard is very careful to note what is and isn't allowed with
respect to memory, and much of what real code does is technically undefined
behavior, meaning anything can happen.
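For example (a minimal sketch, assuming a C99 hosted implementation that
provides the optional uintptr_t type): you can round-trip a pointer through
an integer, but the integer value you get is implementation-defined and
doing arithmetic on it means nothing portable.

  #include <stdint.h> /* uintptr_t is optional -- an implementation may omit it */
  #include <stdio.h>

  int main(void)
  {
    int        x = 5;
    uintptr_t  n = (uintptr_t)(void *)&x; /* value is implementation-defined */
    int       *p = (int *)n;              /* converting back is OK ...       */

    printf("%d\n",*p);                    /* ... so this prints 5            */
    return 0;
  }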
The iAPX432 and 286 used logical segments. So there is
no sequence of
increment or decrement operations that will get you from a byte in one
segment to a byte in another segment. For the 8086 that is sometimes
true but can be false if the "segments" (they should really be called
relocation registers instead) overlap.
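(For reference, a real-mode 8086 forms a physical address as segment * 16 +
offset, so two different segment:offset pairs can name the same byte -- a
quick sketch of the arithmetic, nothing more:)

  #include <stdio.h>

  /* 8086 real mode: physical address = segment * 16 + offset */
  static unsigned long phys(unsigned seg,unsigned off)
  {
    return (unsigned long)seg * 16 + off;
  }

  int main(void)
  {
    printf("%05lX\n",phys(0x1000,0x0010)); /* 10010            */
    printf("%05lX\n",phys(0x1001,0x0000)); /* 10010, same byte */
    return 0;
  }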
Given:
p1 = malloc(10);
p2 = malloc(65536);
There is no legal way to increment *or* decrement one to get to the other.
It's not even guaranteed that p2 > p1.
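For instance, this compiles everywhere, but the Standard says nothing about
what the relational compare yields (a sketch, not code from any real
program):

  #include <stdlib.h>
  #include <stdio.h>

  int main(void)
  {
    char *p1 = malloc(10);
    char *p2 = malloc(65536);

    if (p2 > p1) /* undefined -- p1 and p2 point into different objects */
      puts("p2 looks 'higher', but don't rely on it");

    free(p1);
    free(p2);
    return 0;
  }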
Another feature of C is that it doesn't take types
too seriously when
dealing with pointers. This means that a pointer to an integer array and
a pointer to a function can be mixed up in some ways.
This is an issue, but mostly with K&R C (which had even less type checking
than ANSI C). These days a compiler will warn if you try to pass a function
pointer where a data pointer is expected, even with *no* cranking of the
warning levels.
Yes, C has issues, but please don't make up new ones for modern C.
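For example, with a recent gcc or clang at their default warning levels I'd
expect a diagnostic on something like this (a minimal sketch):

  #include <stdio.h>

  int main(void)
  {
    void *p = main; /* function pointer assigned to a data pointer --
                       expect a warning (or, with newer gcc, an error) */
    printf("%p\n",p);
    return 0;
  }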
But if the point was, back in the day (1982), that this *was* an issue,
then yes, I would agree (to a point). But I would bet that had the 432 been
successful, a C compiler would have been produced for it.
If an application
has been written like that, then the best way to run it on architectures
like these Intel ones is to set all segments to the same
memory region and never change them during execution. This is sometimes
called the "tiny memory model".
https://en.wikipedia.org/wiki/Intel_Memory_Model
Most applications keep function pointers separate from other kinds of
pointers and in this case you can set the code segment to a different
area than the data and stack for a total of 128KB of memory (compared to
just 64KB for the tiny memory model).
The table in the page I indicated shows options that can use even more
memory, but that requires non-standard C stuff like "far pointers" and I
don't consider the result to be actually C, since you can't move
programs to and from machines like the VAX or 68000 without rewriting
them.
"Far" pointers exist for MS-DOS to support mixed memory-model programming,
where library A wants objects larger than 64K while library B doesn't care
either way. Yes, it's a mess, but that's pragmatism for you.
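For the curious, this is roughly what the MS-DOS compilers (Borland,
Microsoft) accepted -- none of it is Standard C, and none of it will compile
today without a 16-bit toolchain:

  char far  *video = (char far *)0xB8000000L; /* segment:offset B800:0000, the CGA text buffer */
  char near *local;                           /* 16-bit offset into the default data segment   */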
But there's still code out there with such remnants, like zlib. For
example:
ZEXTERN int ZEXPORT inflateBackInit OF((z_stream FAR *strm, int windowBits,
unsigned char FAR *window));
ZEXTERN, ZEXPORT, OF and FAR exist to support different C compilers over the
ages. And of those, ZEXTERN and ZEXPORT are for Windows, FAR for MS-DOS (see a
pattern here?) and OF for pre-ANSI C compilers.
-spc