On Thu, Feb 19, 2015 at 10:11:32AM -0800, Chuck Guzis wrote:
On 02/19/2015 06:24 AM, Peter Corlett wrote:
C will run on all sorts of bizarre machines, but somebody has to bother to
implement it, and if the architecture is weird enough that the language has to
be contorted in unexpected ways, it will break assumptions made in typical C
code. ISTR that current versions of the standard assume a binary machine that
provides particular word widths, but earlier versions give much more
flexibility.
Do let me know when you've got C for an IBM 1620. SIMH has a pretty good
emulator for that machine.
I have no particular interest in the IBM 1620, so don't plan to spend any
effort on finding or creating a C compiler for it.
That modern compilers don't support obsolete machines isn't a surprise. I
can't find a decent modern C compiler that targets m68k, for example, even
though that architecture is still just about clinging on to life.
Again, one needs to ask "why are they considered obsolete now and not then?"
For example, if IBM could have simplified the 7000-series machines to a
single 7090-type architecture, they could have saved money by not
implementing the 7070, 7080, etc.
The 7000-series is obsolete because it consists of several incompatible
architectures, splitting the software market, and then IBM rendered it obsolete
anyway with S/360.
There's probably also something odd about the design that made it unsuited for
high-level languages, but I can't quickly find a good description of its ISA
and don't feel like downloading half of Bitsavers.
C is a great high-level assembly language for a certain class of
architectures, I will admit.
It's bloody awful for all of them :)
The problem with C (and to a lesser extent C++) is the lack of typing by
usage. Does an int hold a character, a boolean value, an index, a bit sequence,
or what? You can alleviate this to some extent with typedefs, but that practice
doesn't seem to be all that prevalent. Indeed, one indicator of the problem is
the "nUxi" byte-order bug that early developers hit when porting that
particular OS's code.
If I have to write C at all, I prefer to write the useful subset of C++ so I
can get better type abstraction. C++11 has "strongly typed enums" which are
handy but also slightly half-jobbed in their design and can be infuriating to
use. This makes them fit in just fine with the rest of C++.
C does well with character addressing, particularly if a word/int is an
integral multiple of characters in length. But not so well with
bit-addressing, even though bit-addressable architectures can be very useful
(as in vector machines).
C does support bitfields, but you can't take the address of one. That's a
fairly uncommon requirement though, so algorithms that need it have to roll
their own using mask and shift operations. Given that x86 doesn't have
bitfield instructions and has to fake them with mask and shift anyway, this is
no great loss in practice. It wouldn't surprise me if there were a
bitfield_ptr<> in Boost which did this.