< But I've been programming under multiple platforms in C for 10 years now.
<I've learned a thing or two about writing portable code. It's surprising
<how little effort it actually takes to write portable code, but it does take
<a different mindset that most programmers (in my experience) don't have.
For the last 15-20 years I've been trying to solve portability. Pascal
and C were clear winners over BASIC always. Keep in mind the platforms
I was trying to hit: CP/M {z80}, RT-11 and VAX/VMS. For those, careful use
of C (pay attention to what char, byte, long, double and float were) was
portable up to the limit of doing direct IO, which is never portable. Pascal
never failed to be portable as all objects were the same.
< Careful, strings in C are *character* based (what's this ``byte'' you keep
<talking about? 8-) This is one area where programmers don't quite grasp
<portability issues. While it's true that characters in C must be at least 8
<bits in size, that doesn't mean they *must* be 8 bits in size; an
<implementation of C that uses Unicode natively could set the size of a
<character to 16 bits (and there is also the issue of whether the character
<is signed or unsigned---a plain char declaration is unspecified---it's
<implementation dependent whether a char is signed or unsigned).
This comes out of the tradition of C and Unix, and there char is
typically 7-9 bits and is really an unsigned BYTE. The 8/16/32 world
really forced a byte to conform to 8 bits.
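The point about char size and signedness above can be checked rather than assumed: a minimal sketch, using only what <limits.h> guarantees (the function names here are my own, not from the thread).

```c
/* Query the implementation's char properties from <limits.h>
   instead of assuming an 8-bit unsigned byte. */
#include <limits.h>

/* CHAR_BIT is guaranteed to be at least 8, but may be larger. */
int char_bits(void)
{
    return CHAR_BIT;
}

/* Plain char may be signed or unsigned; CHAR_MIN tells you which:
   a negative CHAR_MIN means plain char is signed on this target. */
int char_is_signed(void)
{
    return CHAR_MIN < 0;
}
```

Code written against these queries ports to a 16-bit-char Unicode implementation; code that hard-wires 8 and unsigned does not.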
<Intel 386 class takes a penalty if you execute 16-bit instructions in a
<32-bit segment (or vice-versa). So, to move a counted string, you have:
Gee, and HLLs were supposed to hide this... ;)
You should see what Z80 code looks like from a C compiler; without the
ability to do pointer indirection (the PDP-11 does index deferred) the
Z80 compiler produces a lot of horrid code that resembles some of the
RISC machines but is nowhere near as efficient. Yet hand-coded Z80 can be
efficient and take very few instructions.
< The ANSI C spec states that the Standard C functions can be understood by
<the compiler and treated specially. At least in the 386 line, most str*()
<and mem*() functions compile to inline code and avoid the function call
<overhead (a friend of mine actually triggered a bug in GCC using nested
<strcpy() calls).
<
<> Control of hardware? My memory may be fading, but I can not
<> see any reference to hardware control in my K&R copy. All
<> hardware dependent stuff is proprietary to the compiler you
<> are using. And that's the same way as for example in PASCAL
<
< True---but it depends upon how the hardware is hooked up to the CPU---is
<it memory mapped I/O or I/O mapped I/O? If the former, you just declare a
<pointer to the memory location (mapped to the appropriate size) and go. If
<it's I/O mapped I/O there is probably a wrapper function that the compiler
<knows about and can inline.
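The memory-mapped case above can be sketched like this; the RX_READY bit and the register layout are hypothetical, stand-ins for whatever a real board's manual specifies.

```c
#include <stdint.h>

#define RX_READY 0x01u   /* hypothetical status bit: byte available */

/* Busy-wait until the device signals a byte, then read it.
   The registers are passed in as pointers so the same code works
   whether they live at a fixed address or are found at runtime. */
uint8_t read_device_byte(volatile uint8_t *status,
                         volatile uint8_t *data)
{
    while ((*status & RX_READY) == 0)
        ;            /* volatile forces a real re-read each pass */
    return *data;
}

/* On a real board you'd instantiate the pointers from the manual,
   e.g. (addresses made up):
   read_device_byte((volatile uint8_t *)0x40001000u,
                    (volatile uint8_t *)0x40001001u);            */
```

The `volatile` qualifier is the one piece that matters: without it a compiler may hoist the status read out of the loop and spin forever.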
K&R C assumes I/O is part of the OS and the language interacts
with the OS for it. That is a useful concept for nonsystem programming or
end applications, but totally meaningless for driving an A/D card. However,
the right way to do that is to package and isolate the IO so you can use
standard C conventions to interact with the device. This keeps the mainline
code portable.
< But C (the actual language) never defined built-in IO functions, leaving
<I/O to subroutines (or functions). WRITELN is a language element of Pascal
<but printf() is just a function. Depending upon your view, that is either a
<good thing or a bad thing (I think the lack of I/O statements in C is an
<elegant solution myself).
Depends. For an application that does standard IO to user and filesystem,
Pascal or C {stdio} works fine. If you're doing mixed IO to an A/D card and
presenting results to the user, C looks nicer in code but Pascal handles that
effectively. Both fail when the IO is speed critical, say the non-DMA
floppy of a 200MHz or slower PC, or when we are down in the mud of a Z80 or
8051.
<> Things like messing up the whole program by one wrong ; or }
<> (something impossible in Assembly) or easily producing memory
<> leaks (hard to do in other HLLs).
That's simple syntax and rules checking. C allows the nasty conversion
of datatypes. I've been burned by long ints and ints in pointers to
whatever.
< Depends upon what you're used to. Pascal uses those pesky semicolons as
<well, along with those annoying BEGIN and END statements. Assembly, on the
<other hand, is fairly structured and tends to avoid the cascade of errors
<compilers are prone to (although Microsoft's MASM is also prone to cascade
<errors).
I don't know of a language where that doesn't exist. I've been smacked
around by gross stupidity in ASM, MACRO, PAL, C, PASCAL, BASIC and yes even
FORTRAN. All of them will go off the end of a structure that is pointed
to and hurt something sacred if you care to.
< You're not using C then. While it's possible to do:
<
<   char *pd = destpointer;
<   char *ps = srcpointer;
<   size_t i;
<
<   for (i = 0 ; i < sizeof(somestruct) ; i++)
<     *pd++ = *ps++;
On some systems this produces different code (usually bigger);
on say a Z80 this will produce discrete code that is a monster.
< That's going about things the hard way. Why not:
<
< memcpy(destpointer,srcpointer,sizeof(somestruct));
Than this. The Z80 library has enough smarts to use the LDIR/LDDR
instructions, which are fast and efficient.
< Or even:
<
< *destpointer = *srcpointer;
Unpredictable how the compiler will do it, even if it works.
This can be horrendous!
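For reference, here are the three copy styles argued over above, side by side. All are legal C; which one compiles best varies by target, which is the whole point of the exchange. `struct rec` is a made-up example type.

```c
#include <string.h>

struct rec { int id; char name[16]; };

/* Byte loop: portable everywhere, but often bulky object code. */
void copy_by_loop(struct rec *dst, const struct rec *src)
{
    char       *pd = (char *)dst;
    const char *ps = (const char *)src;
    size_t      i;
    for (i = 0; i < sizeof(struct rec); i++)
        *pd++ = *ps++;
}

/* Library call: a good library can use LDIR, rep movsd, etc. */
void copy_by_memcpy(struct rec *dst, const struct rec *src)
{
    memcpy(dst, src, sizeof *dst);
}

/* Struct assignment: the compiler picks the strategy itself. */
void copy_by_assign(struct rec *dst, const struct rec *src)
{
    *dst = *src;
}
```

All three produce the same result; only the generated code differs.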
< One thing---I can't write Assembly on linus.slab.conman.org (an AMD 586)
<and have it run on tweedledum.slab.conman.org (68040). C at least lets me
<write code that will run on both machines.
As would pascal, ADA, basic{maybe}, fortran and heaven help me COBOL.
I find that as the machine gets smaller and resources are less prodigious
the progression is BIG HLL--> smaller language--> ASM, like Pascal, C and
then ASM.
Allison