On 2024-08-16 8:56 a.m., Peter Corlett via cctalk wrote:
On Thu, Aug 15, 2024 at 01:41:20PM -0600, ben via cctalk wrote:
[...]
I don't know about the VAX, but my gripe is that the x86 and the 68000
don't automatically promote smaller data types to larger ones. What little
programming I have done was in C, and I never cared about that detail. Now
I can see why it is hard to generate good code in C when all the CPUs are
brain-dead in that aspect.
This makes them a perfect match for a brain-dead language. But what does it
even *mean* to "automatically promote smaller data types to larger ones"?
That's a rhetorical question, because your answer will probably disagree
with what the C standard actually says :)
I have yet to read a standard; I can never find, or afford, the
documentation.
I used Microsoft C for DOS, with the standard memory model and an 8088
CPU. C was for the most part 16-bit code, with a long here and there.
I use Pelles C for Windows, version 8, since Windows dropped 32-bit
programs.
As a hobby project, I am building a CPU of some size, 24 bits or less.
I tried an FPGA card for the last decade, but the internal routing
kept screwing up. Now that we can get cheap PCBs from China, I had a
2901 bit-slice machine almost working. I can read/write from the front
panel, but programs don't work. Software emulation in C under Windows
works, but only as prototype code. I picked up a cheap 68K board; since
it has no MMU and just static RAM, I can use it to emulate my hardware
design. Now I need to get a cross assembler and a C compiler for the 68K.
When I get the C emulator code working, I can later write a faster
version in assembler. When I started this project, any software I
could need would be written in the Small-C subset of C, or by revising
a 16-bit C compiler's source code.
Now, what kind of badly-written code and/or braindead programming language
would go out of its way to be inefficient and use 32-bit arithmetic instead
of the native register width?
The problem is that the native register width keeps changing with every
CPU. C was a quick and dirty language for the PDP-11, with 16-bit ints.
They never planned that UNIX or C or the hardware would change like it
did, so one gets a patched version of C. That reminds me: I use gets()
and have to get an older version of C.
I'm sure you can "C" where I'm going here. `int` is extremely special
to C.
C really wants to do everything with 32-bit values. Smaller values are
widened, larger values are very grudgingly tolerated. C programmers
habitually use `int` as array indices rather than `size_t`, particularly in
`for` loops. Apparently everything is *still* a VAX. So on 64-bit platforms,
the index needs to be widened before adding to the pointer, and there's so
much terrible C code out there -- as if there is any other kind -- that the
CPUs need hardware mitigations to defend against it.
I'm still using DOS C compilers, for Small-C. int just has one size: 16
bits. No longs, shorts or other stuff. DOSBOX-X is nice in that I can run
DOS programs or Windows command-line programs.
It's not just modern hardware which is a poor fit
for C: classic hardware is
too. Because of a lot of architectural assumptions in the C model, it is
hard to generate efficient code for the 6502 or Z80, for example.
Or any PDP that's not a 10 or 11.
I heard that AT&T had a C CPU, but it turned out to be a flop.
C's main advantage was a stack for local variables and return addresses,
with none of the complex subroutine nesting of ALGOL or Pascal.
But please, feel free to tell me how C is just fine
and it's the CPUs which
are at fault, even those which are heavily-optimised to run typical C code.
A computer system's CPU, memory, I/O, video & mice all have to share the
same pie. If you want one thing to go faster, something else must go
slower. C's model is random-access main memory for simple variables
and array data; a register was for a simple pointer or a datum. Caches
may seem to speed things up, but they can't handle random access like
(REAL(I+3,J+3)+REAL(I-3,J-3)+REAL(I+3,J-3)+REAL(I-3,J+3))/4.0+REAL(I,J)
I will stick to a REAL PDP-8. I know a TAD takes 1.5 us, not 1.7 us 70%
of the time and 1.4 us the other 30%.
Real-time OSes and CPUs are out there; how else would my toaster know
when to burn my toast?
Only by knowing the overall structure of a program and the hardware
can one optimize it.
Ben.