On Thu, 28 Mar 2002, Hans Franke wrote:
Now, here we are talking about two things:
#1 is the platform-independent coding of an application, which is
accomplished by using the same structure across all platforms. They did a
good job.
#2 is about different mass-storage hardware, and here, of course, an
operating system could do next to nothing.
Their making of such a claim was extremely reprehensible.
They claimed to have a "Universal disk format" that would work with ALL
computers. That was NEVER true and was false advertising.
But it WAS possible, just through attempts at standardization, to reduce
the format count to a handful (maybe a dozen?), instead of the thousands
of CP/M formats and the dozens of MS-DOS formats. (See the list at
http://www.xenosoft.com/fmts.html
which covers just the soft-sectored double-density ones.)
Because of #2 you rarely could use a disk from one system on another . . .
(In fact I even remember the same situation for MS-DOS - at least
The number of formats could have easily been limited to one for each type
of hardware.
I had a brief discussion of that with Gary Kildall.
Me: What is the standard format for 5.25" double density?
Gary: 8" single density.
He held to his convictions of NOT diluting the standard by having a
secondary (5.25") standard format.
The basic concept behind it was the distribution of software NOT as a
binary, but as a platform-independent "P-code" that was run on a "P-code"
interpreter.
Well, the P-code IS the binary format of the programs, just as the
so-called bytecode is the binary format for Java.
Most people think of "binary" as meaning the NATIVE executable form of a
program.
Having the input to the P-System interpreter in a non-human-readable form
added the additional benefit of making it more difficult to make any
changes without going back through the compiler process.
> It combined all of the convenience of software development of a compiler
> with the speed of execution of an interpreter.
Well, the speed difference wasn't that big between the result of 'real'
compilers (Fortran, for example) and the UCSD-P version of Fortran. As so
often, it depended on your kind of application.
Some of the P-Code engines were VERY well written, vs. some compilers, ...
The PC-DOS Fortran compiler 1.0 (written by MICROS~1, sold by IBM), when
running a Sieve of Eratosthenes (a common benchmark in those days), was
SLOWER than the MICROS~1 ROM BASIC interpreter!
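(For anyone who never ran it: the sieve benchmark of that era was, roughly,
the little program below. This is just a minimal C sketch from memory - the
array size and the prime-count output are my assumptions, not the exact
published listing.)

    #include <stdio.h>
    #include <string.h>

    #define SIZE 8190        /* flag-array size typical of the old benchmark */

    int main(void)
    {
        static char flags[SIZE + 1];
        int i, k, count = 0;

        memset(flags, 1, sizeof flags);
        for (i = 0; i <= SIZE; i++) {
            if (flags[i]) {
                int prime = 2 * i + 3;    /* index i stands for the odd number 2i+3 */
                for (k = i + prime; k <= SIZE; k += prime)
                    flags[k] = 0;         /* strike out its multiples */
                count++;
            }
        }
        printf("%d primes\n", count);
        return 0;
    }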
Bob Wallace (PC-Write), who wrote the MICROS~1 Pascal compiler, advised to
NEVER EVER use the supplied run-time library.
BTW: Totally different thing - has anybody ever tried to do a Java to
UCSD P-Code compiler? Could be some fun :)
Or a P-code interpreter written in Java?
Or a Java virtual machine written in P-code?
Two more fun parts of the UCSD P-System:
It would not/could not save a file in non-contiguous space. Thus you
could have hundreds of K of free space, but not necessarily enough
contiguous space to save a file. Periodically you had to run their
"CRUNCH" program to defragment. Hope that nothing goes wrong DURING that
process!
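(To illustrate the problem - this is NOT the P-System's actual code, just a
sketch of why "plenty of free blocks" isn't the same as "enough contiguous
free blocks" under contiguous allocation; the disk map is made up.)

    #include <stdio.h>

    /* Hypothetical 20-block disk map: 1 = in use, 0 = free.
       Eleven blocks are free in total, but scattered in short runs. */
    static const int used[20] = {1,1,0,0,1,0,0,0,1,1,0,1,0,0,1,0,1,0,0,1};

    int main(void)
    {
        int total_free = 0, largest_run = 0, run = 0;

        for (int i = 0; i < 20; i++) {
            if (!used[i]) {
                total_free++;
                if (++run > largest_run)
                    largest_run = run;
            } else {
                run = 0;
            }
        }
        /* A file needing more than largest_run blocks cannot be saved,
           even though total_free blocks are free -- hence CRUNCH. */
        printf("free blocks: %d, largest contiguous run: %d\n",
               total_free, largest_run);
        return 0;
    }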
When storing a 16-bit integer (such as the starting block number in a
directory entry) in two bytes, do you put the MSB (most significant byte)
or the LSB (least significant byte) first? That was NOT standardized on
P-System disk formats!! Thus, Intel-based machines were LSB first;
Motorola-based machines were MSB first.
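(A sketch of the ambiguity, with made-up byte values: the same two
directory bytes decode to different block numbers depending on which
machine wrote the disk.)

    #include <stdio.h>

    int main(void)
    {
        /* Two bytes as they might sit in a directory entry on disk. */
        unsigned char b0 = 0x34, b1 = 0x12;

        unsigned lsb_first = b0 | (b1 << 8);    /* Intel-style    -> 0x1234 */
        unsigned msb_first = (b0 << 8) | b1;    /* Motorola-style -> 0x3412 */

        printf("LSB-first: 0x%04X  MSB-first: 0x%04X\n", lsb_first, msb_first);
        return 0;
    }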
--
Grumpy Ol' Fred cisin(a)xenosoft.com