On 09/22/2013 11:19 AM, Fred Cisin wrote:
> As benchmarks go, that is certainly "damning with
> faint praise"!
> The most common use in the early days for BASCOM was just to hide, or at
> least obfuscate, the source code.
Yeah, when I started designing the compiler, there were a couple of
constraints I had to figure out how to live with. One was that it was
to be operated in a multi-user environment. Sure, you could use
something like MP/M (not quite invented yet), but you'd have to generate
PRL executables--and there was no way to share code.
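PRL, if you haven't run into it, is MP/M's page-relocatable format: the
image carries a relocation bitmap, one bit per code byte, and the loader
bumps every flagged byte by the load-page displacement. Roughly like
this sketch (modern C, written from memory of the format, so don't
trust the details):

    #include <stdint.h>

    /* PRL-style loading, approximately: the image is assembled at
       page 1 (0100h); each set bit in the bitmap marks a byte that
       holds the high half of an address, and the loader adds the
       displacement from page 1 to it. */
    void prl_relocate(uint8_t *img, const uint8_t *bitmap,
                      int len, uint8_t load_page)
    {
        for (int i = 0; i < len; i++)
            if (bitmap[i >> 3] & (0x80 >> (i & 7)))
                img[i] += load_page - 1;   /* displacement from page 1 */
    }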
Since on an 8085 there's really no convenient way to have
self-relocating code (short of plugging addresses every time you load
the program), compiling to native x80 code was pretty much out of the
question. Writing relocatable interpreted code is easy, because you
design it that way.
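The trick is that the operands in an interpreted program are indices or
offsets, never absolute machine addresses, so the image runs wherever
you put it. A toy sketch in modern C--nothing like the real runtime,
and the opcode names are made up for illustration--just to show the
shape of the idea:

    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    /* Operands are variable *indices*, so the code array can be
       loaded at any address and still run unchanged. */
    void run(const unsigned char *code, int *vars)
    {
        int stack[32], sp = 0;
        for (const unsigned char *pc = code; ; ) {
            switch (*pc++) {
            case OP_PUSH:  stack[sp++] = vars[*pc++]; break;
            case OP_ADD:   sp--; stack[sp-1] += stack[sp]; break;
            case OP_PRINT: printf("%d\n", stack[--sp]); break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void)
    {
        int vars[2] = { 40, 2 };
        /* PRINT A + B, encoded position-independently */
        const unsigned char code[] =
            { OP_PUSH, 0, OP_PUSH, 1, OP_ADD, OP_PRINT, OP_HALT };
        run(code, vars);   /* prints 42 */
        return 0;
    }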
Now, if you decide to go write a native-code-target BASIC, most of your
generated code is going to look like "load operand pointer(s), call a
subroutine"--so at minimum 6 bytes per operation, and probably a lot
more if type conversion or subscripting has to be performed.
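On the 8080/8085 that minimum works out to an LXI H (3 bytes) plus a
CALL (3 bytes) for every operation. A hypothetical emitter for that
pattern, sketched in C with made-up addresses:

    #include <stdint.h>
    #include <stdio.h>

    static uint8_t obj[256];
    static int opos;

    /* Emit the per-operation boilerplate a native back end can't
       avoid: point HL at the operand, call the runtime routine.
       0x21 = LXI H,d16 (3 bytes), 0xCD = CALL a16 (3 bytes). */
    static void emit_op(uint16_t operand, uint16_t runtime_sub)
    {
        obj[opos++] = 0x21;
        obj[opos++] = operand & 0xFF;       /* little-endian address */
        obj[opos++] = operand >> 8;
        obj[opos++] = 0xCD;
        obj[opos++] = runtime_sub & 0xFF;
        obj[opos++] = runtime_sub >> 8;
    }

    int main(void)
    {
        emit_op(0x4000, 0x0200);  /* hypothetical operand and runtime */
        printf("%d bytes emitted\n", opos);   /* "6 bytes emitted" */
        return 0;
    }

Multiply that by every operator, subscript, and conversion in the
program and native code stops looking cheap.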
So how fast could interpreted code run? Very fast--I remembered that
Purdue had PUFFT (it ran on 7094s, I think) and other universities had
similar systems (IIT, I think, had IITRAN, servicing multiple TTYs out
of a single foreground partition on a 360/40). The idea was to serve as
many students trying to learn bonehead programming as possible with the
least amount of computing resources. (Does anybody still have Saul
Rosen's monograph on PUFFT? It was an interesting read.)
Finally, you know what your target is, and most importantly, what your
customer is going to do with the system. So you write some very smart
string handling routines and optimize internal representations to save
space and give hints to the runtime. You can also add some syntactic
sugar to the language to make commonly-performed tasks easier.
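By way of illustration only--the actual layout is lost to time--a
"smart" internal representation might look like a tagged, counted
descriptor rather than a bare pointer:

    #include <stdint.h>

    /* Hypothetical string descriptor: a tag byte the runtime can use
       as a hint (string, int, float...), a length up front so nothing
       has to scan for a terminator, and a 16-bit pointer into string
       space. */
    typedef struct {
        uint8_t  tag;
        uint8_t  len;
        uint16_t addr;
    } strdesc;

With the length carried in the descriptor, concatenation and comparison
never have to scan for the end of a string, which is where a lot of
interpreter time goes.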
The OS itself was just a pile of device drivers and a filesystem
(although ISAM was part of it, too). Command-line support was pretty
much restricted to a program loader.
> "Replace all of those with one FAST machine"??!? NO WAY. Compiling
> and assembling was the ONLY place that "performance improvements"
> were helpful.
We sometimes forget that in 1965-70, a widely-used "supercomputer" had a
10MHz clock with 1 microsecond core. And people thought it ran like the
wind.
That I'm sitting in front of a 3GHz 6-core 64 bit machine with 8GB of
memory is a testament to the variation of Parkinson's law that says in
effect "Give the people more computatational power and they'll find a
way to piss it all away".
I see in my inbox an advertisement for bargain 4TB disk drives--at one
time, an almost unfathomable amount of data. Yet, individuals have
figured out a way to fill it all up with useless content.
--Chuck