I'm trying to remember--the smallest DOS/360
memory footprint was what, 8K?
16K? Whatever it was, it wasn't much to support 1 background and 2 foreground
partitions. Lots and lots of supervisor transient "phases" of course,
read in from disk, but still it was impressive what could be done in such a
small amount of memory. Almost makes CP/M seem bloated by comparison.
TSS/8 is also a good example.
Now, that was 16K 12-bit words (the PDP-8's word size),
and I assume that most of the job control and slower I/O tasks would be
handled by other machines, but to me it represented a line drawn in the
sand. I've always thought it a pity that other manufacturers didn't follow
his lead and legislate in hardware the size of the OS kernel.
I do not think that is a good idea. It is the best way for a computer
architect to paint himself into a corner, often with horrible results.
Any long-term planning absolutely must leave room for expansion,
simply because every computer system grows more complex with time -
bloat, as many call it. Yes, you can set the boundaries of the kernel
in stone, and stick new developments outside the boundaries as
bags-on-the-side, but eventually things will get ugly and drag
everything down. Look what happened to the CDC Cybers or the DEC PDP-10s - too
many parts of those architectures hit their limits, and the end results were
ugly things like the Cyber 180s and the Jupiter (almost).
--
Will