There is one
axis along which I concede that things have changed
since Multics, which is the move away from monolithic kernels.
Whether that is an advance or a regression depends on your priorities;
I see good arguments each way.
But the complete structuring of the system around a
segmented,
single-level memory system (at least, in terms of the environment the
user sees) is such a fantastic idea that I simply don't understand
why that hasn't become the standard.
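For anyone who hasn't lived with it, the closest everyday approximation
is probably mmap() on a conventional Unix: the object gets mapped once
and thereafter it simply is memory, with no read() or write() in sight.
The sketch below (file name and checksum loop invented purely for
illustration) is only a pale, per-process imitation of Multics segments,
but it conveys the flavor:

/* Approximate single-level-store flavor on a conventional Unix:
 * map the object, then address it like any other memory. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("example.dat", O_RDONLY);     /* hypothetical file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the whole object; from here on it is just memory. */
    const char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    long sum = 0;
    for (off_t i = 0; i < st.st_size; i++)
        sum += p[i];                            /* no read() calls anywhere */

    printf("checksum: %ld\n", sum);
    munmap((void *)p, st.st_size);
    close(fd);
    return 0;
}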
Well, what was the largest virtual memory space available on various
machines? On the VAX, it was either one gig or two gig, depending on
whether you count just P0 space or both P0 and P1. When you're mapping
objects of uncertain size, that seems awfully constraining - and,
depending on the page table architecture in use, it can cost a lot of
RAM to get even that much; the VAX, for example, needs eight megs of
RAM to map one gig of space, and that doesn't even count the memory
used to back any of that space. And, back in the heyday of the
VAX, eight megs was a lot of RAM.
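The eight-meg figure is easy to check if you assume the VAX's 512-byte
pages and four-byte (longword) page table entries:

/* Back-of-the-envelope check of the eight-meg figure, assuming the
 * VAX's 512-byte pages and 4-byte (longword) page table entries. */
#include <stdio.h>

int main(void)
{
    const unsigned long space = 1UL << 30;   /* one gig of P0 space        */
    const unsigned long page  = 512;         /* VAX page size, in bytes    */
    const unsigned long pte   = 4;           /* bytes per page table entry */

    unsigned long pages = space / page;      /* 2,097,152 pages            */
    unsigned long table = pages * pte;       /* 8,388,608 bytes            */

    printf("%lu PTEs, %lu bytes (%lu MB) of page table\n",
           pages, table, table >> 20);       /* prints 8 MB                */
    return 0;
}

Two million entries at four bytes apiece is eight megabytes before a
single page of that gig is actually backed by anything.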
Which is one of the reasons why the POWER architecture was done the way
it was done: POWER 1, at least, uses an inverted page table, so how much
table space is needed is determined by the machine's amount of RAM
rather than by the size of the virtual space.
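Here's a deliberately over-simplified sketch of the inverted-page-table
idea, with one entry per physical frame so the table grows with
installed RAM instead of with the virtual space; the field names, the
hash, and the linear probing are my own inventions for illustration,
not the real POWER hardware format:

/* Simplified inverted page table: one entry per physical page frame,
 * so the table's size tracks installed RAM, not the virtual space. */
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SHIFT 12                  /* assume 4 KB pages for the sketch */

struct ipt_entry {
    uint32_t vsid;                     /* which address space / segment   */
    uint32_t vpage;                    /* virtual page number             */
    uint8_t  valid;
};

struct ipt {
    struct ipt_entry *entries;         /* exactly one entry per RAM frame */
    size_t            nframes;
};

/* The table's size is a function of physical memory only. */
struct ipt *ipt_create(size_t ram_bytes)
{
    struct ipt *t = malloc(sizeof *t);
    t->nframes = ram_bytes >> PAGE_SHIFT;
    t->entries = calloc(t->nframes, sizeof *t->entries);
    return t;
}

/* Translate by hashing (vsid, vpage) and probing; the index of the
 * matching entry *is* the physical frame number.  Real hardware searches
 * a small, bounded hash group rather than probing the whole table. */
long ipt_lookup(const struct ipt *t, uint32_t vsid, uint32_t vpage)
{
    size_t start = ((size_t)vsid ^ vpage) % t->nframes;
    for (size_t n = 0; n < t->nframes; n++) {
        size_t i = (start + n) % t->nframes;
        if (t->entries[i].valid &&
            t->entries[i].vsid == vsid &&
            t->entries[i].vpage == vpage)
            return (long)i;            /* physical frame number */
    }
    return -1;                         /* not resident: take a page fault */
}

The trade is that translation becomes a hash lookup, but the table stays
proportional to RAM no matter how big the virtual space gets.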