There is one axis along which I concede that things have moved since
Multics, which is away from monolithic kernels.  Whether that is an
advance or a regression depends on your priorities; I see good
arguments each way.
But the complete structuring of the system around a segmented,
single-level memory system (at least, in terms of the environment the
user sees) is such a fantastic idea that I simply don't understand why
that hasn't become the standard.
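
For anyone who never saw it, the flavour is roughly this: you name an
object and then just touch memory, instead of read()ing it through a
buffer.  A POSIX mmap() is only a pale approximation (this is a sketch;
the path name is made up for illustration), but it gives the idea of
what a single-level store makes the default everywhere:

  /* A pale POSIX approximation of the single-level-store idea: map an
   * object into the address space and use it as memory, instead of
   * read()ing it through a buffer.  The path name is hypothetical.
   */
  #include <sys/mman.h>
  #include <sys/stat.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      struct stat st;
      int fd = open("/some/object", O_RDONLY);   /* hypothetical path */

      if (fd < 0 || fstat(fd, &st) < 0)
          return 1;
      char *seg = mmap(0, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
      if (seg == MAP_FAILED)
          return 1;
      fwrite(seg, 1, st.st_size, stdout);  /* the object is just memory now */
      munmap(seg, st.st_size);
      close(fd);
      return 0;
  }

On Multics that view of data, not explicit I/O calls, was the normal
case rather than something you ask for specially.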
Well, what was the largest virtual memory space available on various
machines? On the VAX, it was either one gig or two gig, depending on
whether you count just P0 space or both P0 and P1. When you're mapping
objects of uncertain size, that seems awfully constraining - and,
depending on the page table architecture in use, it can cost a lot of
RAM to get even that much; the VAX, for example, needs eight megs of
RAM to map one gig of space, and that doesn't even account for any
memory used to back any of that space. And, back in the heyday of the
VAX, eight megs was a lot of RAM.
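
For what it's worth, here is the arithmetic behind that figure as a
little program (a sketch, assuming the usual VAX parameters: 512-byte
pages and four-byte PTEs in a flat, linear page table):

  /* The eight-meg figure, assuming 512-byte pages and four-byte PTEs
   * in a flat, linear page table, one PTE per page of virtual space.
   */
  #include <stdio.h>

  int main(void)
  {
      unsigned long space = 1UL << 30;       /* one gig of virtual space */
      unsigned long pages = space / 512;     /* 2,097,152 pages */
      unsigned long table = pages * 4;       /* 8,388,608 bytes of PTEs */

      printf("%lu pages, %lu bytes of page table\n", pages, table);
      return 0;
  }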
Now that 64-bit address spaces are becoming common, eight megs of RAM
is ignorably small, and multi-level page tables are the norm, this
looks a lot less impossible.  I've been tempted to build something of
the sort, but I never got to use real Multics, and I would probably
have trouble shaking free of the POSIX mindset.
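
To put rough numbers on that, here is the same one gig costed under a
four-level scheme with 4K pages and eight-byte PTEs (x86-64-ish
figures, chosen purely for illustration, not tied to any particular
machine):

  /* The same one gig under a four-level scheme with 4K pages and
   * 8-byte PTEs.  Each leaf table is one 4K page of 512 PTEs and
   * maps two megs of virtual space.
   */
  #include <stdio.h>

  int main(void)
  {
      unsigned long space = 1UL << 30;             /* one gig, mapped densely */
      unsigned long leaves = space / (512 * 4096); /* 512 leaf tables */
      unsigned long cost = leaves * 4096           /* 2 megs of leaf tables */
                         + 3 * 4096;               /* three upper-level tables */

      printf("%lu bytes of page table for %lu bytes mapped\n", cost, space);
      return 0;
  }

And the part that matters for mapping objects of uncertain size: the
unmapped remainder of the 64-bit space costs essentially nothing,
unlike a flat VAX-style table.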
I should dig up some 64-bit machines and try to find enough
documentation to build OSes for them....