This doesn't sound right to me. I guess there are so many legends about
the NT design team that we would have to name each person and understand
their contribution. Very early on, when NT betas were being distributed
free and I got one of them, the lore going around was that the NT core
was cannibalized from the new Mach kernel. Later, I heard that one of
the VMS gurus was hired by MS. I find it very difficult to believe that
they would hire a DEC VMS guru before NT was cast in iron and then
"...let him design it." If NT was based, even loosely, on the Mach
kernel, why would they hire a VMS guru? Why not a Mach kernel guru?
Because it *isn't* based on Mach. Which part of that is hard to understand?
The historical record of this is fairly well documented.
Every operating system today has "...eleventy-seven layers of crap on
top of it..." including UNIX with an X client and then applications, or
AIX with SMIT on top of an X client, or even VMS.
Most OSes don't have nearly as many layers of crap as NT. I am intimately
familiar with the operation of most of the software on my Linux box, including
the X server and several of the clients, and there just aren't that many
layers.
But at a former job I had to port ATM stuff to NT and 95, and there are
layers upon layers of crap, for no especially good reason. It's just a
result of Microsoft's "Mongolian hordes" programming technique. It was
just about enough to make me sick.
OK, but isn't the general end result of using the Mach kernel a UNIX
system?
No.
There are some "UNIX servers" that can be run on top of Mach, but
Mach is not UNIX.
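To make "Mach is not UNIX" concrete, here is a minimal C sketch of
Mach's native interface, assuming a Mach-derived system such as macOS
where these calls still exist. The kernel's own primitives are tasks,
threads, and IPC ports, not files and processes; the UNIX personality
is a server built on top of them. Purely illustrative:

/* A pure Mach operation: allocate an IPC port receive right in the
 * calling task.  There is no open()/read()/write() anywhere here. */
#include <mach/mach.h>
#include <stdio.h>

int main(void)
{
    mach_port_t port;
    kern_return_t kr = mach_port_allocate(mach_task_self(),
                                          MACH_PORT_RIGHT_RECEIVE, &port);
    if (kr != KERN_SUCCESS) {
        fprintf(stderr, "mach_port_allocate failed: %d\n", kr);
        return 1;
    }
    printf("allocated Mach port %u in task %u\n",
           (unsigned)port, (unsigned)mach_task_self());
    /* Release the receive right we just created. */
    mach_port_mod_refs(mach_task_self(), port, MACH_PORT_RIGHT_RECEIVE, -1);
    return 0;
}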
VMS protects itself in the same way. If you are writing well-behaved
applications, you can only make VMS system calls, even if you are
writing drivers.
Yes, but in Win32, you *can't* make OS calls. You can only make Win32
calls. It's an extra layer of mostly useless crap.
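To see the extra layer, here is a minimal C sketch, assuming a standard
Win32 build environment. The documented Win32 call goes through
kernel32, which bottoms out in the native NT API exported by ntdll.dll.
NtCreateFile really is exported there, but digging it out by hand like
this is unsupported; treat it as an illustration of the layering, not a
recipe:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* The Win32 layer: the documented, supported way to open a file. */
    HANDLE h = CreateFileA("test.txt", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h != INVALID_HANDLE_VALUE)
        CloseHandle(h);

    /* The layer underneath: CreateFileA itself ends up in the native
     * NtCreateFile in ntdll.dll.  The classic SDK ships no header or
     * import library for it; the only way in from user code is to dig
     * the entry point out of ntdll by hand. */
    HMODULE ntdll = GetModuleHandleA("ntdll.dll");
    FARPROC nt_create_file =
        ntdll ? GetProcAddress(ntdll, "NtCreateFile") : NULL;
    printf("NtCreateFile is %s\n",
           nt_create_file ? "right there in ntdll, under the Win32 crust"
                          : "not visible from here");
    return 0;
}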
I believe it is the fault of the kernel, for without a kernel with a
built-in distributed lock manager, you can't implement true clustering.
IIRC, the NT kernel *has* a built-in distributed lock manager. If not, it
would be easy to add it. The NT kernel is actually small, simple, and
almost elegant. But as I've explained, that's the part that developers
never see. Even driver writers only get to see part of it, but have to
deal with disgusting crap like NDIS for other services.
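For anyone who hasn't met one, the core idea of a VMS-style distributed
lock manager fits in a page of C. Nodes request locks on named
resources in one of six modes, and a request is granted only if its
mode is compatible with every lock already held on that resource. The
compatibility matrix below is the classic VMS one; the function names
and the in-memory table are my own sketch, not the real SYS$ENQ
interface:

#include <stdbool.h>
#include <stdio.h>

enum lock_mode { NL, CR, CW, PR, PW, EX };  /* null .. exclusive */

/* compat[held][requested]: may the new lock coexist with the old? */
static const bool compat[6][6] = {
    /*            NL    CR    CW    PR    PW    EX  */
    /* NL */ { true, true, true, true, true, true  },
    /* CR */ { true, true, true, true, true, false },
    /* CW */ { true, true, true, false,false,false },
    /* PR */ { true, true, false,true, false,false },
    /* PW */ { true, true, false,false,false,false },
    /* EX */ { true, false,false,false,false,false },
};

/* Hypothetical grant check: scan the locks already held on a resource. */
static bool can_grant(const enum lock_mode *held, int nheld,
                      enum lock_mode requested)
{
    for (int i = 0; i < nheld; i++)
        if (!compat[held[i]][requested])
            return false;   /* conflict: the request must wait */
    return true;
}

int main(void)
{
    /* Two nodes hold protected-read locks on some shared file... */
    enum lock_mode held[] = { PR, PR };
    /* ...a third reader is fine, but an exclusive writer must wait. */
    printf("grant PR? %s\n", can_grant(held, 2, PR) ? "yes" : "no");
    printf("grant EX? %s\n", can_grant(held, 2, EX) ? "yes" : "no");
    return 0;
}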
If you want a
file server to be as robust as possible, it should be used for
nothing but serving files. That was true even in the old days.
That would be news to any large VMS or IBM mainframe shop.
The ones that I've contracted for ran applications on separate machines from
the file servers, because this was (1) more robust, and (2) higher performance.
Usually the heavy-duty system optimizations needed for file servers are
different than those needed for applications.
Certainly you *can* run applications on your file server. And on a good OS,
it will work OK. But that doesn't prove that it is the best way to do
things.