On Sun, 2020-01-05 at 15:21 -0800, Chris Hanson via cctalk wrote:
On Jan 5, 2020, at 2:30 PM, Guy Sotomayor via cctalk
<cctalk at classiccmp.org> wrote:
It did seem for a while that a lot of things were based on Mach, but
very few seemed to make it to market. NeXTstep and OSF/1, the only
version of which to ship AFAIK was DEC OSF/1 AXP, later Digital UNIX.
Yes, a lot of things were based on Mach. One OS that you're
forgetting is OS X. That is based upon Mach 2.5.
Nope, Mac OS X 10.0 was significantly upgraded and based on Mach 4
and BSD 4.4 content (via FreeBSD among other sources). It was NeXT
that never got beyond Mach 2.5 and BSD 4.2. (I know, distinction
without a difference, but this is an issue of historicity.)
I think only some of the changes from Mach 2.5→3→4 made it into Mac
OS X Server 1.0 (aka Rhapsody), so maybe that's what you're thinking of.
You're probably thinking about the user space. I was working on the
OS X kernel from 2006-2012. I can tell you that most of the kernel
that was still Mach-related (most actually got removed; about all that
was left was Mach messaging) was 2.5-based with some enhancements.
MkLinux didn't get very far, either, did it?
I think that was the original Linux port for PPC.
It was the original Linux port for NuBus PowerPC Macs, at least. It
was never really intended to "get very far" in the first place; it
was more of an experimental system that a few people at Apple threw
together and managed to get released to the public.
MkLinux was interesting for two reasons: It documented the NuBus
PowerMac hardware such that others could port their OSes to it, and
it enabled some direct performance comparisons of things like running
the kernel in a Mach task versus running it colocated with the
microkernel (and thus turning all of its IPCs into function calls).
Turns out running the kernel as an independent Mach task cost 10-15%
overhead, which was significant on a system with a clock under
100MHz. Keep in mind too that this was in the early Linux 2.x days
where Linux "threads" were implemented via fork().
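The colocation comparison described above can be sketched as a toy benchmark. This is purely illustrative, not Mach or MkLinux code: it assumes a POSIX system and stands in for a kernel-mediated message round trip with a pipe to a forked child, versus the plain function call that colocation turns each IPC into.

```python
# Illustrative sketch only -- not Mach/MkLinux code.  Emulates the
# comparison: an IPC round trip (pipe to a forked "server" process)
# versus a direct function call.  POSIX-only (os.fork).
import os
import time

ITERS = 20000

def service(x):
    """The 'colocated' path: the IPC has become a plain call."""
    return x + 1

def measure():
    req_r, req_w = os.pipe()   # request channel: parent -> child
    rep_r, rep_w = os.pipe()   # reply channel:   child -> parent

    if os.fork() == 0:                     # child acts as the "server" task
        os.close(req_w)                    # close unused ends so EOF works
        os.close(rep_r)
        while (b := os.read(req_r, 1)):    # echo each request back
            os.write(rep_w, bytes([(b[0] + 1) & 0xFF]))
        os._exit(0)

    os.close(req_r)                        # parent keeps req_w and rep_r
    os.close(rep_w)

    t0 = time.perf_counter()
    for _ in range(ITERS):                 # IPC round trips
        os.write(req_w, b"\x01")
        os.read(rep_r, 1)
    ipc_us = (time.perf_counter() - t0) / ITERS * 1e6

    t0 = time.perf_counter()
    for i in range(ITERS):                 # direct calls
        service(i)
    call_us = (time.perf_counter() - t0) / ITERS * 1e6

    os.close(req_w)                        # EOF lets the child exit
    os.wait()
    return ipc_us, call_us

if __name__ == "__main__":
    ipc, call = measure()
    print(f"IPC round trip: {ipc:.2f} us/op, function call: {call:.3f} us/op")
```

The absolute numbers depend entirely on the machine; the point is the ratio, which is why turning every IPC into a function call bought MkLinux its measured 10-15% back.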
At IBM we spent a *significant* amount of time optimizing the
microkernel performance. I recall that on a 90MHz 601 PPC, we got
round-trip RPC below 1 microsecond.
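For perspective, those two quoted figures can be converted into a cycle budget. This is back-of-the-envelope arithmetic using only the numbers in the paragraph above (a 90 MHz clock and a sub-microsecond round trip), nothing else:

```python
# Back-of-the-envelope only: converts the figures quoted above.
clock_hz = 90e6            # 90 MHz PowerPC 601, as quoted
rpc_s = 1e-6               # round-trip RPC under 1 microsecond
cycles = clock_hz * rpc_s  # upper bound on cycles per round trip
print(f"{cycles:.0f} cycles")   # -> prints "90 cycles"
```

In other words, the whole round trip (two kernel entries, two exits, and the message copies in between) had to fit in under roughly 90 cycles, which is why the entry/exit paths were worth hand-optimizing.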
I personally spent a significant amount of time optimizing the
Pentium kernel entry/exit code and the CPU-specific
portion of Mach RPC (it actually took advantage of the x86
I don't recall if anyone ever did any "multi-server" experiments with
it like those done at CMU, where the monolithic kernel was broken up
into multiple cooperating tasks by responsibility. It would have been
interesting to see whether the overhead stayed relatively constant,
grew, or shrank, and how division of responsibility affected that.
The IBM microkernel project was *very* multi-server. There were
versions of AIX and OS/2 that ran on top of the IBM microkernel (which
was a heavily modified version of Mach 3.0), where there were quite a few
OS-neutral servers (including most device drivers) that all ran in
their own server tasks.
TTFN - Guy