It was thus said that the Great Liam Proven once stated:
> This has been waiting for a reply for too long...
As has this ...
> On 4 May 2016 at 20:59, Sean Conner <spc at conman.org> wrote:
> > Part of that was the MMU-less 68000. It certainly made message passing
> > cheap (since you could just send a pointer and avoid copying the message)
> Well, yes. I know several Amiga fans who refer to classic AmigaOS as
> being a de-facto microkernel implementation, but ISTM that that is
> overly simplistic. The point of microkernels, ISTM, is that the
> different elements of an OS are in different processes, isolated by
> memory management, and communicate over defined interfaces to work
> together to provide the functionality of a conventional monolithic
> kernel.
Nope, memory management is not a requirement for a microkernel. It's a
"nice to have" but not "fundamental to implementation," just as you can
have a preemptive kernel on a CPU without memory management (any
68000-based system) or a user/kernel-level instruction split (any 8-bit
CPU).
> If they're all in the same memory space, then even if they're
> functionally separate, they can communicate through shared memory --
While the Amiga may have "cheated" by passing a reference to the message
instead of copying it, conceptually it was passing a message (for all the
user knows, the message *could* be copied before being sent). I still
consider AmigaOS a message-based operating system.
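The send side looks something like this under Exec (a sketch from memory,
so treat the port name, the payload, and the thin error handling as
illustrative rather than gospel):

  #include <exec/types.h>
  #include <exec/ports.h>
  #include <proto/exec.h>

  struct PayloadMsg                  /* made-up message type            */
  {
    struct Message pm_Msg;           /* standard Exec message header    */
    LONG           pm_Value;         /* payload stays in sender's RAM   */
  };

  void send_value(LONG value)        /* hypothetical sender             */
  {
    struct MsgPort    *reply = CreateMsgPort();   /* exec.library V36+  */
    struct MsgPort    *dest;
    struct PayloadMsg  m;

    if (reply == NULL) return;

    m.pm_Msg.mn_ReplyPort = reply;
    m.pm_Msg.mn_Length    = sizeof(m);
    m.pm_Value            = value;

    Forbid();                          /* keep the port list stable     */
    dest = FindPort("example.port");   /* made-up port name             */
    if (dest != NULL)
      PutMsg(dest,&m.pm_Msg);          /* only a pointer crosses over   */
    Permit();

    if (dest != NULL)
    {
      WaitPort(reply);                 /* block until receiver replies  */
      GetMsg(reply);
    }
    DeleteMsgPort(reply);
  }

The receiver pulls that same pointer off its own port with GetMsg() and
ReplyMsg()s it when it's done; nothing is copied unless somebody decides
to copy it.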
Also, QNX was first written for the 8088, a machine not known for having a
memory management unit or supervisor-mode instructions.
> > I think what made the Amiga so fast (even with a 7.1MHz CPU) was the
> > specialized hardware. You pretty much used the MC68000 to script the
> > hardware.
> That seems a bit harsh! :-)
Not in light of this blog article:
http://prog21.dadgum.com/173.html
While I might not fully agree with his views, he does make some compelling
arguments and makes me think.
> But Curtis Yarvin is a strange person, and at least via his pseudonymous
> mouthpiece Mencius Moldbug, has some unpalatable views. You are, I
> presume, aware of the controversy over his appearance at LambdaConf
> this year?
Yes I am. My view: no one is forcing you to attend his talk. And if no
one attends his talks, the likelihood of him appearing again (or at another
conference) goes down. What is wrong with these people?
> > Nice in theory. Glacial performance in practice.
> Everything was glacial once.
> We've had 4 decades of very well-funded R&D aimed at producing faster
> C machines. Oddly, x86 has remained ahead of the pack and most of the
> RISC families ended up sidelined, except ARM. Funny how things turn
> out.
The Wintel monopoly of the desktop flooded Intel with enough money to keep
the x86 line going. Given enough money, even pigs can fly.
Internally, the x86 line is RISC. The legacy instructions are read in
and translated into an internal machine language that is more RISC-like
than CISC. All sorts of crazy things are going on inside that CPU
architecture.
> > The Lisp machines had tagged memory to help with the garbage collection
> > and avoid wasting tons of memory. Yeah, they also had CPU instructions
> > like CAR and CDR (even the IBM 704 had those [4]). Even the VAX had
> > QUEUE instructions to add and remove items from a linked list. I think
> > it's really the tagged memory that made the Lisp machines special.
> We have 64-bit machines now. GPUs are wider still. I think we could
> afford a few tag bits.
I personally wouldn't mind a few user bits per byte myself. I'm not sure
we'll ever see such a system.
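Lacking hardware tags, we fake it in software by stealing the low bits of
an aligned pointer, which is roughly what most Lisp and dynamic-language
runtimes do today. A rough sketch (the tag assignments are made up, not
any particular implementation's):

  #include <assert.h>
  #include <stdint.h>

  /* With heap objects aligned to 8 bytes, the low three bits of a
     pointer are always zero, so they're free to carry a type tag. */

  enum tag { TAG_FIXNUM = 0 , TAG_CONS = 1 , TAG_SYMBOL = 2 };

  typedef uintptr_t value;

  static value box(void *p,enum tag t)
  {
    assert(((uintptr_t)p & 7) == 0);    /* must be 8-byte aligned */
    return (uintptr_t)p | (uintptr_t)t;
  }

  static enum tag tag_of(value v) { return (enum tag)(v & 7); }
  static void    *untag (value v) { return (void *)(v & ~(uintptr_t)7); }

  /* Small integers don't live on the heap at all; shift them into the
     upper bits (assumes an arithmetic right shift). */

  static value    make_fixnum(intptr_t n) { return ((uintptr_t)n << 3) | TAG_FIXNUM; }
  static intptr_t fixnum_val (value v)    { return (intptr_t)v >> 3; }

The collector (or EVAL) can then tell a fixnum from a cons cell just by
looking at the word itself, which is the cheap software cousin of what
the Lisp machines got from their extra tag bits.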
> > Of course we need to burn the disc packs.
> I don't understand this.
It's in reference to Alan Kay saying "burn the disc packs" with respect
to Smalltalk (which I was told is a mistake on my part, but then
everybody failed to read Alan's mind about "object oriented" programming
and he's still pissed off about that, so misunderstanding him seems to
be par for the course).
It's also an oblique reference to Charles Moore, who has gone on record as
saying the ANSI Forth Standard is a mistake that no one should use---in
fact, he's gone as far as saying that *any* "standard" Forth misses the
point and that if you want Forth, write it your damn self!
> If you mean that, in order to get to saner, more productive, more
> powerful computer architectures, we need to throw away much of what's
> been built and go right back to building new foundations, then yes, I
> fear so.
Careful. Read up on the Intel 432, a state-of-the-art CPU in 1981.
> Yes, tear down the foundations and rebuild, but on top of the new
> replacement, much existing code could, in principle, be retained and
> re-used.
And Real Soon Now, we'll all be running point-to-point connections on
IPv6 ...
-spc