This has been waiting for a reply for too long...
On 4 May 2016 at 20:59, Sean Conner <spc at conman.org> wrote:
It was thus said that the Great Liam Proven once stated:
On 29 April 2016 at 21:06, Sean Conner <spc at conman.org> wrote:
It was thus said that the Great Liam Proven once stated:
I read that and it doesn't really seem that CAOS would have been much
better than what actually came out. Okay, the potentially better resource
tracking would be nice, but that's about it really.
The story of ARX, the unfinished Acorn OS in Modula-2 for the
then-prototype Archimedes, is similar.
No, it probably wouldn't have been all that radical.
I wonder how much of Amiga OS' famed performance, compactness, etc.
was a direct result of its adaptation to the MMU-less 68000, and thus
could never have been implemented in a way that could have been made
more robust on later chips such as the 68030?
Part of that was the MMU-less 68000. It certainly made message passing
cheap (since you could just send a pointer and avoid copying the message)
Well, yes. I know several Amiga fans who refer to classic AmigaOS as
being a de-facto microkernel implementation, but ISTM that that is
overly simplistic. The point of microkernels, ISTM, is that the
different elements of an OS are in different processes, isolated by
memory management, and communicate over defined interfaces to work
together to provide the functionality of a conventional monolithic
kernel.
My reading suggests that one of the biggest problems with this is performance.
If they're all in the same memory space, then even if they're
functionally separate, they can communicate through shared memory --
meaning that although it might /look/ superficially like a
microkernel, the elements are not in fact isolated from one another,
so practically, pragmatically, it's not a microkernel. If there is no
separation between the cooperating processes, then it's just a
question of design aesthetics, rather than it being a microkernel.
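The trade-off being described can be sketched in a few lines. This is my own toy illustration (the names and the dict "message" are invented, not any real kernel API): passing a message by pointer is fast precisely because sender and receiver share one object, which is also why it gives no isolation; copying restores isolation at a cost.

```python
import copy

# Toy sketch of the two message-passing styles under discussion.
message = {"cmd": "draw", "x": 10}

def receive_by_pointer(msg):
    # Receiver and sender share one object: zero copies, zero protection.
    msg["x"] = 99

def receive_by_copy(msg):
    # Receiver works on a private copy: the sender's data is safe.
    private = copy.deepcopy(msg)
    private["x"] = 99
    return private

receive_by_pointer(message)
print(message["x"])   # 99 -- the receiver scribbled on the sender's message

message["x"] = 10
receive_by_copy(message)
print(message["x"])   # 10 -- isolation preserved, at the cost of a copy
```

In AmigaOS terms, everything on the system behaved like `receive_by_pointer`; a microkernel with real address-space separation is forced into something like `receive_by_copy` (or page remapping), which is where QNX's engineering effort went.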
but QNX shows that even with copying, you can still have a fast operating
system [1].
Indeed. And of course at one point it looked like QNX would be the
basis for the next-gen Amiga OS:
http://www.amigahistory.plus.com/qnxanno.html
http://www.theregister.co.uk/1999/07/09/qnx_developer_pleas_for_amiga/
http://www.trollaxor.com/2005/06/how-qnx-failed-amiga.html
I think what made the Amiga so fast (even with a 7.1MHz CPU) was the
specialized hardware. You pretty much used the MC68000 to script
the hardware.
That seems a bit harsh! :-)
I spent some hours on the Urbit site. Between the obscure writing,
entirely new jargon and the "we're going to change the world" attitude,
it very much feels like the Xanadu Project.
I am not sure I'm the person to try to summarise it.
I've nicked my own effort from my tech blog:
I've not tried Urbit. (Yet.)
But my impression is this:
It's not obfuscatory just for the hell of it. It is obfuscatory, yes, but
for a valid reason: he doesn't want to waste time explaining or supporting
it. It's hard because you need to be very, very bright to fathom it;
obscurity is a user filter.
Red flag #1.
Point, yes.
But Curtis Yarvin is a strange person, and at least via his
pseudonymous mouthpiece Mencius Moldbug, has some unpalatable views.
You are, I presume, aware of the controversy over his appearance at
LambdaConf this year?
E.g.
http://www.inc.com/tess-townsend/why-it-matters-that-an-obscure-programming…
He claims NOT to be a Lisp type, not to have known anything much about
the language or LispMs, & to have re-invented some of the underlying
ideas independently. I'm not sure I believe this.
My view of it from a technical perspective is this. (This may sound
over-dramatic.)
We are so mired in the C world that modern CPUs are essentially C
machines. The conceptual model of C, of essentially all compilers, OSes,
imperative languages, &c. is a flawed one -- it is too simple an
abstraction. Q.v.
http://www.loper-os.org/?p=55
Ah yes, Stanislav. Yet another person who goes on and on about how bad
things are and makes oblique references to a better way without ever going
into detail, expecting everyone to read his mind (yes, I don't have a
high opinion of him either).
I gather.
He did, at one point, express fairly clearly what he wanted. The
problem is that he then changed his mind, went off on various tangents
concerning designing his own CPU, and seems to have got mired in that.
Reminds me of Charles Babbage and his failure to produce a finalised
Difference Engine, because at first he got distracted by tweaking it,
and later distracted by the Analytical Engine.
If he'd focussed on delivering the DE, it would have paid for the AE,
and the world would be a profoundly different place today.
And you do realize that Stanislav does not think highly of Urbit (he
considers Yarvin as being deluded [2]).
I do.
Honestly, I suspect some of this is down to NIH syndrome, some to
jealousy, and some to the fact that Yarvin has an explicit agenda
which Stanislav does not share.
Instead of bytes & blocks of them, the basic unit is the list.
Operations are defined in terms of lists, not bytes. You define a few
very simple operations & that's all you need.
Nice in theory. Glacial performance in practice.
Everything was glacial once.
We've had 4 decades of very well-funded R&D aimed at producing faster
C machines. Oddly, x86 has remained ahead of the pack and most of the
RISC families ended up sidelined, except ARM. Funny how things turn
out.
3.5 decades of investment in x86 has produced some amazingly fast,
capable chips.
If we'd had 4 decades of effort aimed at fast Lisp Machines, I think
we'd have them.
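To put some flesh on the "basic unit is the list" idea quoted above, here is a toy sketch of my own (not anyone's actual system): the whole list machinery grown from a single pairing primitive, the point being just how few operations you need.

```python
# Toy cons-cell primitives; everything else is built from these.
NIL = None

def cons(head, tail): return (head, tail)
def car(pair):        return pair[0]
def cdr(pair):        return pair[1]

def from_pylist(xs):
    """Build a cons chain from a Python list."""
    out = NIL
    for x in reversed(xs):
        out = cons(x, out)
    return out

def length(lst):
    n = 0
    while lst is not NIL:
        n, lst = n + 1, cdr(lst)
    return n

xs = from_pylist([1, 2, 3])
print(car(xs), car(cdr(xs)), length(xs))   # 1 2 3
```

It also hints at why the "glacial in practice" objection bites on stock hardware: every step is a pointer dereference, which is exactly what tagged, list-aware hardware was meant to make cheap.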
Kalman Reti has an interesting presentation on Lisp Machines on
Youtube -- if you've not seen it, it's linked from this relevant
discussion:
https://news.ycombinator.com/item?id=10255276
As he puts it, when Symbolics assembled its own single-chip processor,
it achieved a task of such complexity that the only comparable effort
he was aware of was DEC's development of the MicroVAX CPU. And
Symbolics achieved this with a single-digit-sized development team,
whereas DEC had a 3-digit sized team and took several years to do it.
The way LispMs worked, AIUI, is that the machine language wasn't Lisp,
it was something far simpler, but designed to map onto Lisp concepts.
I have been told that modern CPU design & optimisations & so on map
really poorly onto this set of primitives. That LispM CPUs were stack
machines, but modern processors are register machines. I am not
competent to judge the truth of this.
The Lisp machines had tagged memory to help with the garbage collection
and avoid wasting tons of memory. Yeah, they also had CPU instructions like
CAR and CDR (even the IBM 704 had those [4]). Even the VAX had QUEUE
instructions to add and remove items from a linked list. I think it's
really the tagged memory that made the Lisp machines special.
We have 64-bit machines now. GPUs are wider still. I think we could
afford a few tag bits.
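A hedged sketch of what "a few tag bits" buys you: steal the low bits of each machine word to record a runtime type, so the hardware (or the GC) can tell a fixnum from a pointer without any table lookup. The tag values below are invented for illustration, not any real machine's ABI.

```python
# Pointer tagging in miniature: low 3 bits of a word carry the type.
TAG_BITS = 3
TAG_MASK = (1 << TAG_BITS) - 1
TAG_FIXNUM, TAG_CONS, TAG_SYMBOL = 0, 1, 2   # illustrative tags only

def tag(value, t):
    return (value << TAG_BITS) | t

def untag(word):
    return word >> TAG_BITS

def tag_of(word):
    return word & TAG_MASK

word = tag(42, TAG_FIXNUM)
print(tag_of(word) == TAG_FIXNUM, untag(word))   # True 42
```

On a 64-bit word, giving up 3 bits this way still leaves 61 bits of payload, which is the sense in which we "could afford a few tag bits" today.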
If Yarvin's claims are to be believed, he has done 2 intertwined things:
[1] Experimentally or theoretically worked out something akin to these
primitives.
[2] Found or worked out a way to map them onto modern CPUs.
List comprehension, I believe.
This is his "machine code". Something that is not directly connected
or associated with modern CPUs' machine languages. He has built
something OTHER but defined his own odd language to describe it &
implement it. He has DELIBERATELY made it unlike anything else so you
don't bring across preconceptions & mental impurities. You need to
start over.
Eh. I see that, and raise you a purely functional (as in: pure
functions, no data) implementation of FizzBuzz:
https://codon.com/programming-with-nothing
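For anyone who doesn't follow that link, the trick it demonstrates is building numbers and arithmetic out of nothing but single-argument functions (Church encoding). A toy sketch of the flavour in Python, not the article's actual Ruby code:

```python
# Church numerals: a number n is "apply f, n times".
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
ADD  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Count how many times the Church numeral applies its function.
    return n(lambda k: k + 1)(0)

TWO   = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)
print(to_int(ADD(TWO)(THREE)))   # 5
```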
But, as far as I can judge, the design is sane, clean, & I am taking
it that he has reasons for the weirdness. I don't think it's
gratuitous.
We'll have to agree to disagree on this point. I think he's being
intentionally obtuse to appear profound.
It could be. I am not sure that I am competent to judge.
But I have in my time talked to a few truly brilliant minds, and
often, I find that they are obscure and hard to follow simply because
their minds move in leaps that lesser minds such as my own cannot
follow.
I have read comments that Yarvin's original description of Urbit was
so full of his own clever wordage that it was almost impossible to
follow, but now, as others work on the wiki pages, it's more
human-readable, if less inspiring.
So what on a LispM was the machine language, in Urbit, is Nock. It's a
whole new machine language layer, placed on top of an existing OS
stack, so I'm not surprised if it's horrendously inefficient.
Compare with Ternac, a ternary computer implemented as a simulation on
a binary machine. It's that big a change.
https://en.wikipedia.org/wiki/Ternac
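To give a feel for how small Nock's primitive layer is, here is a toy evaluator for a fragment of the published spec: nouns are ints (atoms) or nested 2-tuples (cells), and this covers only opcodes 0, 1 and 4 plus the "distribute" rule. A sketch, emphatically not a conformant interpreter.

```python
# Toy Nock fragment. Real Nock has a dozen reduction rules; this has four.
def slot(n, noun):
    # Tree addressing: 1 is the whole noun, 2/3 are its head/tail, etc.
    if n == 1:
        return noun
    parent = slot(n // 2, noun)
    return parent[0] if n % 2 == 0 else parent[1]

def nock(subject, formula):
    op, arg = formula
    if isinstance(op, tuple):                 # cell in head: distribute
        return (nock(subject, op), nock(subject, arg))
    if op == 0:                               # *[a 0 b] -> /[b a]
        return slot(arg, subject)
    if op == 1:                               # *[a 1 b] -> b
        return arg
    if op == 4:                               # *[a 4 b] -> +*[a b]
        return nock(subject, arg) + 1
    raise ValueError("opcode outside this toy fragment")

print(nock(42, (0, 1)))          # 42 -- fetch the whole subject
print(nock(42, (4, (0, 1))))     # 43 -- increment it
print(nock((5, 7), (0, 3)))      # 7  -- tail of the subject
```

Note what is absent: no registers, no linear memory, no byte widths. That is the sense in which it's a whole new machine-language layer rather than anything resembling x86 or ARM assembly.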
Then, on top of this layer, he's built a new type of OS. This seems to
have conceptual & architectural analogies with LispM OSes such as
Genera. Only Yarvin claims not to be a Lisper, so he's re-invented
that wheel. That is Hoon.
But he has an Agenda.
Popehat explained it well here:
https://popehat.com/2013/12/06/nock-hoon-etc-for-non-vulcans-why-urbit-matt…
Yeah, I read that. Urbit is a functional underpinning.
Well, yes, it is, but I personally am not terribly interested in what
it underpins. I'm just interested because, coming from a very
different basis, and working towards totally different goals, he seems
to have come to the same conclusions that I had -- and Stanislav
Datskovskiy has, among a very few others.
Such a convergence of ideas suggests to me that these powerful ideas
are in fact possibly correct.
Of course we need to burn the disc packs.
I don't understand this.
If you mean that, in order to get to saner, more productive, more
powerful computer architectures, we need to throw away much of what's
been built and go right back to building new foundations, then yes, I
fear so.
That said, a lot of today's productive code is in very high-level
languages such as Clojure, Python, Ruby and so on. I see no reason
that these could not be re-implemented on some hypothetical modern
Lisp-based OS, just as OpenGenera could run C, Fortran and so on.
Yes, tear down the foundations and rebuild, but on top of the new
replacement, much existing code could, in principle, be retained and
re-used.
I would be interested in an effort to layer a bare-metal-up LispM-type
layer on top of x86, ARM, &c. But Yarvin isn't here for the sheer
techno-wanking. Oh no. He wants to reinvent the world, via the medium
of encryption, digital currencies, &c. So he has a whole other layer
on top of Urbit, which is the REASON for Urbit -- a secure, P2P,
encrypted, next-gen computer system which happens to run on existing
machines & over the existing Internet, because that's the available
infrastructure, & while it's a horrid mess, it's what is there. You
can't ignore it, you can't achieve these grandiose goals within it,
so you just layer your new stuff over the top.
So does Stanislav.
I don't think he does.
AFAICT from extensive reading of loper-os.org, originally,
Datskovskiy's intent was to build a LispM-type OS on x86. Then he got
distracted by the potential of things like FPGAs and never came back.
Tragic, really.
And so did Faré Rideau with TUNES.
Ah, yes, I'd forgotten about that. Thanks for the reminder.
But at some point, the electrons need to meet the silicon, or else
it's just talk. Lots and lots of talk. Obfuscated talk at that.
Well, yes, up to a point. But there are signs that things are actually
getting done.
Besides Urbit, there are:
https://common-lisp.net/project/movitz/
http://interim.mntmn.com/
https://github.com/froggey/Mezzano
On which note, this discussion is interesting for the contribution
from the ex-Apple type:
https://www.reddit.com/r/lisp/comments/10gr05/lisp_based_operating_system_q…
--
Liam Proven • Profile: http://lproven.livejournal.com/profile
Email: lproven at cix.co.uk • GMail/G+/Twitter/Flickr/Facebook: lproven
MSN: lproven at hotmail.com • Skype/AIM/Yahoo/LinkedIn: liamproven
Cell/Mobiles: +44 7939-087884 (UK) • +420 702 829 053 (ČR)