Honeywell Multics? front panels. the inline photos in this message folks -smecc

Charles Anthony charles.unix.pro at gmail.com
Wed Mar 16 12:18:09 CDT 2016


On Wed, Mar 16, 2016 at 6:19 AM, Noel Chiappa <jnc at mercury.lcs.mit.edu>
wrote:

>     > From: Charles Anthony
>
>     > I desperately want to port Multics to a modern architecture
>
> Funny you should mention this! Dave Bridgham and my 'other' project (other
> than the QSIC) is something called iMucs, which is a Multics-like OS for
> Intel x86 machines.
>
> The reason for the x86 machines is that i) they have decent segmentation
> support (well, not the very latest model, which dropped it since nobody was
> using it), and ii) they are common and cheap.
>
> The concept is to pretty much redo Multics, in C, on the x86. The x86's
> segmentation support is adequate, not great. The Multics hardware had all
> those indirect addressing modes that the compiler will have to simulate, but
> the machines are now so freakin' fast (see simulated PDP-11's running at 100
> times the speed of the fastest real ones - on antique PC's), that shouldn't
> be a huge problem. We did identify some minor lossage (e.g. I think the
> maximum real memory you can support is only a couple of GB), but other than
> that, it's a pretty good match.
>
> The x86 even has rings, and the description sounds like it came out of the
> Multics hardware manual! Although I have to say, I'm not sure rings are what
> I would pick for a protection model - I think something like protection
> domains, with SUID, are better.
>
> (So that e.g. a cross-process callable subsystem with 'private' data could
> have that data marked R/W only to that user ID. In 'pure' Multics, one can
> move the subsystem/data into a lower ring to give it _some_ protection - but
> it still has to be marked R/W 'world', albeit only in that lower ring, for
> other processes to be able to call the subsystem.)
>
> It will need specialized compiler support for cross-segment routine calls,
> pointers, etc, but I have a mostly-written C compiler that I did (GNU CC is a
> large pile, I wouldn't touch it with a barge-pole) that I can re-purpose. And
> we'll start with XV6 to get a running start.
>
> There would be Standard I/O, and hopefully also something of a Unix
> emulation, so we could take advantage of a lot of existing software.
>
> Anyway, we've been focused on the QSIC (and for me, getting my 11's
> running), but we hope to start on iMucs in the late spring, when Dave heads
> off to Alaska, and QSIC work goes into a hiatus. Getting the compiler
> finished is step 1.
>
>
>     > but there is a profound road-block: the way that Multics does virtual
>     > memory is very, very different, and just does not map onto current
>     > virtual memory architecture.
>
> You refer here, I assume, to the segmentation stuff?
>
>
Slightly at cross purposes here; I was speaking of porting Multics; you are
speaking of writing a Multics-like OS. I was opining that I don't think
that porting would work due to Multics' reliance on very specific VM
features. I do think that it is entirely possible to write a Multics-like
OS on modern hardware.


>     > then you need to extend the instruction set to support the clever
>     > indirect address exceptions that allow directed faults and linkage
>     > offset tables
>
> I think the x86 has more or less what one needs (although, as I say, some of
> the more arcane indirect modes would have to be simulated). Although my
> memory of the details of the x86 is a bit old, and I've only ever studied the
> details of how Multics did inter-segment stuff (in Organick, which doesn't
> quite correspond to Multics as later actually implemented).
>
>
Organick can be confusing; Multics has some abstractions of the H/W, and
the book isn't always clear about the distinction between the abstractions
and the underlying hardware; also, Multics continued to evolve after the
book was written.



>     > Then there is subtle issue in the way the Multics does the stack ..
>     > This means that stack addresses, heap address and data addresses are
>     > all in separate address spaces
>
> Well, Multics had (IIRC) 4 segment registers, one for the code, one for the
> stack, one for the linkage segment, and I don't remember what the 4th one was
> used for. (I don't think Multics had 'heap' and 'data' segments as someone
> might read those terms; a Multics process would have had, in its address
> space, many segments to which it had R/W access and in which it kept data.)
>

8 pointer registers, which contained segment numbers and word and bit
offsets into the segment.

PL/I calling conventions reserved certain registers for the frame pointer,
etc.

8 index registers, which contained an offset into a segment.

The executable, stack, heap, and static data would all be in separate
segments.
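For anyone who hasn't seen that model, a rough C sketch of what one of those
pointer registers holds (field widths per my reading of the 6180's pointer
format - a 15-bit segment number, 18-bit word offset, and a bit offset into
the 36-bit word; the struct and helper names here are invented, not Multics
code):

```c
#include <stdint.h>

/* Sketch of a Multics-style pointer: a (segment, word, bit) triple,
 * not a flat address.  Field widths follow my understanding of the
 * 6180 pointer format; names are invented for illustration. */
typedef struct {
    uint32_t segno;   /* which segment (15 bits on the 6180, I believe) */
    uint32_t wordno;  /* word offset into the segment (18 bits)         */
    uint32_t bitno;   /* bit offset within the 36-bit word (0..35)      */
} its_ptr;

/* Advance a pointer by n words; the offset wraps within its 18 bits,
 * and the pointer never silently crosses into another segment. */
static its_ptr ptr_add_words(its_ptr p, uint32_t n)
{
    p.wordno = (p.wordno + n) & 0777777;  /* mask to 2^18 - 1 words */
    return p;
}
```

Arithmetic on such a pointer only ever moves the offset; getting at a
different segment means loading a different segment number, which is
exactly why flat-address C code doesn't port straight across.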



> But the x86 has that many, and more, so it should be workable, and
> reasonably efficient.
>
>     > I think it is possible to move them all into the same space
>
> You wouldn't want to put them all in the same segment - that's the whole
> point of the single-level-store architecture! :-) Or perhaps I'm
> misunderstanding your point, here?
>
It's been a long time since I looked at the x86 segment model, but my
recollection is that segments were mapped into the address space of the
process; that is not how the Multics memory model worked. Each segment is
in its own address space; any memory reference was, perforce, a segment
number and offset.

Again, cross purposes here. I am unconvinced that Multics could be ported
to that architecture; an interesting Multics-like operating system should
be possible, with the caveat that some things are going to have to be done
differently due to incompatibilities in the memory model.
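To make the distinction concrete: a Multics-like OS on flat hardware would
have to keep a per-process table mapping segment numbers to base and bound,
and route every reference through it as a (segment, offset) pair - roughly
what the 6180's descriptor segment did in hardware. A toy sketch (all names
invented; this is an illustration, not a design):

```c
#include <stddef.h>
#include <stdint.h>

/* One entry of a per-process descriptor table: where a segment lives
 * in the flat linear address space, and how long it is. */
typedef struct {
    uint8_t *base;    /* start of this segment in linear memory  */
    size_t   bound;   /* length in bytes; offset must be < bound */
} seg_desc;

/* Resolve (segno, offset) to a flat address, returning NULL on a
 * bound violation - the software analogue of a segment fault that
 * the Multics hardware raised in one step. */
static uint8_t *resolve(const seg_desc *dseg, size_t nsegs,
                        size_t segno, size_t offset)
{
    if (segno >= nsegs || offset >= dseg[segno].bound)
        return NULL;                       /* out-of-segment fault */
    return dseg[segno].base + offset;
}
```

Doing that check in software on every reference is where the "done
differently" comes in; the hardware did it for free.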



>     > Also, Multics stacks grow upward -- great for protection against
>     > buffer overrun attacks, but a pain in a modern architecture.
>
> Sorry, I don't follow that? Why does the stack growth direction make a
> difference? It's just a convention, isn't it, which direction is 'push'
> and which is 'pop'?
>
The classic buffer overrun attack overruns a buffer on the stack; since a
downward-growing stack puts the stack-allocated data below the return
address saved on the stack, it is possible for the overrun to change the
return address to the address of the executable code you just put in the
buffer; when the attacked routine returns, it starts executing your code
for you.

I'm not claiming that upward-growing stacks are better, just that they are
more resistant to one particular attack vector.
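A toy C model of that frame layout (simulated in an ordinary array - nothing
here smashes a real stack, and the names are invented for illustration):

```c
#include <stdint.h>
#include <string.h>

/* With a downward-growing stack, a local buffer sits at lower addresses
 * than the saved return address, so an unchecked copy into the buffer
 * walks upward into the return address.  We model the frame as:
 *   [ buf (BUF bytes) | saved return address ]   (higher addresses ->) */
enum { BUF = 8 };   /* size of the vulnerable local buffer */

static uintptr_t simulate_overrun(const unsigned char *payload, size_t n)
{
    unsigned char frame[BUF + sizeof(uintptr_t)];
    uintptr_t ret = 0xC0DE;                  /* pretend saved return address */

    memcpy(frame + BUF, &ret, sizeof ret);   /* ret lives just above buf */
    memcpy(frame, payload, n);               /* the classic unchecked copy */
    memcpy(&ret, frame + BUF, sizeof ret);   /* what 'return' would now use */
    return ret;                              /* clobbered if n > BUF */
}
```

With an upward-growing stack the saved return address sits below the buffer,
so the same overrun walks away from it - which is the resistance being
pointed at here.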

-- Charles


More information about the cctech mailing list