> From: Charles Anthony
> Slightly at cross purposes here; I was speaking of porting Multics; you
> are speaking of writing a Multics like OS. I was opining that I don't
> think that porting would work due to Multics reliance on very specific
> VM features.
Yes; my un-stated assumption was that the existing Multics code was so tied
to the peculiar Multics hardware (how many instances of "fixed bin(18)" do
you think there are in the Multics source :-) that it would be impossible to
run on any modern hardware except via (as you have so wonderfully done)
emulation. Hence the re-implementation route...
>> I think the x86 has more or less what one needs
Interesting note here: I was just reading Schell's oral history (a
_fascinating_ document), and it turns out he was a consultant to Intel on
the 286 (which architecture the later machines more or less copied exactly,
extending it to 32 bits). So I'm no longer surprised that the x86 has more or
less what one needs! :-)
>> Well, Multics had (IIRC) 4 segment registers, one for the code, one
>> for the stack, one for the linkage segment, and I don't remember
>> what the 4th one was used for.
I pulled down one of my many copies of Organick, and I had misremembered the
details (and of course Organick describes the 645, not the 6180, which was
subtly different). The code has its own set of registers; the PBR/IC (I was
probably thinking of the x86's CS here). Of the four pointer-register pairs
(which are effectively pointers to any segment, i.e. 'far' pointers, in a
sense), two are indeed to the stack and linkage segments, and the others can
be used for other things - one is typically a pointer to subroutine arguments.
> 8 pointer registers ..
> PL1 calling conventions reserved certain registers for frame pointer,
> etc.
Yes, I got the 6180 processor manual, and a bunch of other things, and there
had been significant changes since the 645 (which is the version I was
somewhat familiar with, from Organick). Of the 8 pointer registers in the
6180, I was only able to find the usage of several:
0 - arguments
4 - linkage
6 - stack frame
7 - stack/linkage header
I assume the others (most?/all?) were available for use by the compiler, as
temporaries.
One apparent big change in Multics since Organick was that the stack and
linkage segments had been combined into one (not sure why, as I don't think
having one less segment in the KST made much difference, and it didn't save
any pointer registers); the header in the combined stack/linkage segment
contained pointers to each in the combined segment.
>> You wouldn't want to put them all in the same segment - that's the
>> whole point of the single-level-store architecture! :-) Or perhaps I'm
>> misunderstanding your point, here?
> It's been a long time since I looked at the x86 segment model, but my
> recollection is that segments were mapped into the address space of the
> process; that is not how the Multics memory model worked. Each segment
> is in its own address space; any memory reference was, perforce, a
> segment number and offset.
In this last sentence, is that referring to Multics?
If so, that is exactly how the x86 _hardware_ works, but most x86 OS's (in
particular, all the Unix derivatives) don't really use segments, they just
stick everything in a limited number of segments (one for code, one for all
data - maybe one more for the stack, although perhaps they map those two to
the same memory).
> I am unconvinced that Multics could be ported to that architecture
No disagreement there - "fixed bin (18)"!
> an interesting Multics like operating system should be possible
Exactly.
> with the caveat that some things are going to have to be done differently
> due to incompatibilities in the memory model.
I'm not so sure - I think you may be thinking that the x86 model is something
other than what it is. It does indeed not have the infinite inter-segment
pointer chaining possible on Multics hardware (where a pointer in memory
points to another pointer which points to another pointer), but other than
that, it does seem to have most of what is needed.
In particular, it has local and global segment tables (indexed by segment
number), and the ability to load pointer registers out of those tables, and
the ability to have most (all?) instructions use particular pointer registers
(including segment selection), e.g. if the linkage segment was pointed to by
the ES register, there is an optional (per-instruction instance) modifier
which causes most (all?) of the normal x86 instruction set to operate on that
segment, instead of the primary data segment (pointed to by the DS register).
Of course, until we get into the details, we can't say positively, but after
reading the manuals, it seemed like it was doable.
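[Editorial sketch, to make the mechanism above concrete: an x86
protected-mode far pointer is a 16-bit segment selector plus an offset,
and the selector encodes an index into the GDT or LDT plus a requested
privilege level. The field layout below is the hardware's; nothing here
is from any actual Multics or iMucs code.]

```python
# Sketch: decoding an x86 protected-mode segment selector into its
# three fields. A far pointer is then just (selector, offset).

def decode_selector(selector):
    """Split a 16-bit x86 segment selector into index, table, RPL."""
    rpl = selector & 0x3          # requested privilege level (ring 0-3)
    table = (selector >> 2) & 1   # 0 = GDT (global), 1 = LDT (local)
    index = selector >> 3         # index into the descriptor table
    return index, table, rpl

# e.g. 0x2B is a typical ring-3 data selector on many x86 systems
index, table, rpl = decode_selector(0x2B)
```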
Noel
> From: Mattis Lind
> I was thinking of using a M9301 board to get a console emulator and
> some different bootstraps with the 11/05. But can I just put the M9301
> in the slot where the M930 normally goes?
> ...
> M9301 goes into MUD slots. But can it go into the slot where a M930
> normally sits?
I haven't personally looked into doing this in detail, so I can't give a
definitive answer, but your last question here makes alarm bells go off in my
head.
The M930 is designed to go in UNIBUS In/Out slots. These slots do have
different wiring from the A/B MUD slots. (For instance, UNIBUS In/Out slots
have _single_ pins assigned for BG4-7 and NPG, providing 'grant in' or 'grant
out' functionality, depending on if it's an In or Out slot. I don't recall
offhand what function/signal is on those pins in a MUD slot, but I'm pretty
sure it's not a grant!)
I would be fairly astonished if a device intended for a MUD slot would work
in a UNIBUS In/Out slot, and vice versa.
Noel
> From: Mouse
> As for buffer overruns, the point there is that a buffer overrun
> clobbers memory addressed higher than the buffer. If the stack grows
> down, this can overwrite stack frames and/or callers' locals.
Oh, right. Duhhhh! Buffers typically grow upward, no matter which direction
the stack grows. So the two directions for stack growth aren't purely a
convention.
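[A toy model of the point above, purely illustrative (the addresses and
layout are made up, not any real machine's): with a downward-growing
stack, the caller's frame sits at higher addresses than the callee's
buffer, so an overrun writing upward reaches the saved return address.]

```python
# Toy model: downward-growing stack, upward-growing buffer.
memory = [0] * 32

ret_addr_slot = 24               # caller's saved return address lives here
memory[ret_addr_slot] = 0x1234   # pretend return address

buf_base = 16                    # callee's 4-word buffer, BELOW the frame
overlong_input = [0xAA] * 10     # 10 words written into a 4-word buffer

for i, word in enumerate(overlong_input):
    memory[buf_base + i] = word  # writes climb upward...

clobbered = memory[ret_addr_slot] != 0x1234  # ...into the return address
```

With an upward-growing stack the caller's frame would sit below the
buffer, out of the overrun's path.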
Of course, in Multics, especially with AIM (Access Isolation Mechanism),
stack buffer attacks are much less dangerous. E.g. even without AIM, the
attacker can't load code into the stack, and return to it - generally the
stack segment had execute permission turned off.
And AIM really limits what 'bad' code can get up to. I keep ranting about how
it's pointless to expect programmers to write code without security flaws; it
needs to be built in to the low levels of the system (one of Multics' many
lessons - it wasn't _really_ secure until the 6180 moved the ring stuff into
hardware, instead of simulating it in software, as on the 645). And so as
long as we continue to allow Web pages to contain 'active' content (i.e.
code), so that random code from all over the planet gets loaded into our
computers and run, browsers will never be secure; they need to be run in an
AIM box.
Noel
> From: Mouse
>> simulating a segmented machine on a non-segmented machine, i.e. one
>> with large unidirectional addresses (segmented being a
>> bi-directionally addressed machine) - [...]
> Hm, "unidirectional" and "bidirectional" are terms I'm having trouble
> figuring out the meaning of here. You seem to be using them as,
> effectively, synonyms for "non-segmented" and "segmented"
Yes.
> but I don't see any way in which directionality makes any sense for
> either, so I can only infer I'm missing something.
Imagine a graphic model of the memory in non-segmented, and segmented,
machines.
The former can be modeled as a linear array of memory cells - hence
'uni-directional'. The latter can be modeled by a two-dimensional array -
segment number along one axis, word/byte within segment on the other - hence
'bi-directional'.
Maybe 'uni-axis' or 'bi-axis' would have been a bit more technically correct,
but 'uni-directional' and 'bi-directional' were the first terms that came to
mind - and I didn't think of how they could be confusing (in terms of their
common meanings, when used for flows). Sorry!
Noel
PS: I'm trying to remember all my thoughts about simulating a segmented
memory with a large flat address space. One was that one can prevent pointer
incrementing from 'walking' from one segment into another by leaving a 'guard
band' of a few empty pages between each 'segment'. However, this points out
an issue with such simulation: one cannot easily grow a 'segment' once
another 'segment' has been assigned space above it.
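[A minimal sketch of the guard-band idea from the PS; the page size,
band width, and segment-table representation are all arbitrary choices
for illustration.]

```python
# Sketch: simulating segments in one flat address space, leaving a
# guard band of unmapped pages between them so an incremented pointer
# faults before walking into the next 'segment'.
PAGE = 4096
GUARD_PAGES = 4

class FlatSegments:
    def __init__(self):
        self.next_free = 0
        self.bases = {}           # segment number -> flat base address

    def allocate(self, segno, size):
        """Place a segment, then leave a guard band above it."""
        self.bases[segno] = self.next_free
        pages = -(-size // PAGE)  # round size up to whole pages
        self.next_free += (pages + GUARD_PAGES) * PAGE
        return self.bases[segno]

    def to_flat(self, segno, offset):
        return self.bases[segno] + offset

mem = FlatSegments()
a = mem.allocate(1, 10000)        # 3 pages of data + 4 guard pages
b = mem.allocate(2, 100)
```

The limitation in the PS shows up directly: once segment 2 is placed,
segment 1 can only grow into its own guard band.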
Hi all,
I informed the list when I left the Living Computer Museum, so it seems
appropriate to tell you where I've landed. My new employer was in the news
this week:
http://arstechnica.com/science/2016/03/behind-the-curtain-ars-goes-inside-b…
The second photo is the view from where I ate lunch yesterday. The fun
literally never stops.... living the dream! -- Ian
PS: of course I'm finishing my doctorate - I'm kind of vested in it by now.
:-)
--
Ian S. King, MSIS, MSCS, Ph.D. Candidate
The Information School <http://ischool.uw.edu>
Dissertation: "Why the Conversation Mattered: Constructing a Sociotechnical
Narrative Through a Design Lens
Archivist, Voices From the Rwanda Tribunal <http://tribunalvoices.org>
Value Sensitive Design Research Lab <http://vsdesign.org>
University of Washington
There is an old Vulcan saying: "Only Nixon could go to China."
Does anyone have, or has anyone used, one of these machines?
Specifically the M10/M20 models, with 5.25" disks?
I have a vendor box here with manual and CP/M 2.2 boot disk for the
if800 and I've been trying to make a usable[1] image of the disk,
currently with the Kryoflux and their dtc conversion tool. I sent the
flux reads of the disk off to the KF team and they found it
interesting enough to study, but there is precious little
documentation out there about this machine, much less its disk format.
Looking at the scatter plots of the magnetic flux on the disk, I can
see that it's 40 track and double sided. Converting the dump to a
DS/DD MFM disk image yields many warnings and errors, but also a file
with plenty of discernible strings, so that's at least on the right
track. Images of reads of the two sides done separately show
alternating fragments of the strings of the full read, telling me that
it is a contiguous volume using both sides and not two single-sided
volumes.
One ad I found (mostly in Japanese) suggests that the if800 drives
are 280K. That's an odd number (to me) for a 5.25" disk.
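[Editorial note: 280K does fall out of at least one conventional
geometry. This sketch just enumerates candidates consistent with the
observed 40 tracks and 2 sides; it is pure speculation about the
if800's actual sector size and count.]

```python
# Enumerate geometries yielding exactly 280 KiB with 40 tracks, 2 sides.
TARGET = 280 * 1024
candidates = []
for sector_size in (128, 256, 512, 1024):
    for sectors in range(1, 33):
        if 40 * 2 * sectors * sector_size == TARGET:
            candidates.append((sectors, sector_size))
```

14 sectors of 256 bytes per track would be a fairly ordinary MFM DD
layout, which may explain the 'odd'-looking 280K total.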
-j
[1] I have neither the real machine nor an emulator to use them, so
this is mostly just an academic exercise in learning about disk
formats and disk imaging, for now. But AIUI, if the disk's attributes
are known, it should be browsable with a tool like cpmls from the
CPMTools package.
Interestingly(?) both my RK05 and RK05J had an assembly of three (not four)
2/3-AA NiCd cells for retract, completely decayed of course. I replaced them
with 3 discrete tagged NiMh AA cells (plenty of headroom) soldered and
shrinkwrapped. They work fine, lots of retract force. The clip which holds
them is shaped for only 3 cells so it seems as though there were at least 2
variants. I read the circuit diagram and could see that it would make little
difference whether it was NiCd or NiMh (or for that matter 3 or 4 cells). I
think DEC were a bit overgenerous with the trickle current (though IIRC
NiCds were rather leakier back then).
> From: tony duell <ard at p850ug1.demon.co.uk>
>
> The DEC RK07 (and I assume RK06) used 8 1/2 AA cells in a pack (like
> 2 RK05 retract batteries in series). When I replaced those, I used 2 of
> the cordless telephone batteries (that have been recommended for
> the RK05) in series.
>
> -tony
> From: Charles Anthony
> I desperately want to port Multics to a modern architecture
Funny you should mention this! Dave Bridgham and my 'other' project (other
than the QSIC) is something called iMucs, which is a Multics-like OS for
Intel x86 machines.
The reason for the x86 machines is that i) they have decent segmentation
support (well, not the very latest model, which dropped it since nobody was
using it), and ii) they are common and cheap.
The concept is to pretty much redo Multics, in C, on the x86. The x86's
segmentation support is adequate, not great. The Multics hardware had all
those indirect addressing modes that the compiler will have to simulate, but
the machines are now so freakin' fast (see simulated PDP-11's running at 100
times the speed of the fastest real ones - on antique PC's), that shouldn't
be a huge problem. We did identify some minor lossage (e.g. I think the
maximum real memory you can support is only a couple of GB), but other than
that, it's a pretty good match.
The x86 even has rings, and the description sounds like it came out of the
Multics hardware manual! Although I have to say, I'm not sure rings are what
I would pick for a protection model - I think something like protection
domains, with SUID, are better.
(So that e.g. a cross-process callable subsystem with 'private' data could
have that data marked R/W only to that user ID. In 'pure' Multics, one can
move the subsystem/data into a lower ring to give it _some_ protection - but
it still has to be marked R/W 'world', albeit only in that lower ring, for
other processes to be able to call the subsystem.)
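[For comparison, a much-simplified sketch of the ring-bracket rule
being discussed. Real Multics combines brackets (r1 <= r2 <= r3) with
per-user ACLs and gate segments; the bracket values below are made up.]

```python
# Simplified model of Multics ring brackets for a segment:
# write permitted from rings <= r1, read from rings <= r2.

def can_write(ring, brackets):
    r1, r2, r3 = brackets
    return ring <= r1

def can_read(ring, brackets):
    r1, r2, r3 = brackets
    return ring <= r2

# Data of a ring-3 subsystem: to be usable by callers in ring 3 it must
# be writable in ring 3 by ANY process in that ring -- the weakness a
# per-user protection-domain/SUID model would avoid.
brackets = (3, 3, 3)
```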
It will need specialized compiler support for cross-segment routine calls,
pointers, etc, but I have a mostly-written C compiler that I did (GNU CC is a
large pile, I wouldn't touch it with a barge-pole) that I can re-purpose. And
we'll start with XV6 to get a running start.
There would be Standard I/O, and hopefully also something of a Unix emulation,
so we could take advantage of a lot of existing software.
Anyway, we've been focused on the QSIC (and for me, getting my 11's running),
but we hope to start on iMucs in the late spring, when Dave heads off to
Alaska, and QSIC work goes into a hiatus. Getting the compiler finished is
step 1.
> but there is a profound road-block: the way that Multics does virtual
> memory is very, very different, and just does not map onto current
> virtual memory architecture.
You refer here, I assume, to the segmentation stuff?
> then you need to extend the instruction set to support the clever
> indirect address exceptions that allow directed faults and linkage
> offset tables
I think the x86 has more or less what one needs (although, as I say, some of
the more arcane indirect modes would have to be simulated). Although my
memory of the details of the x86 is a bit old, and I've only ever studied the
details of how Multics did inter-segment stuff (in Organick, which doesn't
quite correspond to Multics as later actually implemented).
> Then there is subtle issue in the way the Multics does the stack ..
> This means that stack addresses, heap address and data addresses are
> all in separate address spaces
Well, Multics had (IIRC) 4 segment registers, one for the code, one for the
stack, one for the linkage segment, and I don't remember what the 4th one was
used for. (I don't think Multics had 'heap' and 'data' segments as someone
might read those terms; a Multics process would have had, in its address
space, many segments to which it had R/W access and in which it kept data.)
But the x86 has that many, and more, so it should be workable, and reasonably
efficient.
> I think it is possible to move them all into the same space
You wouldn't want to put them all in the same segment - that's the whole
point of the single-level-store architecture! :-) Or perhaps I'm
misunderstanding your point, here?
> Also, Multics stacks grow upward -- great for protection against buffer
> overrun attacks, but a pain in a modern architecture.
Sorry, I don't follow that? Why does the stack growth direction make a
difference? It's just a convention, isn't it, which direction is 'push'
and which is 'pop'?
Noel
> From: Mouse
> Well, what was the largest virtual memory space available on various
> machines?
I have thought, on occasion, about simulating a segmented machine on a
non-segmented machine, i.e. one with large unidirectional addresses (segmented
being a bi-directionally addressed machine) - in fact, I think it was in the
context of the VAX that I went through this mentally.
I don't recall any more the exact outcome of my mental design processes (it
was a _long_ time ago), but I have this vague recollection that it could sort
of work, but that it would be ugly (as in, the compiler would have to simulate
cross-segment pointers, etc - they don't look just like normal pointers as
there has to be provision for binding them when first used, etc).
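[A sketch of what such a simulated cross-segment pointer might look
like. The representation and segment table are invented; a real
compiler would emit this binding logic inline in generated code.]

```python
# Sketch: a simulated cross-segment pointer bound (resolved to a flat
# address) on first use, standing in for the linkage-fault handling
# that Multics hardware did with indirect words.
segment_bases = {"stack": 0x10000, "linkage": 0x20000}

class FarPointer:
    def __init__(self, segname, offset):
        self.segname = segname
        self.offset = offset
        self.flat = None                  # unbound until first use

    def resolve(self):
        if self.flat is None:             # bind on first use
            self.flat = segment_bases[self.segname] + self.offset
        return self.flat

p = FarPointer("linkage", 0x40)
addr = p.resolve()                        # binding happens here
```

This is why such pointers "don't look just like normal pointers": every
dereference must check whether binding has happened yet.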
> Now that 64-bit address space is becoming common
Large unidirectional machines do have one advantage, which is that the
canonical flaw of single-level-storage on a segmented machine is that really
large objects don't fit in a single segment, unless you have ridiculously
large addresses (e.g. 80 bit). When simulating segments on a unidirectional
machine, one can of course make any individual segment as large as one likes
- up to the total size of the unidirectional machine's address space.
Noel