Hi,
I am cleaning out some items from my 'treasures' (as my wife calls them!).
I have 3 boards available, without cables, which were originally from an
RA81 circa 1987. I have no way of testing them and I am offering them as-is
where-is, etc.
I want the boards to go to someone who needs them to get an RA81 back
to a running state, or to an established member of the list who can be
entrusted with the boards for restoring an RA81 (Sellem?)
The boards are located in a little town very near Calgary Alberta Canada.
I can arrange for shipping if required. Shipping costs will be required in
advance.
Board 1:
HDA PCB4
5015287B 5415288
Board 2:
5015246-00-C1-P2 5415247
Board 3:
5415251 5015250-00-C1-P2
RA81 SERVO CONTROL
KA865446004 1/0 A 24MAR87
70-19045-01 8654460004
Board 2 has all socketed chips removed. Boards 1 & 3 look to be complete.
If someone has a copy of OpenVMS for an Alpha 300XL, I would be
interested. I am in the process of applying for a DECUS membership and then
a license PAK. I do not have any media. :-(
--barry
On 8/7/06, Zane H. Healy <healyzh at aracnet.com> wrote:
> My point was that VAX/VMS became OpenVMS which runs on both VAX and
> Alpha with v6.0, IIRC. The newest version to run on the VAX is 7.3.
> I've no idea if they're ever going to do one of the 8.x versions for
> the VAX. The roadmap on the HP website might give a clue.
Ah... I missed the semantic distinction. I'm with you now.
-ethan
>Isn't SunOS 4.1.4 rather insecure in this day and age? Not to mention
>harder to obtain.
I would be very careful before putting it on a publicly-accessible
network, but there's no indication that that will be happening here.
A bit more difficult to come by, yes; but wasn't there a discussion here
a bit ago about mating the computer with a representative operating
system when possible, to maintain period unity?
Segin, now you're trolling. Do you suspect that we can't list the
shortcomings of VAX just as easily as the shortcomings of x86? They
pulled a lot over from the PDP-11 that should have been reimplemented
for a 32-bit arch...
On the positive side, what is the most perfect computer architecture + implementation people have come across here?
Tell us why, especially if it's something like PERQ or Acorn RISC/pc that is not common outside of a limited geographic area.
>
>Subject: IDE doesn't suck! (was: Macs and PCs vs workstations)
> From: Roger Merchberger <zmerch-cctalk at 30below.com>
> Date: Mon, 07 Aug 2006 10:56:09 -0400
> To: "General Discussion: On-Topic and Off-Topic Posts" <cctalk at classiccmp.org>
>
>Rumor has it that Zane H. Healy may have mentioned these words:
>>At 4:22 PM -0500 8/5/06, Scott Quinn wrote:
>>
>>>On the Macs: PIDE is horrible for doing more than one thing at once. Period.
Sounds more like a driver problem, as in only one task at a time.
>>
>>Nothing new there, is there any platform where it isn't?
>
>The IDE interface for my CoCo *Rocks!!!*
>
>=-=
>
>Well, you did ask... ;-)
>
>Laterz,
>Roger "Merch" Merchberger
>
Generally speaking, IDE was as good as the hardware hosting it. More often
than not, better. At its worst it was as good as a WD1003 and a fast MFM
drive (Quantum D540). Then again, early IDE was exactly that!
Z80/10MHz, enhanced CP/M (ZRDOS), and IDE... Zoom zoom!
Allison
On Aug 7 2006, 15:14, Roy J. Tellason wrote:
> Well, that one is a 10 MHz part while the 68000 I have is I think an
> 8 MHz part, which is a bigger increase than that, but the clock driving
> the thing would need to be changed too, and I don't think they have
> exactly the same pinout, though it's been a really long time since I
> looked at that material.
They do have the same pinout, and a 68010 is a drop-in replacement for
a 68000. Except they stopped making it ages ago, and it's easier to
get a 68000 than a 68010. The 68010 fixed the problem of not being
able to recover from an interrupted instruction, handles double bus
faults correctly, added a few vectors, made the exception vectors
relocatable, and fixed the problem that getting back to supervisor mode
from user mode wasn't privileged in the original 68000 (which made a
nonsense of any protection schemes). However, it has some other minor
differences that mean it may not be 100% software compatible with a
68000, for example the revised exception stack frames. I'd have to dig
out some very old course notes to tell you more.
--
Pete Peter Turnbull
Network Manager
University of York
On Sun, 06 Aug 2006 22:41:36 -0400 and Sun, 06 Aug 2006 22:47:23
-0400, Ray Arachelian <ray at arachelian.com> wrote:
> Absolutely. The 68000 is an amazing little chip when you compare
> it to
> the 6800 and 6502 that it evolved from. You can sort of get a
> feel for this from this journal:
>
> It's worth a read to get a feel of the excitement the 68K caused
> back in
> the day when it was introduced. Some of it is very funny, there's one
> issue where the author compares an Intel FPU to the 68000 running
> software floating point routines, and guess what, the 68000
> actually ran
> FASTER! :-)
>
>
The excitement was for a good part due to the excellent marketing on
the part of Motorola.
> Tony Duell wrote:
>> That very much depends on the CPU. If the CPU is many chips (not a
>> single
>> package) with microcode in PROMs, there's nothing to stop you
>> desoldering
>> said PROMs and reading them out. I've done that, then written a
>> disassembler for the microcode instructions and worked out what was
>> going on.
>>
> True, but you don't find CPU's made of discrete logic these days,
> so it
> makes it very difficult to get at the microcode.
In the early '80s we started development of a process controller and
evaluated the 68K as well as the National 32K processor, which had
just been released. The NS 32K processor was released with an FP unit
which we required while Moto didn't get around to getting one to
market for another year - the 68K on software just didn't hack it. We
went with the NS 32K and were quite pleased with the beast.
It was a true 32-bit processor which came in three flavors: 8-bit,
16-bit, and 32-bit external busses - you could move the code from one to
the other with almost no change. The instruction set was extremely
orthogonal and close to those of the VAX sans the three operand
instructions. The only flaw we noted was that the entire machine
state was not saved on interrupt/trap which made a couple of features
useless.
The floating point processor was not a coprocessor a la the 68K, but
an external instruction execution module. When the FPU was attached,
the FP instructions became active and you worked between internal
register pairs. IIRC they also made a communication processor that
implemented part of the unimplemented instruction set. I played with
my own external processor to implement macro instructions that were
used to optimize execution time.
When Apple was evaluating processors for the Mac, an internal source
told me that NS marketing only made a half*** effort to sell the
part. Unfortunately NS did their normal thing when it came to
processors and shot themselves in the foot (see PACE - both
Studebaker and NS screwed those up) by trying to keep it all to
themselves. Consequently, there was little third party support for
the chip. I believe it to be a better implementation than the 68K.
CRC
>I'm afraid that you don't understand the issues. To produce an accurate
>archive of a tape, one must preserve things such as block sizes, labels,
>and filemarks. There are packages to do this, but tar isn't one of them.
>While tapes are a one-dimensional medium, there's more variation in
>physical structure than encountered on the typical disk.
>
>Cheers,
>Chuck
I'm going to be horribly pragmatic here and assert that, while it is nice
to have an image of the tape (using either dd or Tapeutils:
http://www.brouhaha.com/~eric/software/tapeutils), it is more important
to have the software on the tape in a usable backup. With the Sun tapes,
that means that you can mostly do tar (with the exclusion of the
standalone copy, munix, and miniroot, which would need to be dd'd off),
and it is better than having the tapes sit around in the garage and fall
apart. Tar/cpio-created tapes are nice in that the dd'd files can be read
with tar -f (or cpio), making them usable as-is. inst-format tapes and
wbak/rbak tapes don't have this luxury, so I like to do the extraction to
files and "rebinding" into a tardist or disk-file wbak archive so they
can be used even if you don't have a tape drive. IOW, both have their
uses, but it is infinitely better to have one than to have none.
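The "dd'd files can be read with tar -f" point can be sketched with
ordinary files standing in for the tape (all file names here are
hypothetical; a real drive would be read from something like /dev/nst0):

```shell
# A small tar archive stands in for one file on the tape.
mkdir -p demo && echo "hello" > demo/readme.txt
tar -cf original.tar demo

# Copy it with dd, as one would pull a raw file off a tape device.
dd if=original.tar of=tapefile.img bs=512

# The dd'd image is usable as-is: tar lists (or extracts) it directly.
tar -tf tapefile.img
```

An inst- or wbak-format file pulled off the tape the same way would not
be readable like this, which is the reason for re-packing those into a
tardist or disk-file wbak archive.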
Anyone remember AMC? A co-op venture of AMD and Siemens (and maybe
another firm) to push the Z8000 with development tools? I still have a
binder full of documentation for their assembler.
--
Yup. Have a fair amount of material on bitsavers from Philip Freidin
from them, along with some software. Friends back home developed an HVAC
control system using the AMC tools, and when I moved to the Bay Area, I
met one of the last people who was working on software at AMC.