Anyone happen to know:
a) The correct voltage* for the supply to the HDA (ST506),
b) Whether the comms cable is wired straight through? (This for an Apple ///
Profile - I'm not sure if the Lisa variant is different in this regard). I'm
pretty sure I just used a straight-through cable many years ago, but
confirmation would be nice.
* I'm getting +16VDC on the one here (with the logic supply given a suitable
dummy load so that it produces a regulated +5VDC). I don't particularly want
to toast the HDA if that's supposed to be something like 12VDC... possible
that the supplies are essentially separate I suppose, and the HDA output needs
its own load before it'll regulate properly.
I thought there was a service manual for the Profile (Lisa or /// I'm not
sure) floating around a decade or so ago - doesn't seem to show up under
Google though and it's not on bitsavers :-(
cheers
Jules
On 05/10/2007, Zane H. Healy <healyzh at aracnet.com> wrote:
> At 8:41 PM -0700 10/4/07, Cameron Kaiser wrote:
> >This is semi-OT, but if you use an OS X capable Power Mac to run legacy Mac
> >software, 10.5 apparently no longer supports Classic even on PowerPC machines.
> >
> > http://www.lowendmac.com/mail/mb07/0716.html#2
> >
> >This is a shame, since a lot of early Mac software will surprisingly still
> >run on my G5 running Tiger. In fact, I use an old System 6-era version of
> >Caere OmniPage for simple OCR tasks since it's so unbelievably fast by
> >comparison.
>
> <bad word><bad word><bad word><bad word><bad word><bad word><bad
> word><bad word><bad word><bad word><bad word><bad word><bad word><bad
> word><bad word><bad word><bad word><bad word><bad word><bad word><bad
> word>
>
> OK, that <bad word> STINKS! Just the night before last I had to fire
> up ClarisDraw as even though I own copies of more modern drawing
> app's, nothing works as well. First in 10.4 they dropped support for
> classic AppleTalk breaking things for me, now this. Well, we may
> just stay on 10.4.x for the next few years. As a result of the
> Appletalk issue I didn't upgrade to 10.4 till earlier this year, even
> though I bought it the day it came out.
Yep. Although to be fair, I have few Macs so old that they can't run
MacOS 8.1, and it networks happily with Tiger boxes over TCP/IP.
But yes, I entirely agree.
OTOH, Apple's market now is x86 boxes and its competition is Windows.
A Macintel can dual-boot to Windows or run it in a fairly seamless VM.
With the best will in the world, to most people - including, I must
reluctantly admit, me - that is (or would be) vastly more use than
running MacOS 9 in a VM!
I happily run Linux, but if money were no object, I'd be on OS X on a
Mac Pro, with Wine for a few legacy Windows apps - or possibly
Parallels Desktop, as it's cheap and it runs Windows seamlessly - i.e.
Windows apps' windows merged with OS X apps on the same shared
desktop. On a quad-core or octo-core machine with 4G of RAM, I could
afford a copy of W2K in a VM. I'd never notice the load, I'm sure. The
thing that saddens me is that if you do this, you suddenly have to do
all that tired old dancing around with legions of MS updates and
antivirus and antispyware and so on in Windows on your nice clean safe
Mac.
--
Liam Proven · Profile: http://www.linkedin.com/in/liamproven
Email: lproven at cix.co.uk · GMail/GoogleTalk/Orkut: lproven at gmail.com
Tel: +44 20-8685-0498 · Cell: +44 7939-087884 · Fax: +44 870-9151419
AOL/AIM/iChat: liamproven at aol.com · MSN/Messenger: lproven at hotmail.com
Yahoo: liamproven at yahoo.co.uk · Skype: liamproven · ICQ: 73187508
I have a specific question, but a general concern about how long a
backup can still be read.
I realize that all of the hardware and software is less than 10 years old,
but the software files being backed up are often 30 years old.
Environment: Pentium III (running W98SE) or Pentium 4
(running WXP) and using GHOST 7.0 as the backup software
with C: hard drive under FAT32 file structure. An image
file backup made of the C: drive is eventually written to
a DVD and, within a period of between one second and one
day, another copy of the DVD file is made back to a spare
hard drive thereby proving that the DVD can be read. In
addition, the MD5 value of the image file was kept and
written to the DVD and compared with the MD5 value of the
file copied from the DVD (since it takes less time to
produce the MD5 value from a hard disk file than from
an identical DVD file).
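The hash-and-compare step described above is easy to script; here is a quick Python sketch of the idea (file names are hypothetical, and this is just an illustration of the workflow, not the exact procedure used):

```python
import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    """Compute the MD5 digest of a file, reading in 1 MB chunks
    so even a multi-gigabyte image file fits in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copy(original, copy):
    """True if the copy read back (e.g. from the DVD) matches
    the original image file bit for bit."""
    return md5_of_file(original) == md5_of_file(copy)
```

Keeping the hex digest alongside the image, as described, means later checks only need to re-hash the copy and compare strings.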
Question: How long a period of time should I wait to be
sure that infant mortality of the image file on the DVD
is no longer a factor? Is one second sufficient? i.e.
at present as soon as the DVD is finished being burned
and the DVD is ejected, I copy all of the files back to
a vacant partition on the hard drive.
Question: I am using Fujifilm DVD-R 16X 4.7 GB blanks
with an old Pioneer 105 DVD burner. I have not experienced
any problems over the past 5 years and I plug the power
into the DVD burner ONLY when it is being used - which
is 3 or 4 times a year. Should I be using a second
DVD drive which only reads a DVD to check on the files
which have been burned to the DVD blank? How often should
I be reading the files on the DVD to be sure that the
files can still be read? Is 5 years the length of time
before I should duplicate the files on an old DVD? Or
perhaps sooner or perhaps longer? Since I make and keep
a monthly backup image file of the C: drive (3 DVDs a year
with 4 monthly backup files of 1 GB each), losing one
backup image file would probably not be critical.
Question: The dual layer DVD drives and blanks (which
hold more than 8 GB) seem to be more than double the cost.
Are they just as reliable at this point as the single
layer drives and blanks which hold only 4.7 GB?
Sincerely yours,
Jerome Fine
--
If you attempted to send a reply and the original e-mail
address has been discontinued due to a high volume of junk
e-mail, then the semi-permanent e-mail address can be
obtained by replacing the four characters preceding the
'at' with the four digits of the current year.
Am I correct that the only "new" DSDD 5.25" floppies would be "New Old Stock".
Zane
--
| Zane H. Healy | UNIX Systems Administrator |
| healyzh at aracnet.com (primary) | OpenVMS Enthusiast |
| MONK::HEALYZH (DECnet) | Classic Computer Collector |
+----------------------------------+----------------------------+
| Empire of the Petal Throne and Traveller Role Playing, |
| PDP-10 Emulation and Zane's Computer Museum. |
| http://www.aracnet.com/~healyzh/ |
On Fri, 2007-10-05 at 12:00 -0500, "Jerome H. Fine"
<jhfinedp3k at compsys.to> wrote:
> Johnny, I believe that your comments are very clear and
> they address many of the aspects which concern the way
> in which MSCP handles read / write requests in both small
> systems (single user systems like RT-11 and even TSX-PLUS
> since the device driver still handles one request at a time)
> and large systems (such as RSX-11 and especially VMS).
Thank you. And yes, there might be a big difference between systems like
RT-11, and larger ones. I don't know enough of the innards of RT-11
device drivers to tell how it is doing, nor how programs might utilize
the driver.
> (NOTE that all of the following comments are with respect
> to running programs on a 750 MHz Pentium III with 768 MB
> of RAM using W98SE as the operating system, ATA 100 disk
> drives of 160 GB and Ersatz-11 as the application program
> running a mapped RT-11 monitor, RT11XM. While I have very
> good reason to believe that the same relative results will
> be obtained on a Pentium 4 under WXP, again using Ersatz-11
> running RT-11, I have done almost no testing at this time.
> OBVIOUSLY, comparison with real DEC hardware of a PDP-11
> and a VAX can only be done on a relative basis since HD:
> exists ONLY under Ersatz-11. In addition, since the speed
> of disk I/O on the Pentium III (even more so on a Pentium 4)
> is so much faster (more than 100 times) than the transfer
> rate on a SCSI Qbus or Unibus, the comparison could be very
> misleading since CPU time vs I/O transfer time might become
> much more significant. For just one example, when the BINCOM
> program that runs on a real DEC PDP-11/73 is used to compare
> 2 RT-11 partitions of 32 MB on 2 different ESDI Hitachi hard
> drives (under MSCP emulation with an RQD11-EC controller),
> it takes about the same time (about 240 seconds) to copy
> an RT-11 partition and to compare those same 2 partitions.
> Under Ersatz-11, the copy time is about 2 1/4 seconds and the
> BINCOM time is about 6 1/2 seconds using MSCP device drivers.
> When the HD: device driver is used under Ersatz-11, the times
> are about 1 second for the copy and about 6 seconds for the
> BINCOM - I have not bothered to figure out why the reduction
> is only 1/2 second instead of 1 1/4 seconds.)
There is a big problem with using E11 here, since it queues and
optimizes disk I/O as well, and so does the underlying OS also in the
end. So it is tricky to do much evaluation of the controllers as such.
You basically see what is best under E11.
> However, I believe that my comments on the efficiency of
> using the MSCP device driver under RT-11 vs the efficiency
> of using the HD: device driver probably need to be analysed
> much more closely. The other aspect of the analysis that
> is missing is the efficiency with which Ersatz-11 implements
> the MSCP emulation as opposed to the HD: "emulation". It
> is unlikely, but possible, that Ersatz-11 has much higher
> overhead for MSCP since the interface is so much more
> "intelligent", whereas the HD: interface only needs to transfer
> the data to the user buffer based on the IOPAGE register
> values.
Analysis is always a good thing. And yes, the implementation of the
respective emulation in E11 plays a big part.
> A bit more information may help.
>
> (a) The HD: device driver can be used BOTH with and without
> interrupts being active after the I/O request is issued.
> It makes no difference under W98SE since the I/O request
> is ALWAYS complete before even one PDP-11 instruction is
> executed. This result also applies to the MSCP device
> driver which I could modify to see if it might make a
> difference in efficiency. However, when I attempt to
> compare the copy of a 32 MB RT-11 partition with HD:,
> the time difference between using interrupts and not
> using interrupts is so negligible that it is almost
> impossible to measure the total time difference to copy
> the 32 MB RT-11 partition using the available PDP-11
> clock which measures in 1/60 of a second. Since there
> are 60 ticks in a second, the accuracy is better than
> 2% over 1 second which seems adequate to determine on
> an overall basis if using interrupts vs no interrupts
> makes a significant difference. Obviously if there is
> no significant time difference at the 2% level (of one
> time tick of 1/60 of a second), then avoiding the extra
> RT-11 code to handle the interrupt does not play a
> major role in the increased efficiency of HD: vs MSCP.
> I conclude that would be the same for MSCP as well.
An interrupt handler that took anywhere near 1/60 of a
second is so broken it should be shot.
Basically, you cannot measure anything with a clock of that low
precision.
Also, you need to check for I/O completion before doing the next
operation. If you skip that, you will lose. So the question then is: is
it acceptable to be in a tight loop waiting for I/O to complete, or do
you want the machine to be able to do something else meanwhile?
Let us instead look at this from a theoretical point of view.
With the HD: driver, you need to somehow make sure that the previous
operation has completed before you start the next one. This must all be
done in PDP-11 code. You have the choice of either doing it polled,
using a tight loop, or having an interrupt when the device is ready.
Now, having a tight loop will most likely be better, but then your
machine will do nothing else while it is waiting for the previous
operation to complete.
So most likely you will want to use interrupts.
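The two completion strategies can be sketched abstractly. This little Python toy (the `Device` class, its latency, and the tick counts are all invented for illustration; nothing here is RT-11 or E11 code) shows the trade-off: the polled loop burns every tick on checking the device, while the interrupt style leaves those same ticks free for other work:

```python
class Device:
    """Toy device: an I/O request 'completes' after a fixed
    number of simulated clock ticks."""
    def __init__(self, latency_ticks):
        self.latency = latency_ticks
        self.busy = False
        self.ticks_left = 0
        self.on_complete = None          # "interrupt handler", if any

    def start_io(self, on_complete=None):
        self.busy = True
        self.ticks_left = self.latency
        self.on_complete = on_complete

    def tick(self):
        if self.busy:
            self.ticks_left -= 1
            if self.ticks_left == 0:
                self.busy = False
                if self.on_complete:     # "interrupt": device calls us back
                    self.on_complete()

def polled_copy(dev, n_requests):
    """Tight polling loop: every tick is spent checking the device."""
    wasted_ticks = 0
    for _ in range(n_requests):
        dev.start_io()
        while dev.busy:
            dev.tick()
            wasted_ticks += 1            # CPU did nothing else this tick
    return wasted_ticks

def interrupt_copy(dev, n_requests):
    """Interrupt style: the handler queues the next request itself,
    so the main loop's ticks are free for other work."""
    state = {"pending": n_requests, "useful_ticks": 0}
    def handler():
        state["pending"] -= 1
        if state["pending"]:
            dev.start_io(handler)
    dev.start_io(handler)
    while state["pending"]:
        dev.tick()
        state["useful_ticks"] += 1       # CPU free for other work here
    return state["useful_ticks"]
```

Both approaches take the same number of ticks to finish; the difference is what the CPU could have been doing during them.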
But no matter which, we're talking about executing PDP-11 instructions
the whole time here: either the polled loop, followed by setting up the
registers for the next I/O request, or an interrupt handler entry
doing the same setup after some register saving and so on.
As such, they take much longer than native instructions on
the host CPU. This is important.
The host CPU in the HD: case is the machine running the E11 emulator,
while in the MSCP case it is either the machine running E11, or the local
CPU on the controller card.
My point here then is: once the previous operation has completed, the
HD: case must run through some PDP-11 code before the next I/O
operation can start.
With MSCP, the host CPU needs to run through some code before the next
I/O operation can start, while the PDP-11 isn't burdened at all in this
phase.
Obviously, the MSCP case is better.
But this is only true if you queue more than one I/O request to the
MSCP controller. If you don't take advantage of this feature in MSCP,
then your MSCP controller will behave the same as the HD: controller, but
with more overhead, since there are more bits to fiddle with on the PDP-11
before a new I/O request can start using MSCP. Basically stop and go.
Not efficient at all.
So it's a question of if the device driver takes advantage of this or
not.
And this might be something that RT-11 doesn't do.
Oh, and no, disk operations under E11 aren't so fast that no
instructions will be executed before the operation completes. However,
with disk caching and tricks inside E11, a few operations might appear
to go that fast, before reality catches up with you.
Disk I/O still takes on the order of milliseconds to complete. Guess how
long one PDP-11 instruction in E11 takes?
If anything, computer speeds have advanced much more than disk speeds, so
that even with emulated computers, we now manage to do a *lot* while
waiting for disks.
And even with the trusty old real hardware, we sat around waiting for
disks for long stretches...
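To put rough numbers on that rhetorical question (these figures are my own illustrative assumptions, not measurements of E11 on any particular machine):

```python
# Illustrative figures only (assumed, not measured):
emulated_instr_time = 100e-9   # ~100 ns per emulated PDP-11 instruction
disk_access_time = 5e-3        # ~5 ms for one physical disk access

# How many emulated instructions fit inside one disk wait?
instructions_per_io = disk_access_time / emulated_instr_time
print(int(instructions_per_io))   # prints 50000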
> (b) The other aspect is the ability of MSCP to order
> and internally queue I/O requests based on the most
> efficient order for them to be performed, probably
> when there are many requests outstanding and the
> required head movement can be minimized by choosing
> the order in which to execute the requests - which
> thereby increases overall I/O throughput. If I can
> make a suggestion, I respectfully ask what the interface
> between the device driver and the controller (or host
> adapter in the case of SCSI for MSCP - note that ESDI
> controllers are also MSCP) has to do with efficient
> internal queuing of I/O requests. Perhaps my viewpoint
> based on RT-11 is distorted (or TSX-PLUS for that matter
> which uses almost the identical code as RT-11 as far as
> I am aware), but I ask the question. It seems to me
> that a simple (dumb and efficient) interface such as
> HD: is only the final step in instructing the "controller"
> to perform the disk I/O whereas the actual "intelligent"
> aspect is probably going to be in the device driver
> of the respective operating system such as RT-11, TSX-PLUS,
> RSX-11 or VMS. Obviously the "intelligent" portion can
> also be in the actual controller or host adapter, but based
> on my VERY limited understanding of MSCP implementation
> by both DEC and 3rd party MSCP controller and host adapter
> manufacturers for both the Qbus and Unibus, all of the
> "intelligence" of internal queuing of I/O requests for
> the above 4 example operating systems is performed in
> the device driver, if anywhere.
There are obviously several reasons why this needs to be in the
controller.
If we go back to the first point I discussed above, about MSCP being
more efficient if we queue several operations at once, without having to
wait for each operation to complete before queueing the next one, then
you must also do queue optimization inside the controller, since
obviously you cannot easily synchronize and reorder operations inside the
device driver once you have queued them to the controller. That would
require you to withdraw that I/O request, insert another, and then
reinsert the revoked one to get the correct ordering.
Now, if you don't want the efficiency of being able to queue new
operations immediately, but instead only issue them once the previous
one is finished, then you can easily also do queue optimizations in the
device driver.
However, all of this is also related to another aspect of MSCP that I
mentioned: bad block replacement. Since the controller does this without
the involvement of the driver, the software cannot really say
which ordering of the I/O requests is optimal.
Bad block replacement means that blocks you might think are adjacent
may physically be very far apart on the disk. In short, the software
hasn't a clue what the physical layout of the disk looks like, and
therefore it can't really do correct I/O queue optimizations.
Another aspect is once again efficiency. By letting the controller do
the queue optimizations, you unburden the normal CPU from this task,
which otherwise takes quite a few CPU cycles to do.
The controller can play with this while it's doing a transfer and is
just idling anyway, so even if it is a slower CPU, this will end up
being faster.
So from several points of view, this is both more efficient and leads
to a smaller, nimbler device driver, which doesn't have to implement
the efficiency machinery because that is moved to the controller instead.
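The reordering being discussed here is essentially the classic elevator (SCAN) discipline. A tiny sketch of the idea, in Python for readability (nothing MSCP-specific, just serving queued block numbers in sweep order rather than arrival order):

```python
def elevator_order(queued_blocks, head_pos):
    """Serve queued requests in one sweep up from the current head
    position, then sweep back down, instead of first-come-first-served.
    This reduces total head travel when requests jump around the disk."""
    up = sorted(b for b in queued_blocks if b >= head_pos)
    down = sorted((b for b in queued_blocks if b < head_pos), reverse=True)
    return up + down

def head_travel(order, head_pos):
    """Total seek distance for servicing requests in the given order."""
    travel, pos = 0, head_pos
    for b in order:
        travel += abs(b - pos)
        pos = b
    return travel
```

For a queue like [50, 3000, 120, 2900] with the head at block 100, the elevator order [120, 2900, 3000, 50] costs far less head travel than serving the requests as they arrived. Note this only works if the block numbers reflect physical layout, which, as discussed above, bad block replacement can quietly break.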
> Please confirm if my assumption is correct with regard to
> where the "intelligence" is located, i.e. in the device
> driver or the controller / host adapter. Based on the
> answer, it will then be possible to continue this
> discussion. It would be helpful to isolate where the
> decreased efficiency of using the DEC concept of MSCP
> is introduced and what specifically causes the decrease
> in efficiency. For example, on my Pentium III, I have
> noted that when I copy large files of 1 GB or larger,
> it is almost always useful to do no other disk I/O
> during the minute it takes for the copy to complete
> unless the additional disk I/O for another job is
> trivial in comparison and I can usefully overlap my
> time looking at a different screen of information.
> Whenever possible, I also arrange to have different
> disk files which will be copied back and forth on
> different physical disk drives if the files are larger
> than about 32 MB since the time to copy any file (or
> read a smaller file) is so short in any case. While
> I realize that on a large VMS system with hundreds of
> users there will be constant disk I/O, I still suggest
> that the efficiency of the device driver to controller
> interface may play a significant role in overall I/O
> throughput rates.
Well, as to your thoughts above, I think I've covered that now.
As for explanations of why you're observing faster operations with the HD:
driver in RT-11, my first suspicion would be that the program doesn't
issue multiple reads/writes to the controller, but instead issues one
and waits for it to complete before doing the next one.
If the program indeed tries to be optimal, then my next guess would be
the device driver not issuing several operations to the same controller,
leading to the same behaviour.
While I admittedly don't know enough about RT-11 to say, and obviously
don't know how your program does it, I know that in RSX the device
driver does issue the request immediately if possible, and as such you can
have several operations outstanding in parallel. If I were to write a
naïve copy program, I might not care enough to try to get the disks
working at full speed, which would lead to the same problem you're
observing. However, I know how to write such a program in RSX so that I
really would keep the controller busy at all times.
But that would involve using asynchronous I/O.
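Keeping the controller busy means having the next request queued before the current one finishes. A language-neutral sketch of that double-buffering idea, using Python threads as a stand-in for asynchronous QIOs (the queue depth of 2 and the function names are my own illustration, not RSX code):

```python
import queue
import threading

def busy_copy(blocks, write_block, depth=2):
    """Copy by overlapping reads and writes: the producer queues blocks
    ahead while a worker thread consumes them, so with depth > 1 the
    'device' (write_block) rarely sits idle waiting for the next block."""
    q = queue.Queue(maxsize=depth)

    def writer():
        while True:
            blk = q.get()
            if blk is None:          # sentinel: no more data
                break
            write_block(blk)

    t = threading.Thread(target=writer)
    t.start()
    for blk in blocks:               # "reads" run ahead of "writes"
        q.put(blk)
    q.put(None)
    t.join()
```

A synchronous copy would instead wait for each write before starting the next read, which is exactly the stop-and-go pattern described for the HD: driver above.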
Another thing: The MSCP driver in RSX is maybe the most complex driver
there is (in competition with the TT: driver). There is a reason for
this. MSCP is a complex controller.
One interesting thing to note is that RSX device drivers can use I/O
queue optimization. There is support in the kernel for drivers to do
this. And the MSCP driver does have the code for this, but it is turned off
by default, since it's mostly useless, for the reasons above. Only if
very many packets are queued will the driver's I/O queue optimization even
begin to be used, and then only over the trailing I/O requests that the
controller doesn't have room for.
Johnny
Will wrote:
> Someone wrote:
>> VAX9000 built of ECL100K, fastest of the fast. The second most common
>> use of TTL was in very high speed instrumentation and specifically frequency
>> counters and UHF PLLs.
> If the 9000 series was still using 100K, DEC must have been sleeping.
> By the 9000s development period, ECL was beyond 100K. I am not sure,
> but 10E may have been out by then. 10G maybe as well, but using that
> leads to insanity.
Using ultra-fast ECL doesn't make much sense when you've got nanoseconds
of delay to the backplane, to the next board, and back to the part that
needs the signal.
The ECL technology used in the VAX9000 was gate arrays with roughly the
same timing parameters as 100K ECL (0.5 to 1.0 ns propagation delays).
> But one thing I notice about DEC's ECL machines (9000, KL10) - for
> being ECL, they sure were ssssllloooowwwww.
The KL10 was 100-series 10K ECL technology, typically 3 to 4 ns propagation
delay. A lot easier to build large systems with than, say, 74F00 parts, but
not a whole lot faster.
Responsiveness of a computer system depends on a lot more than the
speed of the semiconductors used to build it. Plenty of modern examples
of how to make fast silicon seem slow are coming out of Redmond I
notice :-).
Tim.
>
>Subject: Re: TI 990 architecture / was Re: TI-99/4A Floppies
> From: Brent Hilpert <hilpert at cs.ubc.ca>
> Date: Wed, 03 Oct 2007 12:56:06 -0700
> To: General Discussion: On-Topic and Off-Topic Posts
> <cctalk at classiccmp.org>
>
>Chuck Guzis wrote:
>>
>> On 3 Oct 2007 at 10:13, Peter C. Wallace wrote:
>>
>> > I think I chose the 9900 based on the Osborne book, it had the shortest
>> > benchmark program...
>
>> This overly-simple benchmark with varying assumptions is one of the
>> biggest weaknesses of Volume II of the Osborne books. Perhaps a
>>
>> But even with its weaknesses, the "Introduction to Microcomputers"
>> set was a valuable resource when there was little software and no MPU
>> was yet dominant.
>
>I love that book, for its overview of lesser-known microprocs and being a
>period snapshot of the state-of-the-art. I still find it a useful
>technical reference for a lot of chips for which data is hard to come by.
Same here. I do find that one has to qualify the views of the authors
as having some bias. Some of the CPUs described as least likely to
succeed actually had excellent longevity, for example the 1802, 6100,
6502 and a few others. It's interesting that at the time of writing the
embedded-system market was still in its infancy and would grow
considerably to become the primary consumer of microprocessors. But as
you say, it's at least an overview of many micros to compare or
understand at some level.
Allison
>
>Subject: Re: MSCP controllers
> From: "Jerome H. Fine" <jhfinedp3k at compsys.to>
> Date: Fri, 05 Oct 2007 09:51:38 -0400
> To: General Discussion: On-Topic and Off-Topic Posts <cctalk at classiccmp.org>
>
> >Johnny Billquist wrote:
>
>> (Sortof starting a new thread)
>>
>> There have been a discussion about the ineffectiveness of MSCP
>> recently, especially compared to a dumb controller interface.
>>
>> To make a few comments on this; yes the MSCP controller is much more
>> intelligent. But no one has yet talked about what this means.
>>
>> The overhead for playing with the MSCP controller is much higher
>> than for a simple, stupid controller. However, there is also a big
>> speed gain in some situations.
>> Jerome Fine's observations are correct. Under a single-user system such
>> as RT-11 (especially if the software acts in a naive way) many of the
>> advantages of MSCP are lost. The fact that it can deal with large disks
>> (or disks with different sizes) can hardly be called "intelligent".
>> That's really primitive.
>>
>> Things that the MSCP protocol does handle, however, and where the HD:
>> driver will suffer and lose, is when we get into more advanced stuff.
>>
>> The MSCP controller can have many I/O requests outstanding at the same
>> time. Once one operation is completed, it can immediately start the
>> next one. You actually have a zero setup time with MSCP. So if you're
>> doing several I/O operations in sequence, a good driver, in
>> combination with a good program, will be able to get more performance
>> out of the MSCP controller than the HD: driver, where each new
>> operation can only be programmed once the previous operation is
>> completed.
>>
>> The MSCP controller can also complete several I/O requests with just
>> one interrupt. No need for one interrupt for each I/O operation that
>> completes.
>>
>> The MSCP controller can also reorder I/O operations for better
>> efficiency. If you have three requests, jumping back and forth over
>> the disk, it makes sense to actually do the two operations on one end,
>> before doing the operation at the other end. This can be implemented
>> in software by the HD: controller, but then we now have more software
>> that must run before each I/O request is issued.
>>
>> The MSCP controller handles bad block replacement without the
>> involvement of the software. It always presents a disk without bad
>> blocks. In real life, all disks have bad blocks, so somewhere this
>> always needs to be handled. Now, if you have a simulated PDP-11, the
>> disk is actually a file on that OS, so the underlying OS will handle
>> bad blocks for you, so it isn't necessary for the PDP-11 controller to
>> do this anymore, but MSCP was designed for raw disks, not emulated
>> systems. Dealing with bad blocks in the HD: driver would cost a lot.
>>
>> The MSCP controller can do I/O to several disks in parallel. In real
>> life, controllers like the one the HD: driver pretends to talk to exist
>> as well. One problem with these is that if you have several disks, you
>> can only do I/O to one disk at a time. Some of these controllers could
>> allow you to do seeks on other disks while I/O was performed on one
>> disk. However, things started getting complicated with this.
>>
>> The MSCP controller has pretty advanced error detection and handling,
>> including extensive reports to the software on problems.
>>
>> Now, those things are why it's more intelligent. And more intelligent
>> means it also takes more software to talk to it. :-)
>>
>> MSCP is really like serial SCSI (or serial ATA), only done 20 years
>> earlier.
>
>Jerome Fine replies:
>
>Johnny, I believe that your comments are very clear and
>they address many of the aspects which concern the way
>in which MSCP handles read / write requests in both small
>systems (single user systems like RT-11 and even TSX-PLUS
>since the device driver still handles one request at a time)
>and large systems (such as RSX-11 and especially VMS).
>
>(NOTE that all of the following comments are with respect
>to running programs on a 750 MHz Pentium III with 768 MB
>of RAM using W98SE as the operating system, ATA 100 disk
>drives of 160 GB and Ersatz-11 as the application program
>running a mapped RT-11 monitor, RT11XM. While I have very
>good reason to believe that the same relative results will
>be obtained on a Pentium 4 under WXP, again using Ersatz-11
>running RT-11, I have done almost no testing at this time.
>OBVIOUSLY, comparison with real DEC hardware of a PDP-11
>and a VAX can only be done on a relative basis since HD:
>exists ONLY under Ersatz-11. In addition, since the speed
>of disk I/O on the Pentium III (even more so on a Pentium 4)
>is so much faster (more than 100 times) than the transfer
>rate on a SCSI Qbus or Unibus, the comparison could be very
>misleading since CPU time vs I/O transfer time might become
>much more significant. For just one example, when the BINCOM
>program that runs on a real DEC PDP-11/73 is used to compare
>2 RT-11 partitions of 32 MB on 2 different ESDI Hitachi hard
>drives (under MSCP emulation with an RQD11-EC controller),
>it takes about the same time (about 240 seconds) to copy
>an RT-11 partition and to compare those same 2 partitions.
>Under Ersatz-11, the copy time is about 2 1/4 seconds and the
>BINCOM time is about 6 1/2 seconds using MSCP device drivers.
>When the HD: device driver is used under Ersatz-11, the times
>are about 1 second for the copy and about 6 seconds for the
>BINCOM - I have not bothered to figure out why the reduction
>is only 1/2 second instead of 1 1/4 seconds.)
>
>However, I believe that my comments on the efficiency of
>using the MSCP device driver under RT-11 vs the efficiency
>of using the HD: device driver probably need to be analysed
>much more closely. The other aspect of the analysis that
>is missing is the efficiency with which Ersatz-11 implements
>the MSCP emulation as opposed to the HD: "emulation". It
>is unlikely, but possible, that Ersatz-11 has much higher
>overhead for MSCP since the interface is so much more
>"intelligent", whereas the HD: interface only needs to transfer
>the data to the user buffer based on the IOPAGE register
>values.
>
>A bit more information may help.
>
>(a) The HD: device driver can be used BOTH with and without
>interrupts being active after the I/O request is issued.
>It makes no difference under W98SE since the I/O request
>is ALWAYS complete before even one PDP-11 instruction is
>executed. This result also applies to the MSCP device
>driver which I could modify to see if it might make a
>difference in efficiency. However, when I attempt to
>compare the copy of a 32 MB RT-11 partition with HD:,
>the time difference between using interrupts and not
>using interrupts is so negligible that it is almost
>impossible to measure the total time difference to copy
>the 32 MB RT-11 partition using the available PDP-11
>clock which measures in 1/60 of a second. Since there
>are 60 ticks in a second, the accuracy is better than
>2% over 1 second which seems adequate to determine on
>an overall basis if using interrupts vs no interrupts
>makes a significant difference. Obviously if there is
>no significant time difference at the 2% level (of one
>time tick of 1/60 of a second), then avoiding the extra
>RT-11 code to handle the interrupt does not play a
>major role in the increased efficiency of HD: vs MSCP.
>I conclude that would be the same for MSCP as well.
>
>(b) The other aspect is the ability of MSCP to order
>and internally queue I/O requests based on the most
>efficient order for them to be performed, probably
>when there are many requests outstanding and the
>required head movement can be minimized by choosing
>the order in which to execute the requests - which
>thereby increases overall I/O throughput. If I can
>make a suggestion, I respectfully ask what the interface
>between the device driver and the controller (or host
>adapter in the case of SCSI for MSCP - note that ESDI
>controllers are also MSCP) has to do with efficient
>internal queuing of I/O requests. Perhaps my viewpoint
>based on RT-11 is distorted (or TSX-PLUS for that matter
>which uses almost the identical code as RT-11 as far as
>I am aware), but I ask the question. It seems to me
>that a simple (dumb and efficient) interface such as
>HD: is only the final step in instructing the "controller"
>to perform the disk I/O whereas the actual "intelligent"
>aspect is probably going to be in the device driver
>of the respective operating system such as RT-11, TSX-PLUS,
>RSX-11 or VMS. Obviously the "intelligent" portion can
>also be in the actual controller or host adapter, but based
>on my VERY limited understanding of MSCP implementation
>by both DEC and 3rd party MSCP controller and host adapter
>manufacturers for both the Qbus and Unibus, all of the
>"intelligence" of internal queuing of I/O requests for
>the above 4 example operating systems is performed in
>the device driver, if anywhere.
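[For illustration, the head-movement ordering described in (b) above can be sketched as a classic "elevator" (SCAN) sweep. The names and cylinder numbers below are invented; this is not any actual MSCP firmware or driver code:]

```python
# Serve pending cylinder requests in one sweep up from the current
# head position, then back down, instead of in arrival order. This
# reduces total head travel, which is the whole point of letting the
# queuing layer (driver or controller) reorder outstanding requests.

def elevator_order(requests, head):
    """Order cylinder requests: sweep up from the head, then down."""
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down

def head_travel(order, head):
    """Total cylinders crossed serving the requests in sequence."""
    travel, pos = 0, head
    for cyl in order:
        travel += abs(cyl - pos)
        pos = cyl
    return travel

pending = [98, 183, 37, 122, 14, 124, 65, 67]   # arrival order
start = 53
swept = elevator_order(pending, start)
# The sweep never travels farther than first-come-first-served:
assert head_travel(swept, start) <= head_travel(pending, start)
```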
All of the MSCP devices have some form of CPU: the RQDXn is a PDP-11
(specifically T-11), others use a Z80 or 8088/80188. Regardless of
the CPU engine used, there is considerable "intelligence"; for example
the RQDX3 carries the T-11, 8KW of RAM and 16KW of EPROM, as well as
hardware support for disk (floppy and hard) I/O and bus-level DMA.
So on one level your expectation is: if you want an intelligent
response from the HD:, you need to have an intelligent conversation.
RT-11, however, is rather dull in that its conversation is limited
to "do this", and the device does the simple thing and says "here".
RT deals at the logical block level, and if there is more than one
block it is a sequential read or write, very plain. RT does not even
have the concept of nonsequential file allocation (scatter/gather).
More sophisticated operating systems do things like "get me this",
"write out that" and "flush this buffer" for multiple users and
processes, so the task list is both dynamic and multiple in its
activity. TSX is still RT-11 under the skin and only does simple
operations as a result.
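[A toy contrast, with invented names, between the RT-11-style contiguous file I/O just described and a scatter/gather request list. RT-11 files occupy one contiguous run of logical blocks, so a read is just (start, count); a scatter/gather-capable system instead hands the driver a list of extents:]

```python
# RT-11 style: one contiguous run of logical blocks per file.
def contiguous_read(disk, start_block, count, block_size=512):
    """Read `count` sequential blocks starting at `start_block`."""
    off = start_block * block_size
    return disk[off:off + count * block_size]

# Scatter/gather style: a list of (start_block, count) extents.
def scatter_gather_read(disk, extents, block_size=512):
    """Read each extent in turn and join the results."""
    return b"".join(contiguous_read(disk, s, c, block_size)
                    for s, c in extents)

disk = bytes(range(256)) * 16          # 4096-byte toy "disk", 8 blocks
one_run = contiguous_read(disk, 2, 2)  # blocks 2-3 as a single run
pieces = scatter_gather_read(disk, [(2, 1), (3, 1)])
assert one_run == pieces               # same data, different request shape
```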
>Please confirm if my assumption is correct with regard to
>where the "intelligence" is located, i.e. in the device
>driver or the controller / host adapter. Based on the
>answer, it will then be possible to continue this
>discussion. It would be helpful to isolate where the
>decreased efficiency of using the DEC concept of MSCP
>is introduced and what specifically causes the decrease
>in efficiency. For example, on my Pentium III, I have
>noted that when I copy large files of 1 GB or larger,
>it is almost always useful to do no other disk I/O
>during the minute it takes for the copy to complete
>unless the additional disk I/O for another job is
>trivial in comparison and I can usefully overlap my
>time looking at a different screen of information.
>Whenever possible, I also arrange to have different
>disk files which will be copied back and forth on
>different physical disk drives if the files are larger
>than about 32 MB since the time to copy any file (or
>read a smaller file) is so short in any case. While
>I realize that on a large VMS system with hundreds of
>users there will be constant disk I/O, I still suggest
>that the efficiency of the device driver to controller
>interface may play a significant role in overall I/O
>throughput rates.
The base intelligence must be on the controller, to understand
and act on I/O requests. On the other hand, there is also a
requirement that to use that performance you need a
high-performing driver. RT-11 does not have one; the VMS DU
driver is one. The difference is that the driver for RT-11 is
essentially a single task stream: do this, check, do that. The
VMS driver will form up a list of tasks required of the storage
system and say "here's a list, go do it, let me know when it's
done and what the status for each was".
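[A rough sketch of the two driver styles just described, with invented names and a stand-in controller; these are not real RT-11 or VMS interfaces:]

```python
def mock_controller(request):
    """Stand-in controller: 'perform' one request, return a status."""
    return {"request": request, "status": "OK"}

def single_stream(requests):
    """RT-11 style: do this, wait, check, do that."""
    results = []
    for req in requests:
        results.append(mock_controller(req))   # one at a time, in order
    return results

def batched(requests):
    """VMS style: hand over a whole list, collect per-request status.
    A real controller is free to reorder the list internally."""
    return [mock_controller(req) for req in requests]

jobs = ["read blk 10", "write blk 44", "read blk 7"]
assert [r["status"] for r in single_stream(jobs)] == ["OK", "OK", "OK"]
assert len(batched(jobs)) == len(jobs)
```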
I do believe what you are really pointing at is not MSCP but
the difference between emulation, simulation and real hardware
behaviour. I suggest this: emulation/simulation fidelity
has multiple dimensions, one being the behaviour of the program's
code and another being operational speed. It's my experience
with E-11 that program behaviour is faithful but speed far
exceeds the original device's capabilities, up to the limit of
the host PC. The RQDX is old and depends on the T-11, and that
CPU is only clocked at 7.5 MHz, making it rather slow compared
to its hosts. Whereas PC emulation (without throttles) may
have a huge performance advantage from a far faster CPU. So it
makes me ask how your E-11 simulation would look if you could
tell it that the MSCP device and connected disks have a more
limited speed. It also brings to mind: is the MSCP emulated,
or simply stubbed with PC drivers and devices behind it? I
ask that as MSCP is (or was) a copyrighted and encumbered
protocol.
>I await your reply and wish you a good weekend.
I do hope there is a more sophisticated reply as well.
I have the advantage of using the RQDX in both Qbus
uVAX and Qbus PDP-11 systems, so I've seen how the
VAX (VMS) uses it, how RT-11 and RSX-11 use it, and
the performance over less complex disks like the RL02.
Allison
> From: "James Rice" <james.rice at gmail.com>
> Subject: Re: lead-free solder
> To: "General Discussion: On-Topic and Off-Topic Posts"
>
We have not changed tip or temperature settings in production.
Mike.
> eventually pickup and pass lead based alloys. One question that I have,
> do
> you need a special type of iron for lead free soldering or just a
> different
> tip? I noticed both Hakko and Aoyue show a different model that is rated
> for lead free duty. They show the same temperature range but are fitted
> with a larger heating element (70w vs 50w).
>
> --
> www.blackcube.org - The Texas State Home for Wayward and Orphaned
> Computers
>