Mr Ian Primus wrote:
What is a suitable replacement for CR2? It's hard to
read, seeing as it is missing a chunk. CR1
looks to be the same - but I read it as a 1N4smudge3,
which is rather hard to cross reference. I found the
schematic in the manual, but not a parts list. I'll
keep looking, but I figured I would ask the experts
here.
Unless this was some kind of coincidence, I think I
need to double check my 20ma power supply.
Thanks!
-Ian
Billy: I think it is a 1N4003. I remember losing this diode sometimes
when debugging new interfaces. I always replaced it with a 1N4004. The
resistor often burned the PCB beneath it; looks like crap. So I would
replace it with a higher-wattage wire-wound resistor and suspend it away
from the PCB by .5 cm or so.
Billy
>
>Subject: 1.2M HD / 300kbps / single-density
> From: "Dave Dunfield" <dave06a at dunfield.com>
> Date: Sun, 18 Mar 2007 20:36:40 -0500
> To: cctalk at classiccmp.org
>
>Today I stumbled across something that may explain why some
>people I've corresponded with have had such a hard time
>getting single-density to work with disk imaging.
>
>For my own setup, I have a couple of trusty Aopen AP5T
>mainboards which do single-density nicely, installed in
>cases with an internal 3.5" HD (1.44M) drive as A:, and
>an external cable allowing me to connect any of the
>following as drive B:
> - 5.25" DD 40 track (Teac 55-G)
Dave,
Don't you want to be using a 55-B, which is a real 40 track drive?
> - 5.25" DD 80 track (Teac 55-F)
> - 5.25" HD 80 track (Teac 55-G)
> - 8" DS (Qumetrack 242)
>
>I'm also quite a stickler for using the proper drive when
>I read/write images - so if it's 5.25" 40 track, the 55-G
>gets connected etc. This has worked very well for me.
>
>Today I was setting up another system to do 5.25" disks
>only, and I wanted it to be self contained within one case.
>To minimize the drives, I decided to modify a Panasonic
>JU-475 with a switch on the front bezel to force it to 300
>rpm to serve as both the DD/80 and an HD/80 drive.
>
>To verify that the drive was good, I hooked it up and tried
>to read/write some disks - to my surprise I could not do
>single density (at 300kbps). Tried a couple other HD drives
>with the same result. On a hunch, I modded it to 300rpm, and
>sure enough, I can read/write single-density fine at
>300rpm / 250kbps.
>
>(Yeah, I know the SD rate is really only 1/2 - by 250/300kbps,
>I am referring to the MFM settings for the AT controller data
>rate select register).
>
>To rule out some odd ImageDisk quirk, I tried several versions
>of TeleDisk - with the drive at 360rpm and configured as a 1.2M
>HD drive, none of them could read a single-density disk either.
>
>So at least for my AP5Ts, it appears that the internal controller
>can do single or double density at 250kbps, but only double-
>density at the "at compatible" 360rpm rate of 300kbps. My guess
>is that the data separator does not work at that rate (works
>fine at 500kbps however).
>
>I'm curious to know if this is a characteristic specific to the
>machines I am using, or if it is common among PCs that do single-
>density to not work at 300kbps. Has anyone here read and/or
>created single density 5.25" disk on a 1.2M HD drive spinning
>at 360rpm?
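For anyone following along, the "data rate select register" Dave mentions is the Configuration Control Register on a standard AT floppy controller; a quick sketch of the encoding as it is commonly documented (the exact port and bit values here are from the usual AT FDC references, not from Dave's post, so treat them as an assumption for your particular board):

```python
# AT floppy controller data-rate select: low two bits of the CCR
# (I/O port 0x3F7 on a standard AT). These are the nominal MFM
# (double-density) rates; the FM (single-density) rate on the same
# setting is half the MFM figure, which is the "really only 1/2"
# Dave refers to.
CCR_RATES_KBPS = {
    0b00: 500,   # HD 5.25" at 360 rpm, 3.5" 1.44M
    0b01: 300,   # DD media in an HD 5.25" drive spinning 360 rpm
    0b10: 250,   # DD drive (or modded HD drive) at 300 rpm
    0b11: 1000,  # ED media (not relevant here)
}

def fm_rate_kbps(ccr_bits):
    """Effective single-density (FM) rate for a given CCR setting."""
    return CCR_RATES_KBPS[ccr_bits] / 2

# Dave's failing case: SD at the "300kbps" setting is really 150kbps FM;
# his working case at the 250kbps setting is 125kbps FM.
```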
There is at least one reason not to do this. The bit shift is greater
due to write speed. Also, some integrated versions of the 765 (superchips)
select precomp based on clock speed, and that may be less than optimum.
Finally, the drive used may shift write current based on media speed.
This also ignores the 96 tpi and 48 tpi issues, which are worse when writing
an already-formatted disk. Any or all of those can undermine reliability.
Allison
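Allison's bit-shift point shows up in simple arithmetic: at 300kbps the bit cell is 3.33 microseconds versus 4 at 250kbps, so the same amount of peak shift eats a larger fraction of the detection window. A back-of-envelope sketch (the 0.5 microsecond shift figure is purely illustrative, not from anyone's post):

```python
# Bit-cell width for a given data rate. A fixed amount of
# write-induced peak shift consumes a larger fraction of the
# narrower cell, squeezing the data separator's margin.
def cell_us(rate_kbps):
    return 1000.0 / rate_kbps  # microseconds per bit cell

PEAK_SHIFT_US = 0.5  # illustrative assumption, not a measured value

for rate in (250, 300):
    frac = PEAK_SHIFT_US / cell_us(rate)
    print(f"{rate} kbps: cell {cell_us(rate):.2f} us, "
          f"shift is {frac * 100:.1f}% of the cell")
```

The same reasoning applies again when the FM rate is halved, which is why marginal separators give up first on single density.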
Sorry if this comes through twice. My mailer is acting up.
I know better than to continue an OT thread, but there are
a few points worth considering here.
> Cool down, there. I was merely observing that connectivity (being
> the highway) has trailed Moore's Laws (at least in the US) for quite
> some time.
True in the way Moore's Law is usually thought of. But when
he stated his "law" Moore wasn't talking about speed; he
was talking about the number of transistors. He foresaw
the rate at which transistor sizes could be shrunk. Now,
for a long time, making the transistors smaller made them
faster and the overall CPU was also faster. That's less the
case than it used to be, in part because the on-chip interconnects
don't benefit as much. On top of that, we just kept making
the discrepancy between CPU and memory speeds greater
and greater, putting more and more demand on the cache.
So rather than using the additional transistors to create an
implementation that's even faster (e.g. increased pipelining),
we're now using them for additional cache and for multiple
cores. And that's where we start to hit Amdahl's Law. As the
parallel guys have known for years, not all tasks parallelize
nicely. There's probably not much to gain from implementing
a word processor with multiple concurrent threads. So
what does all this mean? Even though the physicists are
clever enough to keep the "end of Moore's Law" always
10 years in the future (which I've been hearing for at least
20 years), Moore's Law doesn't necessarily buy us the
performance increase we're used to. And that's before
we even look at software bloat.
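For reference, Amdahl's Law puts a hard ceiling on what those extra cores buy: speedup = 1 / ((1 - p) + p/n) for a task with parallel fraction p on n cores. A quick illustration (the 90% figure is just an example, not a claim about any particular workload):

```python
def amdahl_speedup(p, n):
    """Amdahl's Law: overall speedup for parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a 90%-parallel task tops out below 10x no matter how many
# cores you throw at it; the serial 10% dominates.
for n in (2, 4, 16, 1_000_000):
    print(f"{n:>9} cores -> {amdahl_speedup(0.9, n):.2f}x")
```

Which is exactly why the word-processor example above gains so little: if p is small, n barely matters.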
> Given that the bulk of the use for home PCs is net-
> based; there will come a time that there is simply not enough
> bandwidth *commonly* available to feed a petaflop machine.
>
> Regardless of the needs of home hobbyists, who don't drive production
> of consumer PCs, most people I know of still use the box primarily
> for email and web browsing. I don't consider amateur astronomy to
> be in the "killer app" class.
There is one class of user that's large enough and willing
enough to fork out dollars that there's still a point in pushing
performance as much as we can. That's the gamers, the
guys that keep moving toward a day when their game is
so immersive and so realistic that they can't always tell
whether they're in the game or the real world. And it turns
out that some of the things we need to do in those games
do parallelize nicely. So I expect we will continue to see
growing overkill of system performance
for uses like e-mail and browsing. But then the people who
took the web and twisted it from being a solid information
distribution world to an entertainment medium will find
ways to use those cycles.
Now I shall slink off and punish myself with many lashes
of a wet noodle for contributing OT material. My only excuse
is that I mentioned hearing about the end of Moore's law
for 20 years :-)
BLS
> From: "Hex Star" <hexstar at gmail.com>
>
> > On 3/18/07, der Mouse <mouse at rodents.montreal.qc.ca> wrote:
> >
> > If you can't see the difference between helping people and feeding
> > leeches...*boggle*
>
> I agree with Jim. Why is someone uploading big files for people to
> download when at the same time they don't expect to spend lots of monthly
> bandwidth? That just doesn't make sense...any file archive, and just file
> hosting in general, is going to involve investing a lot of monthly bandwidth.
This "I can do as I want" discussion boils down to respect for others. It is
utterly arrogant (and stupid) to disregard the wishes of someone providing
something for free. Most of us give away our time to help others, but expecting
us to allow others to also just take our money is ... well ... der Mouse put it
best with *boggle*!!!
-----Original Message-----
From: cctech-bounces at classiccmp.org
[mailto:cctech-bounces at classiccmp.org] On Behalf Of James Rice
Sent: 19 March 2007 04:25
To: General Discussion: On-Topic and Off-Topic Posts
Subject: Re: ftp archives disappearing?
On 3/18/07, Teo Zenios <teoz at neo.rr.com> wrote:
>
> If you want a mirror it would be nice just to ask the owner for a DVD
> set of the data which takes little time and doesn't hog bandwidth from
> others who just want a few files.
>
> I ran a private FTP during the late 90s when cable first came around
> in my area. I never cared if people connected and downloaded what was
> on the site as long as they didn't use software that logged in
> multiple times to suck down all the bandwidth from the other users,
> and didn't hammer the site every second to get on when the slots were
> full. Everybody is different and you need to respect their rules.
>
>
My employer lets me host my site off their bandwidth. I do throttle
downloads at 1.8kbps and limit connections to two from any IP address.
--
www.blackcube.org - The Texas State Home for Wayward and Orphaned
Computers