Subquestion number one for me would be the reasoning why
those probes are so important. How bad are readings taken
with just direct wired connections?
Very! Using a normal piece of cable will load the signal under test,
mostly due to the capacitance of the cable. This means the signal is no
longer what it was before you connected the cable, so you're not seeing
the real picture. Worse than that, the mangled signal may well make the
rest of the device-under-test malfunction.
A simple DC experiment shows what I mean. Connect two 10M resistors in
series across a 10V supply. I think we agree that there's 5V dropped
across each resistor. Now measure the voltage across one of them with
your normal DVM. Unless you have a really exotic DVM, I'll wager you will
not read 5V. You'll read 3.3V. The reason is that the DVM has a 10M input
impedance; when you connect that in parallel with one of the 10M
resistors in your circuit, you effectively get a 10M in series with a 5M.
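A minimal sketch of that arithmetic, using only the values from the
experiment above (two 10M resistors across 10V, a DVM with a 10M input
impedance across the lower one):

```python
def parallel(r1, r2):
    """Resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

R = 10e6       # each divider resistor, ohms
R_DVM = 10e6   # DVM input impedance, ohms
V = 10.0       # supply voltage, volts

# Unloaded: a symmetrical divider, so 5V across each resistor.
v_unloaded = V * R / (R + R)

# Loaded: the DVM's 10M in parallel with one 10M gives 5M, so the
# circuit becomes a 10M in series with a 5M across the supply.
r_bottom = parallel(R, R_DVM)
v_loaded = V * r_bottom / (R + r_bottom)

print(v_unloaded)           # 5.0
print(round(v_loaded, 1))   # 3.3
```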
Now for the 'scope it's even worse. You're dealing with AC signals, so
the capacitance of the cable matters. Worse, the impedance presented to
the circuit depends on the frequency. The edges of square waves get
rounded off, and if you're not careful you'll get LC 'ringing'.
The idea of the probe is to make a potential divider from the 'scope input
and components in the probe, typically a /10 ratio, and to 'compensate'
the capacitance of the cable/'scope input so that the potential divider
has the same division ratio at all frequencies. It turns out that if you
shunt (parallel) the resistors of a potential divider with capacitors,
the potential divider is frequency-independent if the time constants of
the 2 sections are the same. In the case of the 'scope you can't do
anything about the capacitance of the 'scope and cable, so you connect a
compensating capacitor in parallel with the resistor in the probe. And
since that resistor has 9 times the resistance of the 'scope input, the
capacitor you use has 1/9 the capacitance (to keep the time constants
the same). The circuit sees that capacitor in series with the capacitance
of the 'scope input/cable, in other words a lot less than the latter on
its own.
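Here's a sketch of that frequency-independence claim. The component
values are assumptions for illustration, not any particular probe's: a
9M probe resistor into a 1M 'scope input, with 20pF standing in for the
cable plus 'scope input capacitance. With the time constants matched
(R1*C1 == R2*C2), the division ratio comes out as 1/10 at any frequency:

```python
import cmath

def z_par(r, c, f):
    """Impedance of resistor r in parallel with capacitor c at frequency f."""
    zc = 1 / (2j * cmath.pi * f * c)
    return r * zc / (r + zc)

R1, R2 = 9e6, 1e6    # probe resistor, 'scope input resistance
C2 = 20e-12          # assumed cable + 'scope input capacitance
C1 = C2 * R2 / R1    # compensating cap chosen so R1*C1 == R2*C2, i.e. C2/9

for f in (1e3, 1e6, 50e6):
    ratio = abs(z_par(R2, C2, f) / (z_par(R1, C1, f) + z_par(R2, C2, f)))
    print(f, round(ratio, 6))   # 0.1 at every frequency
```

Pick C1 wrong (the trimmer on a real probe) and the ratio drifts with
frequency, which is exactly the rounded or peaked square-wave edge you
see when adjusting probe compensation.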
For very high frequency work, you put a buffer amplifier actually inside
the probe. This is designed to drive the capacitance of the cable and
scope, but to have a low input capacitance itself (and since it is in the
probe, there's no capacitance of the cable on the input side to worry
about). It's relatively easy to put a transistorised amplifier in the
probe, but I even have a Tekky 'cathode follower probe' that has a
sub-miniature triode valve inside (!).
Should a 'scope rated at 2X the computer clock speed keep the
readings useful?
Make that 3X the clock speed IMHO. You really want to see at least the
3rd harmonic of any signal.
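To see why the 3rd harmonic matters, a quick sketch using the Fourier
series of an ideal square wave (only odd harmonics, amplitudes falling
off as 1/n). The 25MHz clock here is just an assumed example value:

```python
import math

def square_harmonic(n):
    """Relative amplitude of the nth harmonic of an ideal square wave:
    4/(pi*n) for odd n, zero for even n."""
    return 0.0 if n % 2 == 0 else 4 / (math.pi * n)

clock = 25e6   # assumed clock frequency, purely for illustration

# The 3rd harmonic of a 25MHz clock sits at 75MHz and still carries
# a third of the fundamental's amplitude -- lose it and the square
# wave starts looking like a sine wave.
for n in (1, 3, 5):
    print(n, clock * n / 1e6, round(square_harmonic(n), 3))
```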
Watch out for bandwidth specifications. The correct way to specify them
(IMHO) is the point at which the gain of the amplifier is 3dB down from
its maximum. Reputable companies do this.
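For what it's worth, '3dB down' on a voltage gain means the gain has
fallen to 10**(-3/20) of its maximum, roughly 71% amplitude, which is
the half-power point. A one-liner to check:

```python
# -3dB on a voltage gain: about 0.708 of maximum amplitude, and
# 0.708 squared is about 0.5, i.e. half the power.
gain = 10 ** (-3 / 20)
print(round(gain, 3))        # 0.708
print(round(gain ** 2, 2))   # 0.5
```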
But the same guys who make 300W PC speakers powered by a 12W wall-wart
seem to have got their hands on 'scope specifications too. I've come to
the conclusion that for some companies, a '50MHz scope' is one where if
you crank the Y gain up to maximum, apply the maximum allowed voltage to
the input at 50MHz, then you might see a trace that bears some sort of
resemblance to the input signal...
-tony