On 11 May 2010 at 21:52, Tony Duell wrote:
> Since they are in series, they are both passing the same current.
> Therefore the power dissipated is proportional to the resistance. A
> 40W bulb will have a higher resistance than a 100W bulb, so, making
> reasonable assumptions about the bulbs, the 40W one will be brighter.
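To put rough numbers on that (my figures, assuming 120 V mains and
taking each bulb's resistance at its rated operating point, R = V^2/P):

    # Series pair on 120 V mains; resistances taken at rated power,
    # ignoring the temperature dependence discussed below.
    V = 120.0
    R40 = V**2 / 40.0                # ~360 ohms
    R100 = V**2 / 100.0              # ~144 ohms
    I = V / (R40 + R100)             # ~0.238 A through both bulbs
    print(f"40W bulb:  {I**2 * R40:.1f} W")     # ~20.4 W
    print(f"100W bulb: {I**2 * R100:.1f} W")    # ~8.2 W

Even on this crude reckoning the 40W bulb dissipates about two and a
half times the power of the 100W one, before the temperature effect
widens the gap further.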
Yes, I was sort of hinting at a possible solution to the original
problem--in fact, the resistance difference becomes much larger as
the 40W lamp begins to glow. If you put the series-connected lamps
on a variac and slowly inch the voltage up, it's surprising how the
positive temperature coefficient of resistance starts kicking in.
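Here's a toy model of that variac run, if you want to play with it.
Every constant is a rough assumption on my part (tungsten resistance
scaling as T^1.2, radiative loss as T^4, a 2700 K filament at rated
power), so it's a cartoon of the effect, not a measurement:

    T_AMB, T_HOT, V_RATED = 300.0, 2700.0, 120.0   # kelvin, kelvin, volts

    def bulb(p_rated):
        """Return (cold resistance, radiation constant) for one bulb."""
        r_hot = V_RATED**2 / p_rated              # resistance at rated power
        r_cold = r_hot / (T_HOT / T_AMB)**1.2     # assumed R ~ T^1.2 scaling
        k = p_rated / (T_HOT**4 - T_AMB**4)       # assumed P ~ T^4 radiation
        return r_cold, k

    def series_powers(v_supply, bulbs, iters=200, damp=0.5):
        """Fixed-point solve for the filament temperatures in series."""
        temps = [1000.0] * len(bulbs)
        for _ in range(iters):
            rs = [rc * (t / T_AMB)**1.2 for (rc, _), t in zip(bulbs, temps)]
            i = v_supply / sum(rs)
            new = [((i**2 * r) / k + T_AMB**4)**0.25
                   for (_, k), r in zip(bulbs, rs)]
            temps = [t + damp * (n - t) for t, n in zip(temps, new)]
        rs = [rc * (t / T_AMB)**1.2 for (rc, _), t in zip(bulbs, temps)]
        i = v_supply / sum(rs)
        return [i**2 * r for r in rs]

    pair = [bulb(40.0), bulb(100.0)]
    for v in (30, 60, 90, 120):
        p40, p100 = series_powers(v, pair)
        print(f"{v:3d} V: 40W at {p40:5.1f} W, 100W at {p100:5.1f} W")

As the voltage comes up, the 40W filament heats first, its resistance
climbs, and it grabs an ever larger share of the supply voltage,
which is exactly the runaway described above.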
But most people will, without thinking, say "Oh, a 100W bulb is
brighter than a 40W, so the answer is obvious." And wrong.
But then we learn more when something doesn't work the way it's
"supposed to" than when it does, don't we? That's one issue I have
with a lot of secondary-school laboratory courses. You do the
experiment, you can pretty much guess from the course material what's
going to happen, and it does. Time for lunch.
Back when I was taking a course in numerical methods, I had a teacher
who'd assign seemingly easy programs. You had a week to code it up;
just about any text would have the method documented. He'd post the
data set the day before the assignment was due.
Of course, after about the first two assignments, you learned that
his data were going to be pathological and destined to break any "out
of the book" method. In a previous life, he'd worked for NASA and I
suspect, learned things the hard way.
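A classic instance of the sort of trap such data can spring (my
illustration, not one of his actual assignments): the textbook
quadratic formula, fed coefficients with b^2 >> 4ac, loses the small
root to catastrophic cancellation.

    import math

    def quadratic_naive(a, b, c):
        """Textbook formula; fine on paper, fragile in floating point."""
        d = math.sqrt(b * b - 4 * a * c)
        return (-b + d) / (2 * a), (-b - d) / (2 * a)

    def quadratic_stable(a, b, c):
        """Avoid cancellation: compute the big root, derive the other."""
        d = math.sqrt(b * b - 4 * a * c)
        x1 = (-b - math.copysign(d, b)) / (2 * a)
        return x1, c / (a * x1)

    a, b, c = 1.0, 1e8, 1.0           # roots are ~ -1e8 and ~ -1e-8
    print(quadratic_naive(a, b, c))   # small root off by tens of percent
    print(quadratic_stable(a, b, c))  # small root comes out ~ -1e-8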
That I can remember the course (mumble) years later is a testament
to its effectiveness.
--Chuck