On Wed, 21 Apr 2004 00:03:36 +0100 (BST)
ard(a)p850ug1.demon.co.uk (Tony Duell) wrote:
Moreover, 3.1M pixels in the camera aren't 3.1M pixels in the final
image. It depends on how they're used, but in the camera you typically
need three pixels, one each for R, G, and B, to get one RGB pixel in
the image. Some techniques use even more (the Bayer pattern uses 4).
Argh! You mean they fiddle the figures? I'd assumed that a 'pixel'
was an RGB triad, not a third of one. So you mean you may only get 1
million points in the image from a 3.1M pixel camera?
Yes. E.g. with Bayer you have four sub-pixels per color pixel:
R G
G B
So you get 640 x 480 = ca. 0.3 M "true" color pixels with a 1280 x 960
"megapixel" sensor. The camera's image-processing firmware then
interpolates this up to 1280 x 960 RGB pixels.
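For concreteness, here is a minimal Python/numpy sketch of that
interpolation step, assuming an RGGB mosaic and plain bilinear
averaging. The function name and layout are illustrative only; actual
camera firmware uses proprietary, far more sophisticated demosaicing.

import numpy as np

def demosaic_bilinear(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic: (h, w) -> (h, w, 3).

    A toy version of the interpolation described above; real camera
    firmware uses considerably smarter algorithms.
    """
    h, w = raw.shape
    rgb = np.empty((h, w, 3), dtype=np.float64)

    # Boolean masks marking which sensor sites carry which color
    # filter in the repeating 2x2 RGGB cell:  R G
    #                                         G B
    r_mask = np.zeros((h, w), dtype=bool)
    g_mask = np.zeros((h, w), dtype=bool)
    b_mask = np.zeros((h, w), dtype=bool)
    r_mask[0::2, 0::2] = True
    g_mask[0::2, 1::2] = True
    g_mask[1::2, 0::2] = True
    b_mask[1::2, 1::2] = True

    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        samples = np.where(mask, raw, 0.0)
        weights = mask.astype(np.float64)
        # 3x3 box sums of the known samples and of their count,
        # via zero padding (border pixels just see fewer neighbors).
        ps = np.pad(samples, 1)
        pw = np.pad(weights, 1)
        num = sum(ps[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3))
        den = sum(pw[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3))
        # Keep the measured value where the filter matches; elsewhere
        # average the known samples in the 3x3 neighborhood.
        rgb[:, :, c] = np.where(mask, raw, num / np.maximum(den, 1.0))
    return rgb

# A 1280 x 960 sensor holds only 640 x 480 RGGB cells, yet the
# firmware delivers a full 1280 x 960 RGB image:
raw = np.random.rand(960, 1280)      # stand-in for real sensor data
img = demosaic_bilinear(raw)
print(img.shape)                     # (960, 1280, 3)

Note that two thirds of each output channel is guessed from neighbors,
which is exactly why the "3.1M pixel" marketing figure overstates the
true color resolution.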
--
<rant>
And this interpolation is very apparent when you try to take pictures
of things like PC cards with fine-pitch ASICs. Anything with high
spatial frequency and multiple colors is royally screwed. Digital
camera manufacturers (and scanner manufacturers) have gotten away with
this deception for quite a while.
There is no reason that cameras or scanners should be rated any
differently than LCD monitors (where 1024x768 really is 1024x768 RGB
triads).
</rant>