On Apr 19, 23:36, Tony Duell wrote:
I didn't say that digital cameras have no uses, just that I have no use
for one. I take almost entirely static subjects, and I want high
resolution (to be honest 3.1M pixels is worse than 35mm (I estimate
that as being about 12M pixels), let alone medium or large format).
Hi-res 35mm is probably better than that. Kodachrome certainly is; it
can resolve a couple of thousand lines per *millimeter* under ideal
conditions. Even fairly conventional fine-grain black-and-white films
have been able to manage 1000 lines per mm since the 1940s. That means
if you take the full height of a 35mm image, and enough of the width to
match the typical aspect ratio of a digital camera (let's say 3:4), the
image could resolve the equivalent of 24000 x 32000 = 768M pixels!
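Just to make that arithmetic explicit, here's a quick back-of-the-envelope
sketch (illustrative only; the frame dimensions and the 1000-lines/mm figure
are the assumptions above, and it treats each resolvable line as one pixel
in each direction):

  # Rough film-vs-digital resolution arithmetic (illustrative only;
  # assumes the resolving-power figures quoted above).
  FRAME_HEIGHT_MM = 24     # full height of a 35mm frame
  FRAME_WIDTH_MM = 32      # width cropped to the 3:4 aspect ratio
  LINES_PER_MM = 1000      # fine-grain B&W film, ideal conditions

  def equivalent_pixels(height_mm, width_mm, lines_per_mm):
      # Treat each resolvable line as one pixel in each direction.
      return (height_mm * lines_per_mm) * (width_mm * lines_per_mm)

  pixels = equivalent_pixels(FRAME_HEIGHT_MM, FRAME_WIDTH_MM, LINES_PER_MM)
  print("%.0fM pixel equivalent" % (pixels / 1e6))   # -> 768M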
It's not quite as simple or dramatic as that, of course. It's not just
the resolving power of the film that affects the image quality; there
are irradiation and halation effects to consider, as well as
graininess, lens quality ("the diameter of the circle of confusion" is
a phrase I will never forget). On the other hand, there are digital
artefacts like edge effects to think about too.
Even if you take a normal fine-grain silver halide image, under average
conditions, you'd have about 15M pixels (I found that in a few
references on the web). That's about 5 times more than a 3.1M pixel
digital image -- except they're analogue pixels, in a sense; the size
and colour are infinitely variable, not variable in discrete steps.
Moreover, 3.1M pixels in the camera aren't 3.1M pixels in the final
image. It depends on how they're used, but in the camera you typically
need three photosites, one each for R, G, and B, to get one RGB pixel in
the image. Some techniques use even more (the Bayer pattern uses a 2x2
block of four: two green, one red, one blue).
--
Pete                                            Peter Turnbull
                                                Network Manager
                                                University of York