Van Burnham wrote:
> If you were using Fotolook 2+ scanning software for the Arcus you
> possibly engaged the "descreen" function (set to the correct line
> screen) which
Yep, that was it.
> automatically adjusts the scanner to compensate for screen angles and
> minimizes moiré...however it is possible to adjust a moiréd or "color
> One of the easiest methods is Noise>Despeckle...this will usually do a fair
I've tried that.
> You can also try re-interpolating the image by setting your
> interpolation to Bicubic, changing your image size to 200% (resampled)
> and then reducing
Why? I would think that sequence would be completely reversible.
One pixel becomes four pixels and then back again.
> back to its original size...this only works if maintaining sharpness is
> not a factor and can sometimes exacerbate the problem. Also, Noise>Dust &
> Scratches or a mild Blur>Gaussian Blur of 1 pxl or so will usually do the
> trick but again, you are going to lose detail (running an unsharp mask of
> 30-50% can help)...
Tried that too. That worked the best of what I have tried so far.
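On the reversibility question: the round trip can't be reversible, because the interpolated pixels are blends of several neighbors, and the reduction blends them again; the net effect is a low-pass filter. A minimal 1-D sketch in Python/NumPy (linear interpolation standing in for bicubic, pair-averaging standing in for the resampled reduction — both are simplifications of what Photoshop actually does):

```python
import numpy as np

# A 1-D "image" row with a strong pixel-to-pixel pattern.
x = np.array([0.0, 10.0, 0.0, 10.0, 0.0])

# Enlarge to 200%: each new sample is interpolated between neighbors
# (linear here, as a stand-in for bicubic).
up = np.interp(np.arange(10) / 2.0, np.arange(5), x)

# Reduce back to the original size by averaging each pair of samples
# (a crude stand-in for a resampled size reduction).
down = up.reshape(-1, 2).mean(axis=1)

print(x)     # [ 0. 10.  0. 10.  0.]
print(down)  # [2.5 7.5 2.5 7.5 0. ] -- contrast reduced, not the original
```

So one pixel does become four (in 2-D), but those four are mixtures of several source pixels, and the reduction mixes them yet again; fine detail — including the moiré — is smeared rather than restored. That's also why the trick costs sharpness.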
> It is _always_ best to correct these problems at the scanning stage...that
> is the only way to both maintain quality AND optimize
> sharpness/detail...quick fixes will almost always degrade the image.
Well, that was my experience with that particular scanning software, but I
just want to know how they did it so I can reproduce the results in
something else.
I was thinking: if your CCD has 300 dpi and you scan at, say, the dot
frequency of 133 dpi, does it really sample all 300 pixels in an inch and
then resample downwards, or does it just sample the CCD pixel that is
closest to the dot being scanned? If it does the former, then scanning at
300 dpi and then down-converting to 133 dpi in Photoshop should be exactly
equivalent; if it does the latter, then I would expect really bad results,
because sometimes the CCD element being sampled falls on a screen dot and
sometimes it doesn't. In reality the dot size is supposed to correspond to
the darkness of that color; so if you had an optical means to sample the
entire "cell" consisting of the screen dot plus the white space around it,
the whole cell and nothing but the cell, then you could get the color right
for that cell. But the cells are not an axis-aligned grid; the screens are
slanted (and to make matters worse, differently for different colors).

So I think the optimal solution might be to sample at a very, very high
resolution (many pixels per screen dot), edge-detect the screen dots, use
that information to break up the image into cells, average the darkness for
all the dots in each cell, use that color to fill the entire cell, and then
resample that image downwards to the maximum achievable resolution
(whatever that is). As long as screen dots don't overlap you could even do
that separately for each color. I don't know if any existing software does
that. Maybe Fotolook did. Or maybe that's overkill; maybe just a few
samples per screen dot is enough. But I'm pretty sure skipping any of the
CCD's available dots is not a good idea, and I don't know if that's what my
scanner does when I tell it to sample at 133 dpi. (I have an HP IIcx.)
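To illustrate the difference between the two strategies, here is a hypothetical 1-D sketch in Python/NumPy (the tone, screen pitch, and window math are my assumptions, not anything a real scanner documents): a uniform 1/3 midtone printed as a dot every 3 pixels of a 300 dpi scan line, reduced to 133 output pixels either by picking the single nearest CCD sample (skipping pixels) or by averaging every CCD sample in each output pixel's window:

```python
import numpy as np

# 1 inch of a 300 dpi scan line: a uniform 1/3 midtone rendered as a
# halftone with one ink pixel every 3 samples (roughly a 100 lpi screen).
hi = np.tile(np.array([1.0, 0.0, 0.0]), 100)       # 300 samples

ratio = 300 / 133                                  # CCD pixels per output pixel
n_out = 133

# (a) Skip pixels: take the single CCD sample nearest each output position.
idx = np.minimum((np.arange(n_out) * ratio).round().astype(int), len(hi) - 1)
nearest = hi[idx]

# (b) Use every sample: exact box average over each output pixel's window.
# Linear interpolation of the cumulative sum gives the exact integral of a
# piecewise-constant signal, so the fractional window edges are handled exactly.
cum = np.concatenate([[0.0], np.cumsum(hi)])
edges = np.arange(n_out + 1) * ratio
averaged = np.diff(np.interp(edges, np.arange(len(cum)), cum)) / ratio

print(nearest[:12])                 # raw 0s and 1s beating against the screen
print(nearest.std(), averaged.std())  # the averaged version is far flatter
```

The nearest-sample output is pure black-or-white values whose pattern beats against the screen pitch — moiré — while the averaged output stays bounded around the true 1/3 tone with much lower variance. The cell-by-cell averaging described above is essentially this box average with the windows aligned to the (slanted) screen cells instead of a fixed grid.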
> Not to be a prude...but what on earth does this have to do with the
> discussion regarding the collecting of classic computers? Archiving?
Not much, but it is at least an interesting tangent, isn't it?
(Sorry if we're boring anybody.)
--
_______ KB7PWD @ KC7Y.AZ.US.NOAM ecloud(a)bigfoot.com
(_ | |_) Shawn T. Rutledge
http://www.bigfoot.com/~ecloud
__) | | \_____________________________________________________________