Ian Tindale
Member
How does 'wasting' some of the top and tail of the scan histogram help the result, when the scanner's software performs an automatic assessment of the image?
For example, Epson Scan consistently trims the shadow end of the input range by quite a bit, and trims the highlight end by considerably more, then offsets the output range inward at the shadow end by roughly the amount it trimmed, and presumably at the highlight end by roughly the amount it trimmed there.
This means that the shadow range will always be truncated somewhat, not showing as smooth a gradation to maximum shadow as it could have done, and not allowing the output range to go down as far as the true black level either. Similarly, the highlight will always be clipped to a certain extent and correspondingly the output range will never reach the maximum brightness of the potential highlight.
Admittedly this gives a very punchy and mass-market acceptable contrasty image, but is this the way it should be done all the time?
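To make the effect concrete, here is a minimal numpy sketch of the kind of auto-levels behaviour described above. The function name and the trim fractions are my own made-up illustration, not Epson Scan's actual values or algorithm:

```python
import numpy as np

def auto_levels(img, shadow_trim=0.02, highlight_trim=0.05):
    """Hypothetical auto-levels: trim both ends of the input range,
    then offset the output range inward by the same amounts.
    img: float array with values in [0, 1]."""
    in_lo = img.min() + shadow_trim      # input black point pushed up
    in_hi = img.max() - highlight_trim   # input white point pulled down
    out_lo = shadow_trim                 # output never reaches true black
    out_hi = 1.0 - highlight_trim        # ...nor true white
    scaled = (img - in_lo) / (in_hi - in_lo)
    # everything below in_lo / above in_hi is clipped: gradation truncated
    return out_lo + np.clip(scaled, 0.0, 1.0) * (out_hi - out_lo)
```

Feeding this a full-range ramp shows both complaints at once: all tones below the trimmed shadow point collapse to one value, and the output range tops out short of white.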
When I do it manually, I top and tail the input histogram right up to the limits of the data (perhaps pinching the highlight inwards just a minuscule hint), and I bring the output limits all the way out to maximum at either end. Then I create (or re-use) a transfer curve that approximates the tonal distribution I remember the original scene showing. Typically this means bumping the highlight away from linear into a more curved shape near the extreme highlight. This way I keep the tonal range and maximise use of the output range.
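The manual approach can be sketched in the same terms. This is only an illustration under my own assumptions: the function names, the pinch fraction, and the cubic highlight rolloff are hypothetical stand-ins for whatever curve you would actually draw by eye:

```python
import numpy as np

def stretch_to_data_limits(img, pinch=0.002):
    """Map the input range [data min, data max] to the full output
    range [0, 1], pinching the highlight inward just a hint.
    img: float array with values in [0, 1]."""
    in_lo = img.min()
    in_hi = img.max() * (1.0 - pinch)  # tiny highlight pinch
    return np.clip((img - in_lo) / (in_hi - in_lo), 0.0, 1.0)

def highlight_rolloff(img, knee=0.8):
    """Transfer curve: identity up to `knee`, then a cubic that stays
    monotone, still reaches 1.0, and flattens near the extreme
    highlight instead of running straight into it."""
    out = img.copy()
    m = img > knee
    t = (img[m] - knee) / (1.0 - knee)        # 0..1 above the knee
    out[m] = knee + (1.0 - knee) * (-t**3 + t**2 + t)  # f(0)=0, f(1)=1, f'(1)=0
    return out
```

Chaining `highlight_rolloff(stretch_to_data_limits(img))` keeps the output range at its full extent while easing the curve away from linear near the extreme highlight, which is the shape described above.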