ProfessorC1983
Member
There was a thread here a few months back about using SilverFast as a sort of poor man's densitometer, which linked to these very helpful pages:
How to make linear scans with SilverFast 8.8: https://www.sebastian-schlueter.com/blog/2017/2/10/how-to-make-a-linear-scan-with-silverfast-88
Using a scanner as densitometer: https://sites.google.com/site/negfix/scan_dens
Using these tutorials I now have high-quality 16-bit linear HDR scans of my negatives, and know how to manually compute the density of a given area of the image.
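For reference, the manual computation those tutorials describe boils down to a single formula. A minimal sketch (the ~10% pixel value is a made-up example, not from a real scan):

```python
import math

# Transmission density of a 16-bit linear sample:
#   D = log10(full_scale / pixel_value)
full_scale = 65535
pixel_value = 6553          # hypothetical pixel at ~10% transmission
density = math.log10(full_scale / pixel_value)
# density comes out to ~1.0, i.e. one order of magnitude of attenuation
```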
However, I'd like to take this a step further by building an automated process to calculate the (rough) density range of a given linear scan, so that I can take a pile of scans and easily identify which are the highest in contrast (and so the best subjects for alt-process printing, which requires a DR of, say, >1.6).
I was a software engineer in a past life, so I have the technical skills to pull this off using a toolkit like ImageMagick, but what I don't know is the best conceptual method for doing so. Here is what I have currently, in pseudo-code:
- Crop to 98% horizontal and vertical (to eliminate any film border I might've left in when cropping the prescan)
- Resize image to 1% (reduce to a manageable number of pixels)
- Calculate a histogram of unique greyscale values (still 16-bit, so essentially the 1-65535 value of each pixel)
- Parse result, put in numeric order
- For the max and min values, take the log10 of (65535/X) to derive Dmax and Dmin, then take the difference as DR
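The steps above can be sketched in Python (numpy here purely for convenience; ImageMagick would do the same job). The function name, the 1% border fraction, and the block-averaging approach to the "resize to 1%" step are my own placeholders, not a standard method:

```python
import numpy as np

def density_range(img, border_frac=0.01):
    """Approximate (Dmin, Dmax, DR) from a 2-D array of raw 16-bit
    linear scanner values (1..65535). A sketch of the pseudo-code above."""
    h, w = img.shape
    # 1. Crop ~1% off each edge to drop any leftover film border.
    dy, dx = int(h * border_frac), int(w * border_frac)
    img = img[dy:h - dy, dx:w - dx]
    # 2. "Resize to 1%": average non-overlapping blocks so each
    #    block's mean becomes one sample (a stand-in for resampling).
    by = max(1, img.shape[0] // 100)
    bx = max(1, img.shape[1] // 100)
    h2 = (img.shape[0] // by) * by
    w2 = (img.shape[1] // bx) * bx
    blocks = img[:h2, :w2].reshape(h2 // by, by, w2 // bx, bx).mean(axis=(1, 3))
    # 3-4. The histogram/sort steps reduce to taking min and max.
    vmin, vmax = blocks.min(), blocks.max()
    # 5. Density = log10(65535 / value); the darkest (most dense)
    #    sample gives Dmax, the clearest sample gives Dmin.
    dmax = np.log10(65535 / vmin)
    dmin = np.log10(65535 / vmax)
    return dmin, dmax, dmax - dmin
```

Note that a 30:1 spread in linear values corresponds to a DR of log10(30) ≈ 1.48, regardless of where the two values sit on the scale.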
This seems to "work" as far as it goes, and clearly lets me identify more-contrasty negs, but I have no idea how accurate my derived "density range" is compared to what a real densitometer would report. The fuzziest part is deciding how big an area to sample - is the average value of each 1% block of the image a reasonable approximation? I've also played around with smaller sample sizes, but then had to throw out a few values on either end of the spectrum because they threw off the calculations pretty badly.
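One way to formalize the "throw out a few values on either end" idea is to read the endpoints at percentiles rather than at the absolute min/max, so a few dust specks or pinholes can't dominate the result. A sketch; the 0.5/99.5 cut-offs are assumptions to tune, not standard values:

```python
import numpy as np

def robust_density_range(samples, lo_pct=0.5, hi_pct=99.5):
    """DR from a flat array of linear sample values, ignoring outliers
    beyond the given percentiles (cut-offs are arbitrary assumptions)."""
    vmin = np.percentile(samples, lo_pct)   # near-darkest transmitted value
    vmax = np.percentile(samples, hi_pct)   # near-clearest value
    # log10(vmax) - log10(vmin) == log10(vmax / vmin), so the 65535
    # full-scale constant cancels out of the range calculation.
    return np.log10(vmax / vmin)
```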
I recognize it may not be possible to get an exact value, but is there anything I could be doing better to get an approximation useful enough to determine suitability for different types of printing methods?