Rob MacKillop
I see what you are getting at, and you might well be right...Thanks.
Have you tried rescuing the first scan by using the Shadows adjustment in your post-processing program? I've yet to see anyone present multi-scans that do better than a shadow adjustment. The whole point with multiscan that no one can explain is how more light can penetrate the denser parts of the film on a second scan. It's not cumulative, like adding a second spoon of sugar to your tea or coffee. You have to assume the first scan is transmitting the maximum amount of light designed into the scanner by the manufacturer. That's what the dMax rating is all about. So you can't increase it. Does the scan speed slow down? Although I can't see that making a difference either. My V600 captures the full histogram range of the shot in one pass. How would a second pass make it brighter? You can't get blood from a turnip.

No problem using my 9000F II with Silverfast in HDR mode, which involves multiple passes and increases the scanned dynamic range. I've rescued ISO 400 shots two stops under-exposed using this set-up, a scenario I'm not likely to need in future now that I've recognised how hot my meter was running.
How would a second pass make it brighter? You can't get blood from a turnip.
I think it looks great this way

Hahahahahaha... my first colour scan, and for some reason it inverted.
What you're describing is setting levels, the white and black points. Even if you set them at, let's say, 0-125 for one scan and 125-255 on the second scan (or to the bottom and top that was exposed), you're still limited to the dMax of the scanner.

It's not boosting the dMax qualities of the scanner; it's redistributing the digital code values so you have more data in the darker areas to manipulate.
Linear scans essentially throw away half of your code values in the super-bright whites. By redefining where the upper and lower code values start and stop, you can have more values in the shadows to manipulate without having the image fall apart.
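To make that concrete, here is a rough sketch in Python with made-up numbers (not any scanner's actual pipeline): the gain only exists if the remap happens before the scanner quantizes to the output bit depth, which is why setting the points at scan time matters.

```python
import numpy as np

# Hypothetical example: the film only transmits up to ~49% of the scanner's
# full-scale light level, and the signal is quantized to 8 bits.
rng = np.random.default_rng(0)
light = rng.uniform(0.0, 0.49, size=100_000)          # continuous signal seen by the sensor

flat   = np.round(light * 255)                        # white point left at the default
scaled = np.round(np.clip(light / 0.49, 0, 1) * 255)  # white point moved down to the film's actual range

shadows = light < 0.05                                # the densest part of the image
print(np.unique(flat[shadows]).size)                  # roughly 14 distinct code values
print(np.unique(scaled[shadows]).size)                # roughly 27 distinct code values after the remap
```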
I understand the point you are making. It's the reason I set black and white points before the scan. But from a practical standpoint, I haven't seen the difference when comparing scans of the same picture between 0-255 and, let's say, 20-200 (what the histogram shows as the range of the picture). That could be because even with one scan at 0-255, there is sufficient information in the bits that additional scanning and mapping of select portions won't add anything. Using additional scans or setting levels will not give greater "penetration" in the denser areas than simple "shadow" adjustments in post-processing provide. In effect, the dMax stops fancy footwork with multiple scans or even adjusting the levels before the scan. Adding more code values to the denser areas won't give more meaningful data any more than raising the resolution to 6400 will give more actual resolution. I don't know how to prove it, but I've never seen the difference in my own scans after post-processing adjustments. Nor have I seen others post comparative results that show any differences that could not be obtained from simple shadow adjustments in post.

Alan, it has to do with how values from light to dark are encoded into bit values. No, the scanner's dMax capabilities do not change, but re-arranging how the values are mapped to digital codes can help recover details that would otherwise be lost, or not differentiated, by having too few code values.
Sensors encode light values in a linear fashion; i.e., halving the light intensity halves the code value. Sensors encode from lightest to darkest, so to encode the first stop of your film's transmissible density you use (in 8-bit, for example) code values 255 down to 128, the next stop 128 to 64, the next 64 to 32, and so on. See how your code values diminish toward the most dense portion of the scan? See how that is a problem?
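Put as quick arithmetic (a sketch only, using 256 and 65,536 levels for round numbers):

```python
# Distinct code values available for each successive stop down from the
# brightest tone in a linear encoding.
for bits in (8, 16):
    top = 2 ** bits
    per_stop = [top // 2 ** (s - 1) - top // 2 ** s for s in range(1, 7)]
    print(f"{bits}-bit:", per_stop)

# 8-bit:  [128, 64, 32, 16, 8, 4]
# 16-bit: [32768, 16384, 8192, 4096, 2048, 1024]
```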
Now, depending on whether you are scanning a negative or a positive, this poses different problems.
If you take two or three scans to later combine, progressively place the scan's black point well below the film's dmin, and apply an inverse curve to the scan, you expand the dense areas so that more code values are mapped to those regions. Software can then sample and combine the two or three scans to give you better code values overall to manipulate.
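One way to picture the "expand the dense areas" step (a sketch with an arbitrary gamma value, not the poster's actual curve): in 16-bit there is room to stretch the dense end of the scale without posterizing it.

```python
import numpy as np

scan = np.linspace(0, 1, 2**16)            # stand-in for a normalized 16-bit scan
expanded = scan ** (1 / 2.2)               # simple lifting curve; dense areas move up the scale

# The darkest 5% of the input now spans roughly the bottom 26% of the output,
# i.e. far more code values are mapped to those regions before combining scans.
print(expanded[int(0.05 * scan.size)])     # ~0.256
```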
It's a rushed explanation, but I'm doing the best I can with the time limitations I have.
The scanner's lamp can only push so much light through the film. Additionally, the scanner's sensors are rated and adjusted to pick up only so many photons of light. Adding more bits to the DR range will give you no more range than upping the scanner's resolution setting to 9600 dpi will give you more resolution than the scanner's optics can provide. The sensors can only gather so much information. Multiplying the data captured with more software bits will not provide more resolution. Nor will splitting the scan into two scans allow the scanner to see through the film's density.
On paper that sounds good. But how do you design a scanner that waits long enough to gather all those photons in the dense areas without blowing out the less dense areas? And then on the multiple scans, how do you determine which areas are really blown out and which aren't, and then combine all that data to get a meaningful scan of the original film? There is also a practical limitation on the sensor's ability to gather data over too long a period without distortion, perhaps also coming from adjoining lighted areas. My eye can stare all night at a black portion of the night sky gathering photons of some dim star, but my brain still never sees it.

But the latter part of your statement above is flat-out wrong! For a constant light source, the number of photons arriving at the sensor varies with the optical density of the film, or of whatever else sits between the light and the sensor. Pick any density you like; as an extreme example, an optical density of 5 (more than most films) means that 1 out of 100,000 photons that hit one side of the film arrives at the other side. Now if you wait until 1 million photons have hit the film, you will get 10 photons on your sensor; if you wait until 100 million photons have hit the film, you will get 1,000 photons on your sensor, and so on. You just need to wait until a measurable number of photons has arrived. This is indeed why your DSLR has a shutter speed dial... and why the Epson scanners are capable of varying the exposure time...
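For anyone who wants to check the arithmetic, here is the same photon bookkeeping as a tiny Python sketch (illustrative numbers only):

```python
density = 5.0                     # optical density of the densest part of the film
transmission = 10 ** -density     # OD 5 passes 1 photon in 100,000

for incident in (1e6, 1e8, 1e10):
    print(f"{incident:.0e} photons in -> {incident * transmission:,.0f} reach the sensor")

# 1e+06 photons in -> 10 reach the sensor
# 1e+08 photons in -> 1,000 reach the sensor
# 1e+10 photons in -> 100,000 reach the sensor
```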
Any film that has less than an infinite density will allow some photons to pass, so it's not a case of getting blood out of a stone, because all films always allow some light through, assuming of course there is some image there in the first place...
I think you ought to send that suggestion to Epson.
But how do you design a scanner that waits long enough to gather all those photons in the dense areas without blowing out the less dense areas? And then on the multiple scans, how do you determine which areas are really blown out and which aren't, and then combine all that data to get a meaningful scan of the original film? There is also a practical limitation on the sensor's ability to gather data over too long a period without distortion, perhaps also coming from adjoining lighted areas.
Basically I do the same thing, except no curves or any other adjustments apart from setting the black and white points. Sometimes I don't even do that; everything is just flat. It appears to me that Epsonscan is just applying the curve after the scan. There's no hardware change in the scan process, so whatever data bits I would get from setting curves with Epson are just offsetting what the scanner is pulling out. I could do that in post.

Alan, ah, I knew I shouldn't have started this discussion until I got home!
Regardless of polarity, I scan to an intermediate file with black and white points set "normally", but with a pronounced curve imposed upon the scene that gives me a light, super-low-contrast image. There are practically no pure blacks or pure whites (rather, dmin and dmax), and it distributes the image information fairly evenly across the histogram. I then save this as an uncompressed TIFF file of 16 bits minimum. I use this file as my "negative" and, using curves again, pull everything back down to its proper place, rearranging and enhancing the areas I wish to emphasize with the expanded code values now present.
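A toy version of that round trip, with made-up curve shapes (not the actual curves used), just to show that the flat 16-bit intermediate loses essentially nothing:

```python
import numpy as np

def scan_curve(x):                       # strong lift applied at scan time
    return 0.05 + 0.9 * x ** (1 / 2.5)   # no pure blacks or whites, tones spread out

def edit_curve(y):                       # complementary curve applied later in editing
    return np.clip((y - 0.05) / 0.9, 0, 1) ** 2.5

x = np.linspace(0, 1, 11)                # original tones
flat = np.round(scan_curve(x) * 65535)   # the low-contrast 16-bit intermediate
final = edit_curve(flat / 65535)         # pulled back down to their proper place
print(np.max(np.abs(final - x)))         # ~1e-5: only 16-bit quantization error remains
```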
My Epson V600 doesn't change speeds. Which Epsons do?

No need, they already know, as it is built into the scanner in the first place. No need to mention it to Nikon, Canon, etc. either, as they got the memo too...
How?

Each of those questions has a technical solution that has already been solved... BTW, they also got that memo too.
I will attempt to answer each of your questions:
Q1. But how do you design a scanner that waits long enough to gather all those photons in the dense areas without blowing out the less dense areas? A1. You design one with the most dynamic range possible (this is of course fixed, so there will be a limit), and then you move on to multiple exposure...
Q2. And then on the multiple scans, how do you determine which areas are really blown out and which aren't, and then combine all that data to get a meaningful scan of the original film? A2. That is what computers do very well... This, combined with the fact that CCDs have a largely linear response, makes the job simple (a rough sketch follows after this list). A modern smartphone with HDR or a high-end "Hollywood" movie camera will do the multiple exposures at the "same time" (actually one exposure straight after the other), almost as if it were just one exposure...
Q3. There is also a practical limitation on the sensor's ability to gather data over too long a period without distortion, perhaps also coming from adjoining lighted areas. A3. Yes, noise is an issue, and that can be improved; you may recall the early DSLRs were not all that good with long exposures... But also remember that the image quality at the extreme end of the toe or shoulder is not very good in the first place, but it is there!
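To illustrate A2, here is a minimal merge sketch (hypothetical 4x exposure ratio and clip threshold; real software weights things more carefully):

```python
import numpy as np

def merge(short_exp, long_exp, ratio=4.0, clip=0.98):
    # Both scans normalized to 0-1. Because the CCD response is linear,
    # dividing the longer exposure by the exposure ratio puts it on the
    # same scale as the shorter one.
    long_scaled = long_exp / ratio
    blown = long_exp >= clip                # pixels the long exposure clipped
    return np.where(blown, short_exp, long_scaled)

# Dense shadow, midtone, and near-white pixel from two hypothetical scans:
short = np.array([0.01, 0.50, 0.99])
long_ = np.array([0.04, 0.99, 1.00])        # the 4x exposure clips the brighter pixels
print(merge(short, long_))                  # [0.01 0.5  0.99]: shadows taken from the
                                            # long scan, highlights from the short one
```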
My Epson V600 doesn't change speeds. Which Epsons do?
... But the final issue is "show me". I've been asking people to show me comparisons for years.
The V600's fastest speed is 21 msec per line. It slows down if you add ICE to remove spots. (I think it scans twice, but since I never use ICE I'm not sure.) But that has nothing to do with your statement. It doesn't change what data is gathered during the scan; it just applies post-scan processing changes, unless you scan flat. Also, the purpose of what you call the calibration area is not calibration but rather to tell the machine which film holder you are using - 35mm or medium format in the case of my V600.

Yours does, most of the Epsons do, and you can show yourself by...
Take a piece of processed leader film and place it in the calibration area, or just put the film holder in back to front so that the extremely dense plastic covers the calibration area (the little cut-out on the short side of the film holder). The scan should take longer; with the dense plastic it will take ages, and the scans will be totally blown out...
If you want to trial other software that allows you to control this directly, then that is up to you...
If you can't show the benefits with yours or someone else's scans, why suggest doing it? Multiple scans waste a lot of time.

I recall one such example that someone posted, and the noise difference was obvious to me. I am not going to post examples, but I am happy to post explanations along with methods for others to conduct their own testing. IMHO it's up to the photographer to decide if the difference is relevant to their work.
Also, the purpose of what you call the calibration area is not calibration but rather to tell the machine which film holder you are using
If you can't show the benefits with yours or someone else's scans, why suggest doing it?
Not sure what white calibration has to do with density and scan speed and dMax; if the sensor area is improperly covered you simply get a bad scan. I'm assuming normal operating procedures.

If you say so. I think you need to send that correction to Epson as well:
If this area is covered, the scanner will perform an incorrect transparency white calibration reading and cause your scan preview and scanned image to look washed out and/or be marred by lines.
Perhaps you might try the test I suggested.
I am not really suggesting that anyone do anything. I am just answering questions and correcting inaccuracies, to perhaps increase someone's understanding by providing practical examples that they can test for themselves if appropriate. Or maybe I am just wasting my time...
but I'm worried about banding in the following image.
Not sure what white calibration has to do with density and scan speed and dMax; if the sensor area is improperly covered you simply get a bad scan. I'm assuming normal operating procedures.
Dear Gents, I wasn't going to post more images here, but I'm worried about banding in the following image. I also have a scan by a professional developer, and it too shows banding, so I don't think it's the scanning process, although that might have exacerbated it. Your thoughts appreciated.

Processing. If development is uneven it leaves such streaks, be it a dip-and-dunk lab or inversion home processing. The streaks probably align with the gaps in the reel the film was processed on. Turning the tank two turns clockwise after the first inversion and counter-clockwise after the second inversion throughout the developing time smooths them out. Turning in one direction only creates a different type of streak.
How do you adjust the B&W points in Epsonscan with No Color Correction selected?

One thing about scanning without color corrections: you can have everything unchecked on the front page but still get auto corrections. At the bottom of the scan page there's a button called Configuration. When you hit it, you'll go to another page called Configuration, where you have to check No Color Correction to eliminate color corrections. You can still set the white and black points, however.