Being able to scan up to large format would always make it more desirable!
Not if it means it'll cost 10 times more!
Being able to scan up to large format would always make it more desirable!
I have a scanner that scans a roll at 4,000 dpi in 10 minutes. So it's dog slow compared to the 2-minute requirement of this day and age, where people shoot and scan billions of rolls per year and speed is of utmost importance...
Still, I feel it wastes less than a minute of my time*. That is the time it needs to pull in the entire film in order to make previews of all the frames. After that it immediately starts ejecting the film while scanning at full resolution. I rarely finish my adjustments to every frame before the scanner is done making full-res raw scans. The final step exports the finished images with my corrections to the desired location in the desired format (TIFF, JPEG); this step is basically limited by HDD/SSD speed, and in my case it takes a couple of seconds.
* Actually, I don't consider the initial preview pass lost time. Previewing the entire roll first can be quite beneficial: in the case of negatives, the software can make better initial inversions if it can analyse all frames on the roll, and it can adjust the sensor exposure time for under- or overexposed frames when doing the full-res scans.
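Roughly, the benefit of that preview pass can be sketched like this (a hypothetical illustration, not Noritsu's actual algorithm; the function name, the 99.5th-percentile base estimate and the 0.5x-4x exposure clamp are all my own assumptions):

```python
import numpy as np

def plan_roll_scan(preview_frames, target_median=0.35):
    """Plan the full-res pass from low-res previews of the whole roll.

    preview_frames: list of linear preview frames (numpy arrays scaled 0..1).
    Returns an estimate of the film base level and a per-frame exposure factor.
    """
    # On a negative, film base + fog is the brightest thing in the image,
    # so estimate it once from the whole roll rather than frame by frame.
    base_level = max(float(np.percentile(f, 99.5)) for f in preview_frames)

    exposure_plan = []
    for f in preview_frames:
        frame_median = float(np.median(f))
        # Give under/overexposed frames more or less sensor exposure time so
        # the full-res scan makes better use of the ADC range.
        factor = float(np.clip(target_median / max(frame_median, 1e-4), 0.5, 4.0))
        exposure_plan.append(factor)
    return base_level, exposure_plan
```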
My scanner is a Noritsu LS-600 with EZ Controller software. All Noritsu scanners work like that.
Thanks for your help. I just ordered the card. Now I'm going to try and find a simple computer that doesn't have ghosts of Bing and HP and so on.......
Maybe I'm missing something, but why not a refurb Dell desktop? Erase the drive and install your own OS. But you may have to go to a workstation model to have room for full-height cards.
I bought a new Dell workstation tower. Cheap, nothing but Windows Pro 11. Has room for 3 full size cards. I needed a new machine. I swear if it comes pre-loaded with a bunch of crap I'm going to send it back.
Just remove the crap. It's not rocket science.
Anyway, could this be discussed in a different thread? Feel free to open something in the Lounge or something.
It's always preferable to use at least a 16-bit sensor, especially for black and white film which can reach a D-max of around 3.0.
If we assume a 14-bit sensor, there's very little code value resolution for highlights.
CV 16383 - 100% transmittance - 0.000 density
CV 8192 - 50% transmittance - 0.301 density
CV 4096 - 25% transmittance - 0.602 density
CV 2048 - 12.5% transmittance - 0.903 density
CV 1024 - 6.25% transmittance - 1.204 density
CV 512 - 3.125% transmittance - 1.505 density
CV 256 - 1.562% transmittance - 1.806 density
CV 128 - 0.781% transmittance - 2.107 density
CV 64 - 0.390% transmittance - 2.408 density
CV 32 - 0.195% transmittance - 2.709 density
CV 16 - 0.097% transmittance - 3.010 density
A 16-bit scanner has four times the code value resolution, so even the densest parts land around CV 64 instead of CV 16. So there should be a noticeable difference in noise.
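For anyone who wants to play with the numbers, the table above is just CV = full scale × 10^(−density); a minimal sketch of that arithmetic, comparing 14-bit and 16-bit full scale:

```python
def code_value(density, bits=14):
    """Linear code value left for a given density, assuming the clear
    (0.0 density) reading sits at ADC full scale."""
    full_scale = 2 ** bits - 1
    return full_scale * 10 ** (-density)

for d in (0.0, 1.0, 2.0, 2.4, 3.0):
    print(f"density {d:.1f}: 14-bit CV ~{code_value(d, 14):6.0f}, "
          f"16-bit CV ~{code_value(d, 16):6.0f}")
# density 3.0: 14-bit CV ~    16, 16-bit CV ~    66
```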
That being said, is 16 bit better?
Yes, 3.0 can be reached by B&W films, but in my experience that takes pretty aggressive exposure and/or processing. Fogged film base can reach that, but if exposed and processed reasonably, most pictorial information tops out at ~2.4. I use a 14-bit sensor and put the film base plus fog at ~8000-10000 for the green channel, and I rarely see code values lower than 32 (roughly 2.4 density over base, given where I put FB+F), and that's for extreme specular highlights. Regular highlights are in the CV 128-64 range.
That being said, is 16-bit better? You betcha. It gives you way more discrete tone values to work with, and if you keep the FB+F as close to ADC saturation as possible, then you have enough S/N ratio (and enough discrete tone values) in the denser parts that noise just isn't much of an issue.
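To make that placement arithmetic concrete (my own illustration; FB+F at CV 9000 is just an assumed value inside the 8000-10000 range mentioned above):

```python
# Density over base vs. remaining code value on a 14-bit sensor,
# with film base + fog placed below ADC saturation (16383).
fbf_cv = 9000  # assumed placement of FB+F
for density_over_base in (1.0, 2.0, 2.4, 2.7):
    cv = fbf_cv * 10 ** (-density_over_base)
    print(f"{density_over_base:.1f} over base -> CV ~{cv:.0f}")
# 2.4 over base -> CV ~36, i.e. roughly the CV 32 floor mentioned above
```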
If done well.
It's easy to request 16-bit resolution instead of 14-bit.
It's not so easy to make a signal path that actually yields the additional data, and not just noise, when going from 14 bits to 16 bits.
People forget too easily that an A-D system like a scanner really starts in the A world...that's analog.
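One back-of-the-envelope way to see that point (an illustration only, with made-up sensor figures, not a claim about any particular scanner): the useful bit depth of the whole chain is capped by the analog dynamic range, roughly log2(full well / read noise).

```python
import math

def useful_bits(full_well_e, read_noise_e):
    """Rough dynamic range of the analog front end, expressed in 'bits'."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical sensor: 50,000 e- full well, 4 e- read noise
print(f"{useful_bits(50_000, 4):.1f} bits")  # ~13.6
# A 16-bit ADC behind such a front end mostly digitizes noise in its lowest
# codes; the extra bits only become real data if the analog chain improves too.
```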
For color negative in particular, most people use high-CRI white illumination and neutralise the base via white balance, but that's effectively analog gain: adjusting the gain of individual channels before digitization.
That's what digital cameras do NOT do. White balance can be adjusted after the shot has been made, on the raw (digital) data. That's one reason why people shoot raw.
If I take two shots of the same grey card under the same conditions using different white balance settings of the camera will I get two raw files with identical pixel values but different white point levels?
Yes, indeed. The only difference is that each raw file will have a different color temperature tag embedded in it. This is why the images may look different once you load them in your raw processor, since that will display the image using the "as shot" white balance.
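A toy sketch of that split, with made-up numbers (not any manufacturer's actual pipeline): both files hold identical sensor values, and only the stored "as shot" multipliers, which the converter applies afterwards, differ.

```python
# Two "files" from the same grey-card shot: identical sensor values,
# different white-balance metadata (hypothetical numbers, for illustration).
raw_pixels = (1200, 1350, 900)                         # R, G, B straight off the sensor
file_a = {"raw": raw_pixels, "wb": (2.0, 1.0, 1.5)}    # "daylight" as-shot tag
file_b = {"raw": raw_pixels, "wb": (1.4, 1.0, 2.3)}    # "tungsten" as-shot tag

def render(f):
    """What the raw converter does: apply the tagged multipliers at processing time."""
    return tuple(v * m for v, m in zip(f["raw"], f["wb"]))

print(render(file_a))  # looks different...
print(render(file_b))  # ...even though the raw pixel values are identical in both
```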
I wonder if all digital cameras use this approach.