I scan and stitch 120 images using the 35mm facility of my Epson V300 and PE7. I also make up panoramas from digital and analogue sources, and unless I have left no overlap, or far too little for the software to work on, I have yet to find a visible software join.
The point is simply that with a shift to either MF or LF, the worry about grain clusters recedes and the emphasis shifts to "how fast can I get this scanning thing over with?" - and that's where I'm at.
This has been an interesting discussion, though it seems to have gotten off-track a little. This is not necessarily bad, but it somewhat misses the point of my original posts. My main point in posting was to show how it is possible to get extremely high dpi (and presumably very high resolution as well) using very simple and inexpensive equipment.
In this case I have spent well under $100 on equipment and parts, partly because I used things I already had. Even if I had bought all of the parts, it would not have cost very much: far less, for example, than a FlexTight, a drum scanner, or even a Nikon scanner, and the Nikon scanner would not be able to touch what this approach is capable of. (This is yet to be proven, but I am confident that it is true.) Of course, a Nikon or FlexTight is a lot more convenient, if one has the money to buy one.
It is also only a tiny fraction of the price of a DSLR with a high megapixel rating.
"Why did 135 film become the dominant format in the first place?"

Exactly. The point I was trying to make is that the physical film itself is one piece of a total system. All the little bits and pieces affect total system resolution, and it's far better to improve the total system, or to start with a reasonably good system, and get serviceable images for the vast majority of uses. For 35mm film, a 24MP Bayer sensor coupled with a really good macro lens, a solid scanning setup and a good light source is pretty difficult to top without significantly increasing the time and complexity required to get there, and I would argue the improvement wouldn't really justify the only reason for chasing all that extra resolution: printing very large at very high resolution. Why would you do that with 35mm? You can get much better performance by going to a larger negative size.
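To put rough numbers on that, here is my own back-of-the-envelope sketch (the 6000 x 4000 pixel count is an assumption for a typical 24MP sensor, nothing official):

# Back-of-the-envelope: effective scanning resolution of a 24MP camera
# filling its frame with a full 35mm negative (36 x 24 mm).
MM_PER_INCH = 25.4

sensor_px = (6000, 4000)        # assumed typical 24MP sensor, width x height
frame_mm = (36.0, 24.0)         # 35mm full-frame negative

dpi_w = sensor_px[0] / (frame_mm[0] / MM_PER_INCH)
dpi_h = sensor_px[1] / (frame_mm[1] / MM_PER_INCH)

print(f"effective sampling: {dpi_w:.0f} x {dpi_h:.0f} dpi")
# -> effective sampling: 4233 x 4233 dpi (before any Bayer/demosaic losses)

So a single 24MP capture already samples at roughly 4200 dpi, in the same ballpark as a dedicated 4000dpi film scanner, which is why chasing much more for 35mm gives diminishing returns.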
"Bayer sensor filters in cameras are often fainter in colour, to make demosaicing and interpolation easier/possible, at the expense of colour depth and resolution."

I'd love to see some actual evidence of this on film scanned with a Bayer sensor: both the raw scan before processing to get a color positive, and the processed color positive after.
The color filters are not unique to cameras; scanners also have color filters that do the same thing.
"ICE is not free. It comes at the cost of slight but noticeable blurring."

There are reasons for the Coolscan's price.
With higher resolution DSLR copying, dust and scratches show all the more prominently - a huge advantage for the Coolscan's ICE.
DSLR copying of color negatives requires a conversion process. I have yet to see any that is remotely as quick or as good as a Coolscan with NikonScan. A Coolscan 5000 scan takes about 50 seconds per frame with ICE, about 30 seconds without.
The Coolscan 5000 has motorized accessories for batch scanning of many slides or whole rolls of film.
As I have already shown, the Coolscan's 4000dpi is more closely a match for at least a 36MP DSLR (rough arithmetic in the sketch after this post).
As much as I appreciate the high resolving power of the Coolscan's 4000dpi - I have compared it against 20" x 30" professionally made optical poster prints from 35mm film - I value the color/contrast fidelity more.
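The arithmetic behind the "at least 36MP" claim, assuming the often-quoted rule of thumb that a Bayer sensor resolves only about 75% of its nominal pixel count per axis (my own rough figures, not a measurement):

# Rough check of "4000dpi Coolscan ~ 36MP Bayer DSLR".
MM_PER_INCH = 25.4

frame_mm = (36.0, 24.0)
scan_dpi = 4000

# True-RGB pixel count of a 4000dpi scan of a full 35mm frame.
w = frame_mm[0] / MM_PER_INCH * scan_dpi   # ~5669 px
h = frame_mm[1] / MM_PER_INCH * scan_dpi   # ~3780 px
scan_mp = w * h / 1e6
print(f"4000dpi scan: {w:.0f} x {h:.0f} px = {scan_mp:.1f} MP (full RGB)")

# Assumed rule of thumb: Bayer resolves ~75% of nominal per axis,
# so matching true resolution needs ~(1/0.75)^2 times the photosites.
bayer_factor = (1 / 0.75) ** 2
print(f"Bayer equivalent: ~{scan_mp * bayer_factor:.0f} MP")
# -> roughly 38 MP, i.e. 'at least a 36MP DSLR'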
"ICE is not free. It comes at the cost of slight but noticeable blurring."

The need for dust removal is largely an effect of using these kinds of holders, which attract dust like crazy.
It was never that much of an issue with enlarging, and the little there was could be taken care of with a blower.
Should it be pathological, you can rewash the film.
ICE would in theory be fully possible with a DSLR, since CMOS sensors are also sensitive to IR.
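For what that could look like in practice, here is a minimal sketch of my own (not any shipping product's pipeline): capture a second exposure through an IR-pass filter, threshold it into a dust mask, and inpaint the masked pixels. Dust and scratches block IR while the dye clouds are largely transparent to it. The filenames and the threshold value are assumptions.

import cv2
import numpy as np

# Hypothetical DIY "ICE": a normal RGB capture plus a second capture
# of the same frame through an IR-pass filter.
rgb = cv2.imread("frame_rgb.tif")                        # assumed filename
ir = cv2.imread("frame_ir.tif", cv2.IMREAD_GRAYSCALE)    # assumed filename

# Normalize the IR frame, then mark dark (IR-blocked) areas as defects.
ir_norm = cv2.normalize(ir, None, 0, 255, cv2.NORM_MINMAX)
_, mask = cv2.threshold(ir_norm, 80, 255, cv2.THRESH_BINARY_INV)  # assumed threshold

# Grow the mask slightly so defect edges are covered too.
mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=1)

# Fill the masked defects from surrounding pixels.
clean = cv2.inpaint(rgb, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("frame_clean.tif", clean)

The practical catch is getting a pixel-registered IR exposure, which dedicated scanners solve with an IR channel in the scanning head.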
OK, it's not a new idea, but I have been putting together a simple system to scan film using a microscope objective to capture small pieces of the film image and then stitching the results together.
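For the stitching step, something as simple as OpenCV's scan-mode stitcher can work on flat film captures, since there is no parallax to model. A minimal sketch, with the tile path as a placeholder (my tiles come from the microscope rig):

import cv2
import glob

# Stitch overlapping tiles captured through the microscope objective.
# SCANS mode assumes a flat subject (affine motion, no perspective),
# which fits film on a stage far better than PANORAMA mode.
tiles = [cv2.imread(p) for p in sorted(glob.glob("tiles/*.tif"))]  # assumed path

stitcher = cv2.Stitcher.create(cv2.Stitcher_SCANS)
status, mosaic = stitcher.stitch(tiles)

if status == cv2.Stitcher_OK:
    cv2.imwrite("stitched_frame.tif", mosaic)
else:
    print(f"stitching failed with status {status}: more overlap needed?")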
"The need for dust removal is largely an effect of using these kinds of holders, which attract dust like crazy. It was never that much of an issue with enlarging, and the little there was could be taken care of with a blower."

It isn't just the holders; it is also a characteristic of the optical path and light source.
I have a Nikon Coolscan LS8000 and find the speed - or more correctly, the LACK of speed - means that time I could spend editing the shots, adjusting the contrast, color temp, etc. is lost in the grinding of the motorworks. Scanning 120 film takes about 20 minutes or so for 3 frames, prescan and scan. Ouch. The scanning of 300-plus MF shots from a trip to France? The scan that broke the camel's back. Yes, Nikon Coolscans are good, but are they really better?

Kenneth Lee Graham has some useful tidbits here ( http://www.kennethleegallery.com/html/tech/index.php ), but I'd add that the larger the format, the more attention you have to pay to keeping the film and its environment clean, and the less significant dust becomes relative to image size. My 20-to-30-year-old (plus) scanner made the trip to the new house, but remains unpacked, and I'm hoping it will stay in its comfort zone.

Negative Lab Pro plus Negative Solutions gear is a "better" answer to today's needs the way that 35mm fit the on-the-go needs of the 1960s: not initially better per se, but enabling a new portability that paid dividends down the road.
Let's not get lost in the weeds here: Nikon and other scanners are/were fine and may still serve today with some catch-up maintenance (mine took a bunch of dollars to put back into shape and keep there), but for the vast majority of new-to-film shooters these are likely to prove of questionable ECONOMIC merit for the years ahead.
Bayer sensor filters in cameras are often fainter in colour, to make demosaicing and interpolation easier or even possible, at the expense of colour depth and resolution.
Can you point me to some documentation on the web somewhere that backs this up? I've long studied Bayer sensors and would love to read any research done on their color/spatial performance.

That being said, how exactly does a fainter color filter make it easier to demosaic or interpolate? I'm asking because I've written a fair amount of code that does exactly that. The color filters do vary from camera model to camera model, and yet the algorithm to demosaic the sensor is exactly the same between camera models.

What it does affect is the color matrix used to conform the raw colors to a color space; however, once you've characterized that and have a color matrix, the color performance tends to be very high. You also have to do the same thing when you have full RGB per pixel, like in a scanner. The only difference is that there is no demosaic step.
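For reference, this is the kind of thing I mean: a minimal bilinear demosaic of an RGGB mosaic followed by a 3x3 color matrix. The matrix values here are made-up placeholders; real ones come from characterizing the camera.

import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic (2D float array)."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    # Masks marking where each color was actually sampled (RGGB layout).
    r = np.zeros((h, w)); r[0::2, 0::2] = 1
    b = np.zeros((h, w)); b[1::2, 1::2] = 1
    g = 1 - r - b
    # Bilinear interpolation kernels for the sparse samples.
    k_rb = np.array([[.25, .5, .25], [.5, 1, .5], [.25, .5, .25]])
    k_g = np.array([[0, .25, 0], [.25, 1, .25], [0, .25, 0]])
    for i, (mask, k) in enumerate([(r, k_rb), (g, k_g), (b, k_rb)]):
        rgb[..., i] = convolve(raw * mask, k, mode="mirror")
    return rgb

# Conform the raw colors to a working space with a 3x3 matrix
# (placeholder values; a real matrix comes from characterization).
cam_to_srgb = np.array([[ 1.6, -0.4, -0.2],
                        [-0.3,  1.5, -0.2],
                        [ 0.0, -0.5,  1.5]])

raw = np.random.rand(8, 8)                 # stand-in for raw mosaic data
rgb = demosaic_bilinear(raw) @ cam_to_srgb.T

Note that the demosaic step above knows nothing about how faint the filters are; only the matrix at the end changes per camera, which is exactly my point.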
I'd be hard pressed to do it off the cuff, without sinking an hour or two into digging the sources out.
Alan's links look good though.
Also do a search on David Mullen, the cinematographer. He shoots film with zest and is also a RED ambassador. Especially noteworthy are his postings on the ASC site. He has quite a good technical grip on what goes on in the sensor, and immediately after, when the image is read off the sensor - and, importantly in this context, on the advantages of film's colour registration.
The idea of having a non-fully-saturated RGB filter array is to make it easier to extrapolate luminance information without guessing so much.
The bet is that you can still make a reasonable guess at colours with better demosaicing, and that people will prefer the apparent higher resolution to better colour precision when they pixel peep.
As mentioned, it is somewhat the same idea as having more green sensor sites to raise the spatial resolution, just taken a step further.
Some manufacturers have even experimented with CMY instead of RGB for the same reason, though with little success.
Colour resolution is kind of more ephemeral than luminance, especially when explained with words; show some clear visual examples and it becomes clearer. And the math behind it is more or less bulletproof.
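Here is a toy numerical illustration of that trade-off, entirely my own construction and not from any paper: with desaturated filters every photosite carries more of the shared luminance signal, but recovering saturated colors requires a stronger correction matrix, and that matrix amplifies chroma noise.

import numpy as np

# Toy model: rows = R/G/B photosites, columns = 'true' scene primaries.
# A fully saturated CFA is the identity; a desaturated CFA lets every
# photosite leak in some of every primary.
def cfa(leak):
    m = np.full((3, 3), leak)
    np.fill_diagonal(m, 1.0 - 2 * leak)
    return m

for leak in (0.0, 0.1, 0.2):
    m = cfa(leak)
    inv = np.linalg.inv(m)            # matrix that restores saturated color
    # Worst-case noise gain of the color-correction step:
    gain = np.linalg.norm(inv, 2)
    print(f"leak={leak:.1f}  correction noise gain ~ {gain:.2f}")
# Fainter filters (bigger leak) -> larger inverse -> more chroma noise,
# while luminance (the row sums stay 1.0) is preserved either way.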
This link may contain some of the information you are interested in. It is a database of spectral sensitivity curves measured for various cameras.
https://nae-lab.org/~rei/research/cs/zhao/database.html
Here's a link to their result for the Canon XTi,
https://nae-lab.org/~rei/research/cs/zhao/files/canon_400d.jpg
and here's a link to their results for the Sony DXC-9000.
https://nae-lab.org/~rei/research/cs/zhao/files/sony_dxc_9000.jpg
"Guys, I have a V700, a Nikon 9000ED and a DSLR scanning setup that supports single shot and stitched capture, which I was quite happy with before I started buying scanners. I've been meaning to scan the same negative with all three for a while now but haven't gotten around to it. Tomorrow seems as good a day as any. Are there any particular procedures I should follow in my test that you guys would recommend, so that it will, if not answer this question once and for all, add a good data point to the discussion?"

That's a very nice offer!
"Well, as you probably noticed already, something interesting, though not surprising, can be observed if you compare the response curves for the 3-CCD cameras and the Bayered CCD/CMOS sensors."

Thanks for the links. I'm still reading through it, but I have some initial concerns... they don't appear to be looking directly at raw sample data. I say this because they make no mention of what raw multipliers they're using for each color channel, and that will affect the outcome a lot. Every digital camera has a native white balance (along the amber/blue axis) where the multipliers for the red and blue channels will be the same, and likewise for the green/magenta axis. If you don't use the right multipliers between the color channels, you significantly distort what the color response appears to be.

Out of the box, if you use 1.0 for the multipliers on all three channels, you'll quickly discover that the green channel is almost universally the most sensitive and has the highest response (which is not reflected in the data you are referencing) relative to the other two channels. You'll also discover that there is a spot on the WB Kelvin scale that produces the same output between the red and blue channels, and that manufacturers tune the color gels they put in the Bayer array to better suit how the camera is intended to be used. For example, the Canon 80D (a camera I've studied very intensively, as it was my main scanning camera from its release until just a few months ago) is intended as a general purpose camera. Its native white balance is 5200K, a good all-around general purpose WB. If you shoot under light that is 5200K, it has excellent color performance and dynamic range. Conversely, the Canon EOS RP has a native white balance of 4500K, a little better tuned to shooting indoors under artificial light. A camera intended for studio use with flash will have a native white balance closer to 6500-7000K, and one intended to shoot video under tungsten light will have one closer to 3200K.

My point is, the linked info is interesting, and I'm not saying it's invalid for their stated use case; I'm just a little concerned with how they're going about gathering the data. Btw, I have a Digital Rebel XTi, though I haven't really bothered to look closely at its WB response.
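To illustrate why the multipliers matter, here is a toy example of my own with made-up numbers: the same raw data suggests a completely different "sensitivity ranking" depending on which per-channel multipliers you apply before comparing channels.

import numpy as np

# Made-up mean raw values for a neutral gray patch; the green channel
# is typically the most sensitive straight off the sensor.
raw_means = np.array([0.42, 0.80, 0.55])      # R, G, B (illustrative)

# Multipliers of 1.0 show the native response...
native = raw_means * np.array([1.0, 1.0, 1.0])

# ...while typical daylight WB multipliers (also illustrative) rescale
# R and B up to match G, completely changing the apparent 'response'.
daylight_wb = raw_means * np.array([1.9, 1.0, 1.45])

for label, vals in [("multipliers = 1.0", native),
                    ("daylight WB applied", daylight_wb)]:
    r, g, b = vals
    print(f"{label}: R={r:.2f} G={g:.2f} B={b:.2f}")
# Any 'spectral response' measured after white balancing bakes the
# multipliers into the curves, which is my concern with the linked data.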