To throw the dedicated scanner color superiority myth out the window once and for all.
Besides, this thread is about resolution. The images aren't color-balanced to match, they're Negmaster defaults. I posted several disclaimers above.
Never tried ColorPerfect, but I use both NLP and Negmaster. Their authors have very different philosophies towards color inversion, if you will.
Both of them give you a 16-bit image to apply further tweaks to, but Negmaster output is more malleable and colors don't jump all over from scene to scene like they do with NLP.
- NLP takes the aggressive guessing approach: give me an image, no matter what, and I will try to guess what the most pleasing colors for it should be. It is optimized for one-click simplicity and speed. In the latest version you feed it an entire roll, click a single button, and you'll get mostly decent results by default. I am not surprised it's so popular.
- Negmaster takes the "I am RA4 paper" approach, for lack of a better description. Its algorithm feels far simpler, but more consistent. It is extremely sensitive to digitization exposure. So much so that I recommend exposure bracketing. Its workflow is not as quick: the first step is to apply a custom DCP profile which strips Adobe color science [1], then you transfer the file into Photoshop and run the plugin. Batch processing is so primitive that it's fair to say it's absent.
There's also Negmaster BR, which is a completely different product. I have not tried it because it's meant to be used with scanners only. Knowing the author, I'm sure it applies a similar color philosophy.
Usually I approach them this way:
- Apply the Negmaster DCP profile in Lightroom, open in Photoshop and invert manually. That's the gold standard. No automatic tool can beat that, and it's how I easily matched color from all of these scanners (a rough sketch of the idea follows after this list). For high quality "portfolio" images this is where it stops. Everything below is to speed up batch processing for vacation rolls or everyday snapshots.
- Select a sequence of shots of the same scene / light conditions and run them through NLP. If the output is close to the reference image above, I am done.
- If some images aren't quite there, I run them through Negmaster conversion and then tweak them into shape in PS, usually just a couple of slight curves.
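If you're curious what "inverting manually" boils down to, here's a minimal sketch of the basic idea in Python/numpy: sample the clear film base, divide out the orange mask, invert, then normalize and apply a gamma. This is purely an illustration, not Negmaster's or NLP's actual algorithm; the file name, the base-patch coordinates and the gamma value are placeholders.

# Minimal color-negative inversion sketch (illustration only, not Negmaster's
# or NLP's actual algorithm). Assumes a linear 16-bit RGB TIFF of the negative;
# "negative.tif", the base-patch coordinates and the gamma are placeholders.
import numpy as np
import tifffile

neg = tifffile.imread("negative.tif").astype(np.float64) / 65535.0

# Sample the clear film base (orange mask) from an unexposed border region.
base = neg[0:50, 0:50].reshape(-1, 3).mean(axis=0)

# Divide out the mask so the base goes neutral, then invert.
positive = 1.0 - np.clip(neg / base, 0.0, 1.0)

# Per-channel normalization plus a paper-like gamma for contrast.
lo, hi = positive.min(axis=(0, 1)), positive.max(axis=(0, 1))
positive = ((positive - lo) / (hi - lo)) ** (1.0 / 1.8)

tifffile.imwrite("positive.tif", (positive * 65535).astype(np.uint16))

In practice the per-channel curves you end up drawing in Photoshop do the same job with far more finesse.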
If I were to choose one, it would be Negmaster.
P.S. I am also fond of grain2pixel. It produces really nice flat & desaturated TIFFs that are easy to edit. It is also free. My only criticism is that it applies additional tweaks to the image: it completely removes color noise, for example, and it makes all film emulsions look more or less the same. Its batch mode is pretty good too.
[1] I believe that Adobe's color fuckery is the reason people complain about cameras not being able to capture true film colors. If you strip it away, you get the same uninverted image as what an X5 or Creo would give you.
It's not a myth, and it's not a scanner vs camera challenge. It's an inherent difference between line sensors and camera sensors, backed by solid theory. Great engineering behind both technologies, though designed for slightly different purposes. I'd be all over a factory-tuned camera rig employing a non-interpolating sensor. Somebody in this thread mentioned a D800E, sounds like a good starting point for 35mm scanning at least.
You mention resolution, as opposed to colour rendition, as the target of your test. Genuine question and not trying to be flippant: if pursuit of ultimate resolution is your goal, why employ a hybrid film setup at all? Wouldn't your Sony Alpha be capable of ultimate, easy-to-obtain resolution figures if used as a camera and not as a film scanning device? Unless of course your goal is to maximise resolution of historical film material for scanning purposes (where I can see a case for this type of setup).
Now... I am not sure what you mean by an inherent difference between line sensors and camera sensors backed by solid theory. The underlying sensor tech is the same, with a solid 10+ years of additional R&D in favor of modern camera sensors. If you are referring to different light frequencies filtered out by Bayer vs linear sensors, I couldn't find any specs online, but we can probably analyze what they are and adjust in processing. Perhaps you are referring to the demosaicing artifacts? Just enable the 4-step pixel shift mode that removes those, but TBH I find them negligible.
I love shooting film precisely because it's imperfect, and I do not print big.
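For readers unfamiliar with the 4-step pixel shift mode mentioned above: the sensor is moved by one photosite between exposures, so every pixel position gets sampled through a red, a blue and two green filters and no interpolation is needed. Below is a rough numpy sketch of the merge step, assuming four perfectly registered RGGB mosaics; real in-camera and raw-converter pipelines also handle black levels, white balance, motion rejection and vendor-specific shift orders, all omitted here.

# Rough sketch of a 4-shot pixel-shift merge (illustration only).
# Assumes four raw RGGB mosaics already registered to the scene, with the CFA
# pattern effectively offset by (dy, dx) photosites in each shot.
import numpy as np

def merge_pixel_shift(shots):
    """shots: list of four 2-D mosaics captured with offsets (0,0), (1,0), (1,1), (0,1)."""
    h, w = shots[0].shape
    rgb = np.zeros((h, w, 3), dtype=np.float64)
    offsets = [(0, 0), (1, 0), (1, 1), (0, 1)]
    for mosaic, (dy, dx) in zip(shots, offsets):
        for y in range(h):          # plain loops for clarity, not speed
            for x in range(w):
                fy, fx = (y + dy) % 2, (x + dx) % 2   # position inside the RGGB tile
                if (fy, fx) == (0, 0):                # red photosite
                    rgb[y, x, 0] = mosaic[y, x]
                elif (fy, fx) == (1, 1):              # blue photosite
                    rgb[y, x, 2] = mosaic[y, x]
                else:                                 # one of the two green photosites
                    rgb[y, x, 1] += 0.5 * mosaic[y, x]
    return rgb   # every channel at every pixel is a measurement, not a guess

Each pixel ends up with one measured red, one measured blue and the average of two measured greens, which is exactly the point of the feature.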
While your sentiment and thinking are fine and laudable, they somewhat miss some of the points of the thread. So then, if you find those differences negligible, that's another matter altogether. Ultimately what matters is what YOU see and what YOU expect.
I was only commenting as I had thought you were trying to offer your results with a sprinkling of scientific rigour, in an attempt to provide a pretty general result, in which case whether you find the differences negligible or not would be irrelevant, because the differences are there. Let me focus on the last one you mentioned, the demosaicing artifacts.
Off on a tangent - as I'm sure you know already, the raw output of a Bayer-filter camera is a so-called Bayer-pattern image, produced by an arrangement of colour filters on a square grid of photosensors. In the Bayer arrangement the filter array consists of repeating 2x2 pixel patterns: one filter for red, one for blue, and two for green. Importantly, each pixel is filtered to record only one of the three colours.
The key thing here is that each pixel of the sensor sits behind a colour filter, so the output is an array of pixel values, each indicating a raw intensity for one of red, green or blue. This arrangement needs an algorithm to estimate, for each pixel, the levels of all colour components rather than the single one that was actually measured.
This is called 'demosaicing'. There are different implementations of it; in essence it is a flavour of signal interpolation. Compared to the initial raw intensity data, the reconstructed image is typically accurate in uniformly coloured areas, but suffers a significant loss of spatial accuracy (and, many would agree, colour accuracy) in complex regions.
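To make the interpolation concrete, here is a bare-bones bilinear demosaic of an RGGB mosaic in Python/numpy. Real raw converters use far more sophisticated, edge-aware algorithms, so treat this only as an illustration of why two thirds of every pixel's colour data is estimated rather than measured.

# Bare-bones bilinear demosaic of an RGGB Bayer mosaic (illustration only;
# real raw converters use much smarter, edge-aware interpolation).
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(mosaic):
    h, w = mosaic.shape
    # Masks marking which photosites actually measured each colour.
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1.0
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1.0
    g_mask = 1.0 - r_mask - b_mask
    # Bilinear interpolation kernels: every missing value becomes an average
    # of its measured neighbours, i.e. a guess.
    k_rb = np.array([[0.25, 0.5, 0.25],
                     [0.5,  1.0, 0.5 ],
                     [0.25, 0.5, 0.25]])
    k_g  = np.array([[0.0,  0.25, 0.0 ],
                     [0.25, 1.0,  0.25],
                     [0.0,  0.25, 0.0 ]])
    r = convolve(mosaic * r_mask, k_rb, mode="mirror")
    b = convolve(mosaic * b_mask, k_rb, mode="mirror")
    g = convolve(mosaic * g_mask, k_g,  mode="mirror")
    return np.dstack([r, g, b])

Only one of the three values at each pixel of the returned image is a real measurement; the other two are interpolated from neighbours.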
But to go back to scanning, dedicated film scanners do not rely on Bayer (or worse, X-Trans) pattern matrices, and the raw output they produce does not require demosaicing. The so-called 'line CCD sensors' in a scanner are, at a very raw level, better than any camera sensor because they do not interpolate and because they image only a single line through the best part of a sharp dedicated lens, so no optical distortion or other lens flaws are added.
One consequence of the lack of a Bayer array + demosaicing is that when a scanner like the Coolscan 8000/9000 is scanning 90mp, those are 90mp of full colour data. Digital camera colour data is only 1/4 of the stated resolution due to the above. So even, say, a Fujifilm GFX 100 (a 102mp sensor, $8K camera) is only getting 25mp of full colour data (plus another 25mp of extra green [luminosity] data) from its 100mp of photosites.
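To put rough numbers on that claim (using the nominal 102mp of the GFX 100; exact figures depend on rounding), a trivial back-of-the-envelope calculation:

# Rough per-channel photosite counts for a 102mp RGGB Bayer sensor.
total_photosites = 102_000_000
red   = total_photosites // 4    # ~25.5mp of measured red
blue  = total_photosites // 4    # ~25.5mp of measured blue
green = total_photosites // 2    # ~51mp of measured green
# Every other per-channel value in the demosaiced 102mp file is interpolated.
print(red, green, blue)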
You mention a workaround to limit the above: pixel shift. It's a great way to improve your results, but it comes with issues, some of which have already been mentioned above. And you are still left with the limitations of digital camera colour, any lens flaws, having to go through the hassle of stitching when scanning 120 or above, plus any other issues inherent in the specific home-made scanning setup used (vibrations of the repro stand, imprecise sensor/film alignment, poor quality or evenness of the retroillumination, and much more). Orange mask removal is another story, and so is the lack of an IR (infra-red) channel for dust removal in home-made DSLR scanning rigs.
Why the wall of text? It's really not aimed directly at you, @Steven Lee . I just keep stumbling, on the broader social media, on content trying to sell DSLR scanning as THE thing to do if one wants to really enjoy hybrid photography. There is, I think, considerable commercial interest in selling overpriced scanning kit, hip 3D-printed $500 holders, etc. There are famous bloggers out there with a direct commercial interest in DSLR scanning gadgets (you know who they are) and YouTube vloggers paid by DSLR scanning companies to 'upgrade' to DSLR scanning. The result is that many beginners, teenagers who just purchased an AE-1 Program and want to jump into film, are starting to believe that a $4000 DSLR-based setup is what is going to give them those awesome, professional results they crave. Hint: it is not.

Understanding exposure and processing is far more important imo for the film experience (even when film is digitalised, as scanned film can fully preserve some of the characteristics many of us love about film). When exposure and development are fine-tuned, a $200 Plustek or a refurbished Coolscan used correctly will provide considerable enjoyment to many people out there without breaking the bank. I really like thinking that film photography is a hobby for everyone who can afford to buy film, a camera, a Paterson tank, and little more.
Over and out and thanks for the interesting test!
@Helge I get what you're saying, but I just fail to feel excited about fine detail. Look, a solid 80% of my images have a bit of motion or shake blur or missed focus in them. On top of that, my vision is not what it used to be, so subjectively I am just not drawn to the crispness you're referring to, although I feel it when I see it. As I said earlier, my 24MP Fuji delivered more detail than I ever needed, and I used my 36MP DSLR on the "medium RAW" setting because large files are a PITA to move around via WiFi, and large JPEG files sometimes take a couple of seconds to render full detail in some album apps, including Apple Photos.
Speaking of film's imperfect interpretation of the world, we're looking at highlight compression in CN films, B&W grain texture, etc. I like those. We all have weird preferences, heh?
I'm surprised that we've got this far and no-one has considered the obvious and simple comparative test: an inherently non-Bayer, grayscale-only sensor in a mirrorless or DSLR (whether original or converted), with a suitable set of RGB filters swung into place for each sequential exposure (add an IR one if you want to reinvent the wheel with ICE incorporated too). After all, there are a fair few scanners from the not-so-distant past that did this for speed of output (Kodak RF3570 for example), but they were limited in outright resolution by the sensors of the early-to-mid 90s.
And while we're on this, the sensor in a Frontier scanner is a shift & stitch. None of this is new, and the problems were solved (with compromises that we now have the ability to resolve) in the recent past.
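For what it's worth, the merge step for the sequential-filter approach described above is trivial; the hard parts are registration and per-filter exposure matching. Here is a sketch assuming three (plus an optional IR) already-aligned monochrome frames, with placeholder file names:

# Sketch of merging sequential monochrome exposures shot through R, G and B
# filters, plus an optional IR frame for ICE-style dust detection.
# File names are placeholders; registration and per-filter exposure
# compensation are assumed to have been done already.
import numpy as np
import tifffile

r  = tifffile.imread("frame_red.tif").astype(np.float64)
g  = tifffile.imread("frame_green.tif").astype(np.float64)
b  = tifffile.imread("frame_blue.tif").astype(np.float64)
ir = tifffile.imread("frame_ir.tif").astype(np.float64)

rgb = np.dstack([r, g, b])               # every pixel fully measured, no demosaicing

# Dust and scratches block infra-red, so dark IR pixels flag defects to inpaint later.
dust_mask = ir < 0.5 * np.median(ir)

tifffile.imwrite("trichrome.tif", np.clip(rgb, 0, 65535).astype(np.uint16))
np.save("dust_mask.npy", dust_mask)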
I considered it many years ago, but I don't have 2k EUR lying around to buy even the cheapest BW-sensor camera with still-acceptable resolution.
Frontiers do not do shift & stitch, they do pixel-shift. Some hi-end flatbeds do shift & stitch.
Which is less money than most of the good scanners, plus supplies, the incipient associated Apple museum, etc. would cost - and if we reckon that 24-ish mp single-shot seems plenty for most, that opens up a lot of options, at least for getting a functional proof-of-concept system...
Not everything people do gets posted here on Photrio. "Trichromatic" scans (+ IR channel) with true BW cameras, or cameras with sensors stripped of their CFA and IR filters, have been around for a while now. I guess the market is still too small to entice mass production of a device that would employ an array sensor instead of a still much cheaper line sensor.
If you're referring to versions of the Phase One Cultural Heritage system and the like, yes, they've been around for a while (not that Phase makes much of the hardware themselves - a lot of it is Cambo), and the way Phase's market has shifted almost completely towards certain industrial/institutional/governmental sectors is very telling about who's actually buying their stuff these days. I also think the bigger problem is that it's not really a resolution issue, but more a colour reproduction and system MTF one for the results that most people need/want - contrast reproduction and colour matter much more than outright pixel count.