I have the Epson V800 scanner, and the scanning setup looks to be identical to what I have, so I'm going to give colour negative scanning a go.
I like this way too.
I use Negative Lab Pro and Negmaster with success. Give them a look.
I used to use a process similar to Alex Burke's until I realized that removing the orange mask from the image by inverting the color of the film base is an entirely unnecessary step.
The exact same results can be had by skipping that step and adjusting the black and white points of the RGB curves, because the mask sits in the histogram along with the image information and gets cancelled out automatically when the black and white points are set properly.
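To make the idea concrete, here's a minimal sketch of that per-channel black/white point inversion in Python with NumPy. The function name and the percentile defaults are my own illustration (not from the post), and it assumes a linear RGB scan normalized to [0, 1]; because each channel is normalized between its own black and white points, the constant offset contributed by the orange mask cancels out.

```python
import numpy as np

def invert_negative(raw, low_pct=0.1, high_pct=99.9):
    """Invert a linear RGB negative scan by setting per-channel
    black and white points. The orange mask cancels out because
    each channel is normalized independently.

    raw: float array of shape (H, W, 3), values in [0, 1].
    low_pct/high_pct: percentiles used as black/white points
    (illustrative defaults, not from the original post).
    """
    out = np.empty_like(raw)
    for c in range(3):
        black = np.percentile(raw[..., c], low_pct)
        white = np.percentile(raw[..., c], high_pct)
        # Stretch the channel so black -> 0 and white -> 1 ...
        norm = (raw[..., c] - black) / (white - black)
        # ... then flip it to get the positive.
        out[..., c] = 1.0 - np.clip(norm, 0.0, 1.0)
    return out
```

In a real workflow you'd do the same thing interactively with the Curves/Levels tool per channel; the code just shows why no separate mask-removal step is needed.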
How do you set the black and white points when you scan? Can you scan flat and set them in your post-processing program also?
When inverting manually I find it best to change the scanning software's settings so that it makes the fewest changes possible to the information coming out of the scan head. I think that's what you're referring to when you say scanning flat. I want as many of the decisions regarding color to be made manually by me at the post processing phase as possible.
I use the raw unprocessed 48bit HDRi data from the scanner with all the settings that could alter the information turned off. I use silverfast to produce these files which means I also have to be certain to turn off negafix which will apply color tranformations even to the 48bit HDRi raw.
So if you're setting the black and white points during the scanning phase they should be at their default positions. The black point should be set as low as possible and the white point should be set as high as possible so that none of the information is clipped. Or, if possible, it would be even better to turn off the black and white point settings completely.
For consistency, what you want is to scan flat, and most importantly: scan the same way, every time. This means that if you set black & white points during scanning, they should always be set at the exact same points. I circumvent this by scanning as shown in the video, which effectively disables all white/black-point setting during the scan process. This means I get the same scan every time, regardless of what kind of film is on the platen.

This gives me a consistent starting point for adjustments, and it means that if I work out the adjustment for one roll of film, I can then duplicate the exact same adjustment to all other film of the same type that's processed in the same way and get the same color balance every time.

By contrast, if you manipulate contrast or color balance during the scanning process itself (and/or let the scanning software automatically determine something), you'll get a different result every time. The result may still be perfectly satisfactory, but it'll be difficult or even impossible to get consistent results from frame to frame and roll to roll.
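The consistency idea can be sketched in code (the function names and NumPy approach are mine, not from the post): measure the black and white points once on a reference frame, then apply those exact numbers to every frame of the same film type and processing, rather than re-estimating them per frame.

```python
import numpy as np

def measure_points(ref):
    """Per-channel black/white points from one reference frame
    (a linear float array of shape (H, W, 3) in [0, 1])."""
    return ref.min(axis=(0, 1)), ref.max(axis=(0, 1))

def apply_points(frame, black, white):
    """Fixed inversion: the same numbers for every frame on the roll,
    so all frames get an identical starting adjustment."""
    norm = (frame - black) / (white - black)
    return 1.0 - np.clip(norm, 0.0, 1.0)
```

Per-frame auto-adjustment would instead recompute `black` and `white` for each frame, which is exactly what makes frame-to-frame color drift around.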
This is something I've brought up a few times on the forum, also because the issue of "how do I get those damn colors to come out OK on my C41 scans" keeps popping up (accompanied with much pulling of hair and gnashing of teeth). When it comes to getting consistent colors (frame to frame, film to film), I find that manually doing the inversion and color balancing based on a raw/positive scan of the film works best - at least for me.
About a year ago, I wrote a blog about it, which I've linked to from time to time on the forum: https://tinker.koraks.nl/photography/flipped-doing-color-negative-inversions-manually/
Recently, @Andrew O'Neill (re?)kindled an interest in color negative film and has done a few videos on it. In one particular thread, we got to talk about the inversion/color balancing process, with an eye on comparing different types of film and different exposures on the same roll: https://www.photrio.com/forum/threads/first-time-using-120-gold-200-portra-160-ektar-100.210141 Andrew suggested I do a video on the scanning and manual inversion process - and while I'm more of a writer than a vlogger, I caved...
So here's the video version of pretty much that same piece, where I show how I use Epson Scan software (which came with my old 4990) to make a 'raw' scan, and then use GIMP to invert and color balance some color negatives.
While this approach works very well for me, I admit it's not perfect, nor is it fool-proof, and it still relies on subjective eyeballing to get everything right. I'm sure there are also many improvements possible to this workflow - I'd like to invite anyone to offer them as suggestions, or to point out the aspects that don't work particularly well in how I've shown it here. In short, feel free to discuss, comment and make your own spin on this.
Someone once said you get more data the first way, by setting the points before the scan. But I have never been able to confirm that it makes a difference doing it before the scan or in the editing program. Do you know?
Wow, great video @koraks. I don't use Epsonscan but this is great. Nicely presented too, well explained and clear - this is the youtube content I like. More of this please.
I'm not sure, but I'd expect that the person you cited is right. But we need to differentiate between having 'more data' vs 'more information'. The reason is that in principle, you're acquiring a 16-bit per pixel image stream from the scanner in both scenarios. So the total amount of data is the same in both cases. But this doesn't mean that both streams contain the same amount of actual information. In other words: it's possible that in one case, part of the pixel data is effectively padded with ones or zeros, resulting in a lower effective bit depth than 16. Or, put yet differently: while 16 bits allows for distinction of 65536 brightness levels, a padded 16-bit data stream may be able to differentiate significantly fewer. Whether this actually happens, I don't know, and it's possible that both answers are true, to an extent, at the same time.
Consider that the scanner can scan a dynamic range of let's say 4.0logD (I think Epson claims something like that). Setting the black and white points to the extremes during scanning would logically map this entire range to the 65536 levels the 16-bit analog-to-digital converter (ADC) is capable of resolving (give or take a little noise). If you instead truncate the range by shifting the white and/or black point, there are two possible ways in which this truncation translates into the digital domain. Either the same 16-bit sample is acquired, but this time the highest and/or lowest values are simply lopped off, the remaining central range is multiplied so it again occupies the full 16-bit range, and the intermediate values of this integer multiplication are simply padded with zeros (these would be the least significant bits). The resulting data would nominally be 16 bits per pixel, but in reality, it would hold less information. In this scenario, it's still possible that the scanning software doesn't actually pad the data with zeroes, but instead interpolates intermediate values more or less intelligently. This would result in an in-between situation where 16 bits of information seems to be present, but closer analysis would still demonstrate that some of the data are effectively made up.
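The digital-truncation scenario can be illustrated numerically (a toy sketch, not a claim about how Epson's firmware actually works): clip a full 16-bit ramp to a narrower window, then rescale it back to full range. The output still spans 0..65535, but the number of distinct levels shrinks to the width of the window - the rest of the nominal bit depth carries no information.

```python
import numpy as np

# Full 16-bit input range, then an example black/white window.
ramp = np.arange(0, 65536, dtype=np.int64)
black, white = 8192, 24576

# "Lop off" everything outside the window, then stretch the
# remainder back over the full 16-bit range with integer math.
clipped = np.clip(ramp, black, white)
rescaled = (clipped - black) * 65535 // (white - black)

print(len(np.unique(ramp)))      # 65536 distinct input levels
print(len(np.unique(rescaled)))  # only white - black + 1 = 16385 remain
```

So the data is nominally 16 bits per pixel either way, but the rescaled stream only distinguishes about 14 bits' worth of levels.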
Alternatively, when you set the software to scan only a part of the full dynamic range, the scanner could use analog gain (there are a few ways of accomplishing this) to actually amplify the signal before it's fed into the ADC. An analog offset might also be included so that both the adjusted black point as well as the white point are taken into consideration already in analog signal conditioning. This would theoretically result in 16 'real' bits of image information. The penalty is obviously more noise, which would in fact reduce the amount of real information a little - but it may (probably will) still be better (i.e. more effective information) than in the digital-padding scenario above.
I haven't looked into it deeply, but I suspect both mechanisms are used at the same time - both analog and digital gain (probably with some 'smart' padding) are used if you adjust the white and black points. This means that you gain some real information compared to the 'flat scan' (of the entire scannable density range) and then a dramatic curve adjustment. The penalty is, obviously, that any such black & white point adjustment will have to account for the actual film densities you're trying to scan, and this automatically means that you'll sacrifice some degree of consistency. It's still possible to strike a compromise and set the black & white points in such a way that they'll plausibly cover the density range of any media you're trying to scan (let's say, super-high contrast maskless Phoenix 200 film all the way to rather soft ECN-developed Kodak Vision3 film). I've not explored this option, because I find that the actual image data resulting from a straight/flat scan as I demonstrate in the video is amply sufficient to get good positives from most of the film I've handled so far. But there may be room for improvement, based on your argument.
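A back-of-the-envelope calculation suggests why the flat scan still leaves plenty of usable data. Assuming a linear ADC that samples transmittance (my simplification; the density figures below are made-up examples, not measured values), a negative spanning a couple of logD units still lands on a large fraction of the 16-bit code space:

```python
def levels_for_density_range(d_min, d_max, bits=16):
    """How many ADC code values a given density range occupies in a
    flat scan, assuming transmittance is sampled linearly."""
    full = 2 ** bits - 1
    t_hi = 10 ** -d_min   # brightest (thinnest) part of the negative
    t_lo = 10 ** -d_max   # densest shadow... er, highlight on a negative
    return round((t_hi - t_lo) * full)

# e.g. a hypothetical negative with Dmin 0.2 and Dmax 2.2
print(levels_for_density_range(0.2, 2.2))
```

The caveat, of course, is that this range is distributed logarithmically: the dense end of the film gets squeezed into comparatively few code values, which is where analog gain adjustment could genuinely help.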
Thanks, that means a lot to me; much appreciated! I'll keep it in mind!
That's possible @Alan Edward Klein ; I've never tested it specifically. It wouldn't surprise me if the scan really is the same and that only software like VueScan really manages to influence hardware gain.
Raising the gain will distort, just like audio speakers driven at too high a volume. Why would Epson set the gain below its best distortion-free maximum? That's one of their advertising elements, giving the dMax value, which provides the best detail and penetration through dense areas of the film.
In any case, wasn't it the speed of the scanner that affected the revised dMax? Slower would allow more data in shadow areas, or so the claim went.
Why not use a color head to illuminate the negative and digitize with a camera? Instead of correcting curves in software you can use the color head to correct the image. Isn't this the way a Fuji scanner works?
I'm not sure that Epson's software changes the hardware gain during scanning. I'm pretty certain that SilverFast is not doing so on my system. But I believe that VueScan claims to be able to do so, or at least offers a software option that appears it could be doing something like that.
All of this of course could be dependent on the scanner you're using. Maybe some scanners are able to alter the hardware gain (or even exposure) and some are not.