Me too. Thanks!
That is very gracious of you; thank you. However, as I haven't purchased VueScan (yet), it looks like I can't save anything as DNG; only JPEG and TIFF are available. Are they any use?

But in order to debug your issue, just do the scan with "Media" set to "Image", i.e. the former "negative" preview.
With the Proscan 10T @ 5000 dpi, this 35mm color frame gives a ~180 MB raw file.
Load the file to some free hosting site and give us the link.
It so happened that the first example from my fresh Fuji Pro 400H roll turned out very well (OK, after I changed to a RAW scan). Unfortunately, the second one didn't.
I just noticed Adobe has a special offer: a 7-day free trial of Lightroom, which would let a person take advantage of the NLP free-trial offer (of 12 images). But the Lightroom offer is only free if you remember to cancel it within three weeks. That's how I understood it, anyway.
Yes, good point. I redid all the scans in RAW and redid the conversions in VueScan, too, paying attention to what you wrote about the settings. And it looks like I won't take advantage of your offer after all. Only one of the positive images looked good, but the rest weren't terrible, either. (There were only six negatives from my homemade 6x12 camera.)

TIFF will do. In the last tab, "Output", don't use "Raw DNG format"; use "Raw file" if available and disable "TIFF file". If "Raw file" is also deactivated, then the first choice, "TIFF", will do.
The idea is to see if the negative looks ok or shows over/under development for instance.
The point is that if something was done incorrectly when scanning or developing the negative, then no inversion software will give good results.
It's not absolutely necessary that I stay on Linux; it's just something I'm used to and like, and it's usually cheaper, too. This film scanning stuff is the first application that I find is not quite easily done, open source, on a Linux machine. Not that it seems to be quite easily done elsewhere. All the plugin business (Negative Lab Pro, Negmaster, Grain2Pixel etc.) seriously confuses me.

I missed that... if you stay on Linux you CAN'T install a recent version of Lightroom. The older lifetime-license Lightroom 6 does install and run in Wine, but then you need to have a license, which Adobe no longer sells.
You're onto something here. The author of Negative Lab Pro wrote about this on a couple of occasions, IIRC. I don't think there's one correct way to deal with it: Adrian's approach has been to treat it as input error and correct for it, while others look at it as exaggerated "film character".
These days I enjoy using two color-inversion tools at the same time: NLP and Negmaster. They clearly take different approaches to inverting (Negmaster, for example, is completely immune to film rebate borders), and for more important images I usually want to try both tools to generate the starting point for further tweaking.
Not knowing what software you are using or how you are white balancing, this could be a linear color space thing. A lot of photo software is just not set up to operate properly in linear space. It really creates a lot of problems for this type of manipulation. RAW developers do, which is why they are effective at negating the film base. You might be able to trick PS into working correctly, but I'm not sure.
Or I could be totally wrong
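To make the linear-space point concrete, here is a minimal sketch of why it matters for negating the film base. All the numbers are hypothetical, invented for illustration; the idea is simply that dividing out the orange mask only behaves as a neutral scaling when the values are linear:

```python
import numpy as np

# Hypothetical linear RGB of the unexposed film rebate (the orange base)
base = np.array([0.85, 0.55, 0.30])
# Hypothetical linear RGB of one pixel of the negative
pixel = np.array([0.40, 0.30, 0.20])

# Divide out the base: the rebate itself maps to neutral (1, 1, 1)
neutralized = pixel / base

# A simple reciprocal inversion to get a positive value
positive = 1.0 / neutralized

# Doing the same division on gamma-encoded (e.g. sRGB) values instead
# would scale each channel non-uniformly and produce the kind of color
# casts discussed above.
```

This is only the base-removal step, of course; a real converter layers curves and white balance on top of it.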
I'm confused here, probably because I know Lightroom pretty well, but not ACR or Photoshop (where I am a true noob). If you do the actions in Photoshop, isn't that where the inversion takes place? So why is ACR involved?
Do you have a link for the source for any of these actions?
It's pretty clear in this example. Whatever point you white balance on throws the rest of the image off in ways that no linear adjustment can correct. You need a curve to correct this issue. You could derive that curve by sampling each step on the wedge and getting a set of points which would bring the entire image into alignment, which is I believe the process that Adrian Bacon has described.
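The sampling procedure described above can be sketched in a few lines. This is not Adrian Bacon's actual code, just an illustration under assumed numbers: read each step of the wedge in all three channels, pick one channel as the reference, and build a per-channel lookup curve that maps the other channels onto it:

```python
import numpy as np

# Hypothetical linear readings of a scanned step wedge.
# Each row is one step; columns are R, G, B. Values are illustrative only.
wedge = np.array([
    [0.02, 0.015, 0.010],
    [0.06, 0.050, 0.035],
    [0.15, 0.130, 0.095],
    [0.30, 0.270, 0.210],
    [0.50, 0.470, 0.390],
])

ref = wedge[:, 1]  # use green as the reference channel

def make_channel_curve(samples):
    """Piecewise-linear curve mapping one channel's wedge values onto the reference."""
    return lambda x: np.interp(x, samples, ref)

correct_r = make_channel_curve(wedge[:, 0])
correct_b = make_channel_curve(wedge[:, 2])

# Applying each curve to its own wedge samples reproduces the reference,
# i.e. all three channels now agree on every step of the wedge.
aligned = np.column_stack([correct_r(wedge[:, 0]), ref, correct_b(wedge[:, 2])])
```

The same `correct_r`/`correct_b` lookups can then be applied to a whole image, which is exactly the nonlinear per-channel correction a single white-balance point cannot provide.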
This has been driving me a little nuts, because as you pointed out, if the response curves are parallel on the film, they should be parallel when you scan them as well, but they clearly aren't. Additionally, using RGB lights to scan fixes this problem.
I can't say I agree with that. Look at the curves for the Portra 400 data sheet. They're status M, and very clearly do not have the same gamma, which means they are not linear, even with status M. Just collapse them down on top of each other and they diverge quite a lot. If they were really linear, they wouldn't do that.
Interestingly enough, I came across this old thread that discussed this very issue: https://www.photrio.com/forum/threa...teristic-curve-in-negative-films.155654/print. It's somewhat confusing, because the blue curve definitely has a higher-contrast gamma than red and green in Portra, the latter two being relatively parallel for most of the exposure range; but since there's no magic in RA-4 paper, the three channels must be parallel when projected by an enlarger.

I always chalked this up to the fact that Status M density is different from the printing density of RA-4 paper, but Photo Engineer seems to be saying that the steeper slope of the blue layer is actually a way of accounting for the unwanted sensitivity to blue light in the red and green layers, similar to the way the orange mask works. So the aggregate effect of white light passing through all of the layers of the film results in parallel color channels. I'm guessing that, on top of that, those lines are also only going to be parallel in a specific range of wavelengths that avoids the non-linear portions of the dye exposure curves, as the author of the linked paper found.
Either way, your method of sampling the film at a range of exposures and generating a best-fit curve is going to be effective at correcting for any channel gamma issues.
I'm going to get my hands on a copy of The Principles of Color Photography and try to absorb it. The details of how all of this comes together is really fascinating.
The only thing that matters is what it looks like to the scanner with the light source you're using. There's not much to be gained by trying to model what RA-4 paper does, because your scanner is not RA-4 paper.
The control strips you posted earlier look the way they do because you didn't linearize the three channels before attempting to white balance. You can calculate the gamma of each channel using the HD and LD patches on the control strip. Apply the calculated gamma for each respective channel, then white balance using the HD patch as your mid-point. Believe it or not, the rest of the patches then line up, your film base plus fog goes black, and the patches from max density to min density go neutral gray. It really is that simple.
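The HD/LD procedure can be sketched numerically. This is my reading of the description above, with invented patch values, not measurements from a real control strip: derive each channel's density span from the two patches, exponentiate to equalize the spans (normalizing to green here), then scale so the HD patch goes neutral:

```python
import numpy as np

# Hypothetical linear RGB readings (0..1) of two control-strip patches.
# HD = high density, LD = low density; the numbers are illustrative only.
hd = np.array([0.020, 0.015, 0.010])
ld = np.array([0.500, 0.420, 0.300])

# Per-channel density span between the two patches
density_range = np.log10(ld) - np.log10(hd)

# Correction gamma that equalizes the spans, using green as the reference
gamma = density_range[1] / density_range

def linearize(rgb):
    """Raise each channel to its correction gamma."""
    return rgb ** gamma

# White balance on the linearized HD patch: scale so it becomes neutral
hd_lin = linearize(hd)
wb = hd_lin[1] / hd_lin

balanced_hd = linearize(hd) * wb  # neutral by construction
balanced_ld = linearize(ld) * wb  # also neutral, because the spans now match
```

Because the gamma step makes every channel's HD-to-LD span identical, the single scaling that neutralizes HD automatically neutralizes LD (and every patch between them) as well, which is the "it really is that simple" result.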
The strategy the author of the paper is pursuing, and one that I've also attempted by using RGB LEDs, is to match the spectral response of RA-4 paper as closely as possible, which should eliminate the need to linearize the channels. It actually seems to work pretty well, from the scans I've done with RGB lights, but I have yet to try it with a test strip, which would be an interesting demonstration.
NOW we’re talking!

I don't know if Silkypix has been mentioned, but I could not find it when I searched this thread.
I have not yet tested it, but it has a negative-film inversion tool that appears to work pretty well.
Silkypix has been around for ages, although it is very focused on the Asian market.
It's developed by the same ISL that did the fork/spin-off of Nikon Capture/View/Studio NX after NIK was picked up by Google a few years ago.
https://silkypix.isl.co.jp/en/how-to/function/film-photos-to-digital-data/
The screenshots from that Silkypix program aren't encouraging (quite obvious casts on all of the inverted images).
Good luck! I spent a long time today reactivating my old copy of Adobe Photoshop CS6 with the hope that I could use one of the alternatives, and cancel my $10 monthly Adobe fee. Unfortunately it looks like none of them work with CS6 so the monthly wallet drain will continue...