An alternative to Negative Lab Pro and Lr has to exist (C-41 reversal and orange mask removal)?!

grat

Member
Joined
May 8, 2020
Messages
2,044
Location
Gainesville, FL
Format
Multi Format
I find darktable with the (new) module "negadoctor" to be fairly skilled at doing negative inversions, with considerable ability to tweak the results to your liking. Since it's part of darktable, those settings can be exported, and applied to other negatives in your library for a faster workflow.
 
Joined
Mar 3, 2011
Messages
1,530
Location
Maine!
Format
Medium Format

So convert the PSDs to TIFFs or another universally readable format. Unless they're vastly complex photo montages that you wish to edit continuously until the end of time, this is an easy solution. Yes, you can license a copy of Affinity Photo (which is really just a perpetual rental if you read the TOS). But we're talking about Adobe software here, not Affinity. Since neither piece of software is expensive, you aren't locked into using PSDs (LR doesn't use them at all), and only LRCC, not Classic or PS, relies on the cloud, I really don't see what the controversy is. Did you know that my Ilford paper EXPIRES even though I paid money for it and OWN it?
 

removedacct2

Member
Joined
May 26, 2018
Messages
366

Yes, but within the limits of what can be done, I found ColorPerfect more capable of getting to a plausible positive of the original scene.


I know, I have wanted to do a calibration for a long time but was lazy. In fact, I ordered a target earlier this week.

This brings us to the way ColorPerfect works. As I mentioned to Old Gregg in the comment above, the key initial step with CP is to pick an emulsion. A long list is built into the plugin. BUT some of those emulsions are extinct and newer ones are missing. I overlooked this when I tried CP last year, and I gave up after the trial because there were some weird colour casts with Lomography films that I couldn't get rid of. I had negatives that were un-fixable. Only recently, by reading through more of the documentation, did I find that the characteristics of an undocumented film have to be approximated as closely as possible. The CP interface makes it possible to get relatively close by modifying the colorimetric parameters of the closest emulsion chosen from the list. Once I understood this, CP suddenly became excellent and very powerful. And that's also why I have ordered a target, to try some basic calibration. By the way, my scanner is calibrated against an IT8 target.
That said, I don't expect miracles, and development in small hand tanks introduces more variation, but it will be good to have references. Film emulsions change over time, so this will have to be redone from time to time as well.

Regarding this aspect of film, I believe the flaw of NLP is that it makes a general a priori assumption about what a "nice" picture is supposed to be, and applies sets of more or less averaged, more or less weighted calculations to the data of the negative. It then provides a menu of tweaks meant to please different tastes. So unless the negative is really, really wonky, you get a "nice picture" regardless. But it may not be what you saw when you took the shot.


I did something similar, but only on a couple of frames of a roll, with Gimp... tedious... So this is really a good idea of yours.

I found out about your Grain2Pixel site before Christmas, but it requires Photoshop 19 at least. I am on Linux and BSD, and was using NLP on LR6 running under Wine. I don't use LR at all otherwise; it was just for running the NLP plugin... For ColorPerfect I ran PS CS6 under Wine too, buggy, but for the plugin it's OK. I have bought CP now because I discovered PhotoLine, a small German raster and vector editor, a ~30 MB download, which runs snappily and flawlessly under Wine.
In order to play with your G2P plugin I would have to set up Windows 10 plus Photoshop >= 19 inside VirtualBox.
And there's the Adobe subscription. I don't need PS; I use Gimp, ImageMagick, dcraw, RawTherapee. I'm not going to pay a subscription just for a plugin on top of the infernal Adobe bloat.

Your Roll Analysis feature sounds great; otherwise, if it were only a one-button all-around solution, it would fall under the flaws you mention about emulsion characteristics.

Make a standalone version, or a plugin that can run on PhotoLine, and I'll give you 150€ (if it performs better than CP, which I use now).
 

grat

Member
Joined
May 8, 2020
Messages
2,044
Location
Gainesville, FL
Format
Multi Format

So does your food. Your car wears out too, but what all that has to do with the concept of software rental leaving you with potentially inaccessible work, I have no idea.

To make your analogy more legitimate: your paper is worthless without developer. You have to keep buying new developer every month, or your perfectly usable Ilford paper is worthless. And then the Ilford factory catches fire, or the chemicals they use get outlawed, and suddenly you're left with a perfectly good stack of paper that hasn't expired but that you can't use. To take the analogy to a slightly sillier conclusion, your negatives only work with Ilford paper, so you can't print them on any other paper.

It's still a weak analogy.

TIFF unfortunately lacks things like history and layers (well, it can store multiple "layers", but not in the way Photoshop and other tools do). When I save an .afphoto file with Affinity, it contains a lossless copy of my work, with adjustment layers, mask layers, possibly one or more snapshots so I can test multiple approaches, and custom-defined export "slices" so I can export multiple versions of the file at once.

Do I *need* all that? Not at all. But it's nice to have, and as long as I've got a computer that can run the last version of Affinity I downloaded, I can open and edit those files without having to recreate all my work. Serif could go out of business tomorrow, but my creative work is safe. And if you think your rented version of Adobe's software (including Classic) doesn't rely on the cloud, try running 31 days without an internet connection. You absolutely must check out a license once every 30 days (monthly plans) or 99 days (annual plans) to continue using the software.

Or change your clock forward/backwards an hour while disconnected from the internet.

https://muenchworkshops.com/blog/using-adobe-lightroom-classic-and-photoshop-cc-offline
 

Huss

Member
Joined
Feb 11, 2016
Messages
9,058
Location
Hermosa Beach, CA
Format
Multi Format

I don't see a golden hour in your pics. I see a purple hour. Basically, there is purple in all the images you favour. All the NLP images look much better, and you can adjust the exposure and luminosity to bring them down if you want to show more of that golden light. For some reason you are fine with having to adjust your purple pics, but have a problem with having to adjust NLP pics.

NLP makes it very, very easy to get the result you want, either by using the presets, or by using the presets and then making slight adjustments. It seems that you choose to show NLP in the worst possible way to prove a point. Kind of like claiming a lens is unsharp while ignoring the fact that it hasn't been focused correctly.

Anyway, use whatever you want. Be happy with what makes you happy.
 

Adrian Bacon

Subscriber
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format

you must have a crazy old version of Photoshop. I never save things in psd format. It’s always in TIFF, and layers, layer masks, and all that stuff works fine with tiffs. LR even reads and correctly renders the tiff file.
 

grat

Member
Joined
May 8, 2020
Messages
2,044
Location
Gainesville, FL
Format
Multi Format
you must have a crazy old version of Photoshop. I never save things in psd format. It’s always in TIFF, and layers, layer masks, and all that stuff works fine with tiffs. LR even reads and correctly renders the tiff file.

Interesting. According to Adobe's documentation, while TIFF can store XMP metadata (via Photoshop extensions to TIFF), there's no mention of other image resources, masks, layers, etc., like there is in their description of the PSD/PSB file format. XMP is indeed newer than my version of PS, so it's probable that at least some of the extra data is stored there (and used by Lightroom as well).

See https://www.adobe.com/devnet-apps/photoshop/fileformatashtml/#50577409_pgfId-1030196

If so, that's a very nice step by Adobe away from proprietary lock-in. But I'm still not interested in dealing with a license that may or may not be mine, depending on the whim of someone within Adobe.
 

Adrian Bacon

Subscriber
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
If so, that's a very nice step by Adobe away from proprietary lock-in. But I'm still not interested in dealing with a license that may or may not be mine, depending on the whim of someone within Adobe.

Well, you're the only one who can make that determination for yourself. There are plenty of alternatives, though they aren't the de facto standard. For personal use that's fine, but for somebody like me who distributes files to lots of people, it doesn't really work.
 

Adrian Bacon

Subscriber
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format

Yep. The PSD/PSB format dates from before TIFF really had any standardization in terms of usage. Adobe has since realized that having a proprietary container format doesn't really bring any benefits. They still support PSD files, but only for legacy compatibility. All their new stuff is centered more around TIFF. Even DNG files are technically TIFF files and can be read with any TIFF library.
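
For what it's worth, you can see this with a generic TIFF reader. A minimal sketch, assuming the third-party Python package tifffile is installed and that "negative.dng" is a DNG you have on disk:

```python
# A DNG is structurally a TIFF, so a generic TIFF reader can open it and
# list its tags. Assumes the third-party "tifffile" package is installed
# and that "negative.dng" exists locally.
import tifffile

with tifffile.TiffFile("negative.dng") as tif:
    page = tif.pages[0]                 # first IFD (image file directory)
    print(page.shape, page.dtype)       # raw mosaic dimensions and bit depth
    for tag in page.tags.values():      # standard TIFF tags plus DNG-specific ones
        print(tag.name, tag.value)
```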
 

pwadoc

Member
Joined
Mar 12, 2019
Messages
98
Location
Brooklyn
Format
Multi Format

When I'm talking about color spaces, what I'm referring to is the matrix math that allows us to convert a color on one device (or in one context) into a color on another device. We can't sensibly talk about color in a digital context without using color spaces. If you have a color that is defined by a set of numbers, i.e. R135, G234, B28, those numbers are coordinates in a color space, and every digital system which understands color is going to use a coordinate system to work with it. Gamut is a useful concept for telling us whether the colors in one space exist within another, but it only comes into play if the space we're converting from is larger than the space we are converting to.

All cameras, along with any other optical hardware that records color, do in fact have a native color space. In fact, if you look at the metadata of a RAW file, you will find a 3x3 matrix which provides the coordinates that allow us to transform the camera color space into a known color space. This is because the red, green and blue colors your camera is sensitive to are defined by the specific wavelengths of light that the filters over each photosite let through. So let's say, for example, that your camera's green sensors pass the most light at 535nm. How does the RAW converter know what color that is? In the case of my camera, which has a 16-bit ADC but only uses 14 bits of that range, all it sees is an integer between 0 and 16,383. Your monitor could just guess at a green, but what if the green pixels in your monitor emit at 545nm?

The key to matching those colors is knowing what coordinates they occupy in a defined color space. For instance, in the CIE 1931 color space, light emitted at 535nm has a specific XYZ coordinate that you can calculate. When you read the RAW file, the matrix stored in the file tells the reader, "this is how you determine the coordinate of the color I use as my green primary". So when you view a RAW file, the converter uses the matrix stored in the file to convert from the camera's native color space to a known color space. It takes the integer value recorded by the photosite, which is just a brightness value as registered by the ADC (e.g. 15,346), refers to the Bayer grid reference in the camera metadata to determine whether it is a red, green or blue photosite, and then uses the color matrix in the metadata to convert that number to an RGB coordinate in a known color space. Once the color data is in a known color space, it can be converted from that color space to the one your RAW viewer is configured to use, such as sRGB, AdobeRGB, etc. That color information is then further transformed into your monitor's color space, your printer's color space, and so on, using the color profiles you have chosen for each conversion.
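
To make that chain concrete, here is a minimal numpy sketch. The camera-to-XYZ matrix below is made up for illustration (a real camera embeds its own); the XYZ-to-linear-sRGB matrix is the standard D65 one:

```python
# Minimal sketch of the RAW color pipeline described above.
# CAM_TO_XYZ is a made-up example; a real camera embeds its own matrix.
import numpy as np

CAM_TO_XYZ = np.array([          # hypothetical camera RGB -> CIE XYZ
    [0.41, 0.36, 0.18],
    [0.21, 0.72, 0.07],
    [0.02, 0.12, 0.95],
])
XYZ_TO_SRGB = np.array([         # standard CIE XYZ -> linear sRGB (D65)
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

# One demosaiced camera-RGB pixel, normalized to 0..1 by the ADC maximum
# (e.g. 15346 / 16383 for the green sample mentioned above).
cam_rgb = np.array([0.31, 0.94, 0.12])

xyz = CAM_TO_XYZ @ cam_rgb                       # camera space -> device-independent XYZ
srgb_linear = XYZ_TO_SRGB @ xyz                  # XYZ -> the viewer's working space
srgb = np.clip(srgb_linear, 0, 1) ** (1 / 2.2)   # rough display gamma for preview
print(srgb)
```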

The point that I make in my article, which is a bit unintuitive, is that you can actually think of the dyes in a color negative as defining the primaries of a color space, and view the conversion to your camera as a color space transformation. The CIE 1931 coordinate of the peak wavelength of light that each dye layer passes exists within your camera's gamut (because modern digital cameras' gamuts exceed that of negative film), which means that when you scan film by any means, you are performing a color space conversion. The same goes for printing a negative on paper: you can represent that transformation mathematically, via color spaces, if you want to. The problem you run into, however, is that inverting a negative relies on the relationship between the CMY layers in the negative: specifically, the fact that the log exposure curves of the light that passes through the dye layers are parallel. However, if you take those parallel curves and run the matrix math to convert them into another color space, they are no longer parallel. What this means is that if you white balance at a single point, other areas of the image will not be white balanced.

This is a super useful reference for understanding how a RAW file works, and it goes over the color space transformation math in much more detail: https://www.odelama.com/photo/Developing-a-RAW-Photo-by-hand/
 

pwadoc

Member
Joined
Mar 12, 2019
Messages
98
Location
Brooklyn
Format
Multi Format

I think inversion is a bit of a red herring, since it's not the actual inversion I'm concerned with, but rather the negation of the orange mask (and eliminating the cross-channel leakage in the magenta and cyan layers), which can happen pre-inversion.

You are right that the log exposure curves are not absolutely parallel, but the reason the orange mask can be cancelled out with a gain adjustment is that the curves are mostly parallel over most of their exposure range. Color negatives can often display a magenta or cyan tinge when extremely under- or over-exposed, because you get into the areas of the exposure curve where they're not even close to parallel. But for the sake of making the math simpler, we can use an imaginary brand of film that has perfectly parallel exposure curves, so that they look like the ones in my article. In that case, to get them to line up, all you need to do is adjust the gain of two of the channels.
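
Here's a toy numerical illustration of that claim, assuming an idealized negative whose channels differ only by constant density offsets (i.e. parallel log exposure curves); the slope and mask values are invented:

```python
# Toy illustration: with parallel log-exposure curves the mask is just a
# constant per-channel density offset, so one gain per channel (white
# balancing on a single patch) cancels it at every exposure.
import numpy as np

log_exposure = np.linspace(-2.0, 1.0, 7)        # scene log exposure (arbitrary units)
slope = 0.6                                     # shared straight-line slope
mask_density = np.array([0.25, 0.60, 0.90])     # invented extra R/G/B density from the mask

density = slope * log_exposure[:, None] + mask_density
transmission = 10.0 ** (-density)               # what the scanner actually sees

# White balance on ONE patch (the middle one): per-channel gains.
gains = transmission[3, 0] / transmission[3, :]
balanced = transmission * gains

# After the gain adjustment the three channels agree at EVERY exposure,
# not just at the patch we balanced on.
print(np.allclose(balanced[:, 0:1], balanced))  # True
```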



Here's the trick: in order to convert colors from one space to another, you do matrix multiplication. You take the XYZ coordinates in the original space and multiply by the transform matrix to get the coordinates in the destination space. What ends up happening is that the log exposure curves in the destination space get distorted, because you've taken points on a 2D graph, mapped them into a 3D space, performed a geometric transform of that 3D space, and THEN mapped them back into 2D. When that happens, the log exposure curves are no longer parallel. In order to make them parallel again, you need to alter the curves to match, which in adjustment terms means applying a gamma curve to each. I actually went through this at one point: I converted a simplified exposure curve to XYZ coordinates in an imaginary color space, performed the matrix math to sRGB, and then converted back to a log exposure curve, and the curves were no longer even remotely parallel. Which means that to get them to overlap properly, I would need to measure a grey at every point along the exposure curve and bring the curve back into alignment. This is exactly what it looks like NLP does when it inverts and corrects images.
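
A small numerical check of that argument, continuing the toy parallel-curve negative from above and pushing it through a made-up 3x3 color matrix:

```python
# Push the parallel-curve toy negative through a 3x3 color matrix (made
# up here) and the log curves stop being parallel, so a single gain per
# channel no longer lines them up.
import numpy as np

log_exposure = np.linspace(-2.0, 1.0, 7)
slope = 0.6
mask_density = np.array([0.25, 0.60, 0.90])
transmission = 10.0 ** (-(slope * log_exposure[:, None] + mask_density))

M = np.array([                   # hypothetical scanner/camera color matrix
    [0.80, 0.15, 0.05],
    [0.10, 0.75, 0.15],
    [0.05, 0.20, 0.75],
])
mixed = transmission @ M.T       # each output channel is a blend of the dye channels

log_in  = np.log10(transmission)
log_out = np.log10(mixed)

# Parallel curves have a constant channel-to-channel offset at every exposure.
print(np.ptp(log_in[:, 1] - log_in[:, 0]))    # ~0.0  (still parallel)
print(np.ptp(log_out[:, 1] - log_out[:, 0]))  # > 0   (no longer parallel)
```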



This is what happens when you render a RAW file in Lightroom or Photoshop. You're performing matrix math to get the data into a known color space, and as a result you've changed the relationship that would have let you correct for the color mask with a gain adjustment. I believe this is also what happens when you capture the light passing through the negative on an RGB sensor, compounded by the fact that the sensor's response curve is probably not the same as RA-4 paper's, meaning that again the relationship between the colors is altered in a way that can't be corrected by a simple white balance adjustment.

Anyway, I could be totally off base here, and I'd love to hear whatever input anyone has, but this is what I've managed to work out. Additionally, when I do compose images from three separate color exposures, I can make the inversion work with a simple gain adjustment on the green and blue channels: no need to adjust the shadows or highlights, and no color shifts or weirdness.
 

Adrian Bacon

Subscriber
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format

They diverge if you didn't linearize them first. @pwadoc isn't completely wrong, but he's missing a couple of steps and not using the term 'log' correctly.

Let's start with log: what you mean is gamma. Log is a way to measure contrast; to say something is 'log' inherently means that its gamma roughly follows a specific gamma value. What that gamma is, is something else. It could be 'log gamma' (which does have a definition), but it can also be pretty much any other gamma value. At any rate, film emulsion contrast does not have log gamma.

In color negative film, each color channel does not have the same gamma, and they most certainly are not 'linear gamma' (otherwise known as gamma 1.0). If you want to do color transforms, you need to get to linear gamma first, otherwise things go haywire and your colors get all weird. You can get to linear gamma by characterizing the gamma curve of each color channel, as @pwadoc describes, or you could simply measure the contrast along the straight-line portion of each curve and use that to make a curve that straightens it back out to gamma 1.0 from whatever it was before. Once you've done that, you can white balance it and it will, by and large, behave just like actual raw samples from a digital camera.
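
A minimal sketch of that second approach (straight-line slope measurement); the per-channel gammas and the grey reference below are made up, and in practice they would come from measuring a step wedge or grey ramp shot on the actual emulsion:

```python
# Per-channel linearization by measured straight-line contrast, followed
# by a simple white balance. The gamma values are invented examples.
import numpy as np

channel_gamma = np.array([0.55, 0.62, 0.70])   # hypothetical R/G/B contrast of the negative

def linearize(scan_rgb: np.ndarray) -> np.ndarray:
    """Undo each channel's measured contrast, bringing it back to gamma 1.0.
    scan_rgb: values in 0..1, last axis = RGB."""
    return np.clip(scan_rgb, 1e-6, 1.0) ** (1.0 / channel_gamma)

def white_balance(linear_rgb: np.ndarray, grey_patch: np.ndarray) -> np.ndarray:
    """Scale each channel so a known grey patch comes out neutral."""
    gains = grey_patch.mean() / grey_patch
    return linear_rgb * gains

pixels = np.array([[0.42, 0.30, 0.18],          # a few scanned pixels
                   [0.65, 0.52, 0.33]])
grey = np.array([0.50, 0.38, 0.22])             # measured grey/base reference

print(white_balance(linearize(pixels), linearize(grey)))
```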

In terms of colorspaces, gamuts, etc.... meh... digital cameras have a native color response that can be mapped to actual CIE XYZ values, and the color matrix used to do that isn't any different from any other colorspace-to-XYZ matrix. The big difference is that the camera's matrix is specific to that sensor. You can think of it as a color space if you really want to; once you know what the matrix is, it isn't handled any differently from any other colorspace, in that you apply the RGB-to-XYZ matrix, which gets you into the CIE XYZ connection space, then the XYZ-to-RGB matrix to get into your destination color space (like ProPhoto, sRGB, or ACES). It's kind of a case of how you pronounce potato or tomato.

@pwadoc has an explanation that isn't particularly clear, and is a little long winded, but he isn't completely wrong either. At least he's trying to understand it.
 

penguin_man

Member
Joined
Jan 27, 2018
Messages
2
Location
Indonesia
Format
Analog
You can't just take a picture of a color negative with a digital camera and expect it to invert correctly.

Color negative film and RA-4 color paper are one cohesive imaging chain; each needs the other to function properly. You can't just take one away and expect it to work.

1. Color negative film transforms the information of the scene you're photographing into [Numbers] in the form of dyes. Some dyes that are of low accuracy, a.k.a. impure, turn orange, creating the [Orange Mask].
2. RA-4 paper sees those [Numbers], ignores the [Orange Mask], and converts them into color pigments that the eye can see. In layman's terms: in every RA-4 paper there is a secret and proprietary LUT that only Kodak knows.

You aren't supposed to look at a negative with the naked eye, nor with a digital camera. What you see in a negative with the naked eye is only a pseudo-image. There's so much color contamination going on that, when the negative is taken as-is, the gamma curves of R, G and B get all wonky, leading to the phenomenon @pwadoc describes.

There are workarounds for this: algorithms that sample the scene's colors to correct each channel's gamma curve [as is the case with NLP and Grain2Pixel], or per-film-stock calibration using color charts [as is the case with @Adrian Bacon's approach].
 

Adrian Bacon

Subscriber
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
You have written code to deal with RAW files, isn't RAW data linear already?

The raw sensor data has a linear response to light, yes. When you take a picture of a negative with that sensor, the sensor data is linear. What is not linear is what you took a picture of: the negative. The pictorial information captured by the negative is not linear at digitization. If you want your colors to be reasonably correct (no matter what light source you used to digitize with), you have to linearize that pictorial information, or get it pretty close to linear. Failure to do so will result in an image that is not correct. The whole 'give me a raw TIFF of the scan' practice exists solely so you don't have to remove the gamma the colorspace applied before trying to remove the gamma the negative has. It's purely to make life easier and reduce complexity.

Isn't that by design? That's what defines the "look" of an emulsion, and slight contrast variations between channels should not be normalized.

Yes and no. The look is designed to be a slight variation on a standardized linearization defined by RA-4 paper; it's relative to that standard. This is why I said reasonably linear. It can match the standard exactly, or it can deviate very slightly to give a certain look, but at the end of the day you still have to apply the linearization of the pictorial information. In my own scanning code I used to completely remove all differences between emulsions. It was a lot of work and gave a very neutral look no matter what you shot. Since then, I have created a "digital paper" that takes the average contrast of all the emulsions I've done that exercise for and uses that average for its linearization, and that is used for all C-41. Now the only thing I work out is the white balance for a WhiBal card lit with full-spectrum light at the same color temperature the film is designed for. As long as my development process is in control (and you bet I use control strips) and I don't make any big changes to the rest of the scanning chain, my white balance and color are pretty close right out of the box.

* Colorspace transformations have nothing to do with negative digitization and color inversion. These two concepts are orthogonal.

You are incorrect. Colorspace transformations have everything to do with it. This is how you know what color your RGB values are. If you digitize a negative, getting it to a positive image is a colorspace transformation. It's not necessarily one that follows matrix math, but it is one nonetheless.

* Having 3 captures with 3 light sources vs a single one with a high-quality light source and a color filter array can produce the same color.

This is up for debate. It may produce similar color, but using a monochrome sensor with narrow-wavelength LEDs centered on the frequencies that RA-4 paper is most sensitive to should, in theory, produce cleaner color. The problem with CFAs is that each color filter is not perfect and bleeds small amounts of all the other wavelengths through. You can see this by shining a narrow-band green LED into a camera, taking a picture of it, and looking at the raw sensor values where the green LED light hits the red and blue sensels. They are not 0, and in a perfect world they would be. You can eliminate that by using a monochrome sensor and making three separate exposures. They wouldn't contain bleed from other colors because, quite simply, the other colors just aren't there: the LED isn't emitting them.
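
A toy model of that crosstalk; the bleed percentages below are invented purely for illustration:

```python
# CFA crosstalk vs. three monochrome exposures under narrow-band LEDs.
# The off-diagonal bleed values are invented for illustration only.
import numpy as np

# Rows: R/G/B sensels; columns: response to a pure red, green, blue LED.
cfa_response = np.array([
    [0.92, 0.05, 0.02],
    [0.06, 0.90, 0.07],
    [0.02, 0.08, 0.91],
])
mono_response = np.eye(3)      # monochrome sensor, one exposure per LED: no bleed

pure_green_led = np.array([0.0, 1.0, 0.0])

print(cfa_response @ pure_green_led)    # red and blue sensels read non-zero
print(mono_response @ pure_green_led)   # only the green exposure records signal
```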
 

Adrian Bacon

Subscriber
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format

If it doesn't make sense to you, it just means you haven't reached that level of understanding yet. Nothing wrong with that; we all have to start somewhere. That being said, between you and me, I'm probably a bit more informed than you are, given that I've actually written code that takes a raw image of a negative and turns it into a positive image that Lightroom knows what to do with. If you haven't already, it would probably be helpful to read Adobe's DNG spec, specifically the section dealing with getting raw camera data into XYZ color space.

All that being said, I never said taking a photo is a colorspace transformation. Getting the raw sensor data of an image you took into something your eyes recognize is, though. Scanning color negative film just has the added complexity of the analog colorspace of the film emulsion layered on top of it.

One other note: if you're using software like LR or PS, you're working in a color-managed environment. You might have a raw scan, but the second LR applies the first RGB-to-XYZ transform (as defined in the DNG spec) to the raw pixel data, you aren't raw any more. Everything you do from that point forward is in a color-managed environment that follows all the rules of the given colorspace. The same applies to pretty much every raw processor. The whole point of that matrix is to get you from raw, unmanaged color to a managed colorspace, and every single one of them does that extremely early in the chain, if it isn't the very first thing it does.
 

Adrian Bacon

Subscriber
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format

Well, WRT #1, @pwadoc is missing a couple of things. What he's seeing is probably due to the fact that the curves aren't parallel, and won't be until they've been linearized. Like I said before, there are a couple of steps missing in there.

For number two... how do I explain this in simple terms... There is nothing that says your color primaries have to be red, green, and blue. In the case of both printed paper and color negative film, the color primaries are cyan, magenta, and yellow. In the negative emulsion, a specific combination of cyan, magenta, and yellow densities equals a very specific XYZ coordinate in CIE XYZ color space. So when you take a picture on film, you are making an analog representation of XYZ coordinates using cyan, magenta, and yellow dyes as your primaries. The actual color of those dyes is closely matched to the red, green, and blue color response of RA-4 paper, which in turn creates cyan, magenta, and yellow dyes on the paper when developed, thus giving us a positive reflected image.

Now let's say you digitized that image on the film negative with a DSLR and pulled the image into your favorite raw processor. The raw processing software will proceed to apply its RGB-to-XYZ transform to the raw samples, then apply an XYZ-to-RGB transform to sRGB and display it so that you can look at it. What you see is what you would see if you held the negative up to the light. You now have a color-managed digital RGB image of an image that uses cyan, magenta, and yellow as its primaries. That same combination of cyan, magenta, and yellow densities still means exactly the same XYZ coordinates as before, but now it's being represented as red, green, and blue samples that have been placed in the XYZ color space by the raw processor using the XYZ transform it would use for that camera. Follow me so far? So how do we get to an RGB positive image? Most people just invert it, because that's the intuitive thing to do. We have a very accurate representation of what the actual dye colors and densities are, because our digital camera is very accurate, but simply inverting won't get us there. The cyan in the negative will give us a red when we invert, but unless that red happens to be exactly the opposite hue, brightness, and saturation of the red the film was exposed to when the picture was taken (it's not; it's tuned to the RA-4 paper response), it won't give us an accurate representation of what was actually captured. The only way to get there is to do a color space transform that maps the resulting RGB values to the actual XYZ coordinates they represent. This is why color space transforms have everything to do with scanning film. Does that make sense?
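
Here is a compressed sketch of the two routes being contrasted. The dye-to-XYZ matrix is purely hypothetical, standing in for a real film characterization, and the density handling is simplified:

```python
# Naive inversion vs. mapping mask-corrected dye densities through a
# (purely hypothetical) film matrix into CIE XYZ and then into sRGB.
import numpy as np

XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])
DYE_TO_XYZ = np.array([          # hypothetical characterization of the CMY dye set
    [0.50, 0.30, 0.15],
    [0.25, 0.65, 0.10],
    [0.05, 0.15, 0.80],
])

scan_rgb  = np.array([0.35, 0.22, 0.08])   # camera scan of one negative pixel (0..1)
film_base = np.array([0.80, 0.55, 0.30])   # scan of the unexposed base + orange mask

# Route 1: naive inversion; ignores the mask and the dye characterization.
naive_positive = 1.0 - scan_rgb

# Route 2: remove the mask as density, flip density to rebuild a positive,
# then map through the film matrix into XYZ and on to linear sRGB.
density  = -np.log10(scan_rgb / film_base)
positive = 10.0 ** (-(density.max() - density))
srgb_linear = XYZ_TO_SRGB @ (DYE_TO_XYZ @ positive)

print(naive_positive)
print(np.clip(srgb_linear, 0, 1))
```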
 

radiant

Member
Joined
Aug 18, 2019
Messages
2,135
Location
Europe
Format
Hybrid
Are there any guesses as to what algorithms NLP uses?

As a programmer, I see these conversion programs as LUT converters. Maybe, and probably, the LUT endpoints are adjusted based on some analysis, but it is still a LUT: you change one color into another based on a pre-defined table.
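
For what it's worth, a per-channel 1D LUT of the kind I mean can be as simple as this; the control points are arbitrary examples, not NLP's actual tables:

```python
# A per-channel 1D LUT applied by interpolation; the control points are
# arbitrary examples, not anything NLP actually ships.
import numpy as np

lut_in = np.array([0.00, 0.25, 0.50, 0.75, 1.00])        # input code values
lut_out = {
    "r": np.array([0.00, 0.30, 0.58, 0.82, 1.00]),
    "g": np.array([0.00, 0.28, 0.55, 0.80, 1.00]),
    "b": np.array([0.00, 0.24, 0.50, 0.78, 1.00]),
}

def apply_lut(image: np.ndarray) -> np.ndarray:
    """image: H x W x 3 float array in 0..1; returns the LUT-mapped image."""
    out = np.empty_like(image)
    for i, ch in enumerate(("r", "g", "b")):
        out[..., i] = np.interp(image[..., i], lut_in, lut_out[ch])
    return out

print(apply_lut(np.full((1, 1, 3), 0.4)))
```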

Or are there some local algorithms used too, like kernel operations for example?
 

pwadoc

Member
Joined
Mar 12, 2019
Messages
98
Location
Brooklyn
Format
Multi Format

Adrian is explaining this a lot more simply and accurately than I did. My explanation definitely overcomplicates things. I'm thinking about color spaces as 3D objects being deformed by matrix multiplication. This visualization was helpful for me:

https://www.geogebra.org/m/BvxP2mAz

The color space is represented by the cube. As you change the numbers in the matrix, the cube deforms. Now imagine the exposure curves connect points inside the cube. When the cube deforms, the curves lose their original relation to each other. If we were just rotating or enlarging the cube, the lines would retain their relation to each other, but because the color space matrix transformations deform the space, any relationship defined relative to the original space will no longer hold.
 

JWMster

Member
Joined
Jan 31, 2017
Messages
1,160
Location
Annapolis, MD
Format
Multi Format
FWIW, a recently announced alternative for LR and PS: https://negmaster.com/product/negmaster-full-version/. It claims to have no impact on your sliders by flipping the histogram (or words to that effect). I know nothing of this other than that it's now out there. I've used NLP at this point simply for B&W negative conversions, for which I can say "...uh... okay..." but not much more. Fairly, the problem we're trying to solve here is the orange mask... right? So B&W is probably NOT a good measure. I'll be looking at NLP now that I've developed more C41 and E6... to see what it does there. Not sure whether it does E6, but if it does, I'll be happy, given that I don't like scanning's limits on sliders in my current DSLR approach. So I think I'll give this thing a whirl... NEGsets. Good to have an alternative!
 

removedacct2

Member
Joined
May 26, 2018
Messages
366

these "purple" images ARE NOT the final images. I didn't write that. These are a first output of CP. They are wrong, but close to true, contrary to the overexposed and washed NLP first output. I'll show what these purple pictures are almost there.

Golden hour:
My English is bad, but whatever the term is in English, it's about the moment of the day when the sun is going down, still above the horizon but low enough that, when the sky is clear, it produces a more or less golden light.
For instance, in pictures: before Christmas I took an old camera I wanted to test to a place nearby. It was, I think, 23 December, around 14:00/14:30.
The astronomical map of the sun's movement for that day, time and place, projected over a satellite view (oriented North), shows the point where the sun rises, its path from East to West (yellow arc) and the point where it sets. The rays show the direction of the main illumination; the small arrow in the centre of the circle points at the area I was taking pictures from. Besides the sun's path, its elevation matters, and at this time of the year, around the winter solstice, it is very low:



Here is a photo pointing at the bridge seen at the top of the satellite view. It is not very noticeable, but there is a specific illumination on some poles, and a red and slightly yellow reflection on the foreground snow. The luminosity of the landscape is getting low.




Now a photo pointing a bit towards the shore to the left of the previous one. There is slightly golden light on the trees to the right and on the top of the boat:




Now I turn 180°, my back to the boat, and take a picture towards the other end of the river, to the South-West, so I am shooting into the sun in order to check the flare behaviour of this lens.
The negative is burnt by the sun:



There are different interpretations in the positive inversion. For instance this:



or this:




so, by "golden hour" i meant this position of the sun.
...

Now, the very purple pictures of the cathedrals and towers were taken at a similar moment. It was 28 December 2019 (last Christmas), around 14:00 to 15:00. Vologda is at the same latitude as my place, ~60°N.
The map of the sun's position and path, over the city map, and the position I was shooting from:




Here I have drawn the position of the buildings in the purple photo relative to that satellite view:



Of course the sun isn't hitting directly as in the pictures taken by the river, because buildings in the city and the kremlin itself partially block the lowest rays. But in this purple picture we can see the light/shadow line on the white Sofia cathedral. The square is partially in shadow.

Now, to complete the description of the setting, a couple of photos from my mobile phone. In the days before, I was in a city where the weather was quite dark. I arrived in Vologda at night, and then during the day I had this beautiful weather, so I took a couple of shots to send to a friend ("see what nice weather we have today in Vologda!").

It's not much, just a mobile phone, but not too bad; it has Zeiss optics. The square. Almost no clouds, very sunny. In reality the sky looked a deeper blue. The sun is hitting from my back and a bit to the left; we can see the reflection on the cupolas, which tells the direction of the strongest sunlight. The sun is moving to the left (West).




From the end of the square near the small church, we see the strong glare of the sun from behind the kremlin, low above the horizon (not hanging up in the sky):




So that's the setting. I grabbed a Salyut and a couple of Mir lenses and started taking pictures about one hour after this. Which means that the "white" NLP rendering is a messed-up overexposure. Somehow its algorithm may pump up the luminosity and exposure when averaging the measured brightest points of the negative. Whatever the cause, in this case it's wrong.

The purple picture is wrong but not far off, because the real sky was a deep, darkening blue; there's a gradation from purple to blue.
I could do this in the ColorPerfect panel, but I'll show it in a generic colour balancing tool, here in PhotoLine (I run CP inside PhotoLine). The tool has a preview window, vertically split, showing before/after:



I will just push towards Green, away from Magenta:



That's almost it... see, purple > blue.

I can call up the exposure tool as well:




In order to get as close as possible to the real scene, I can compare with the photos I took just before and after; basically it is something like one of these two, there in the middle, where purple is the CP first try and white the NLP one:






In short: purple was no problem there; it was just a matter of lifting a veil, and the general mood and structure of the picture were already there.

But it is not just about these examples. I have many cases where the CP output is closer to reality. I will post a follow-up soon with more negatives.


I bought NLP at the end of March and have used it a lot, though not exclusively (I still use Gimp a lot), from April until now. I am familiar with it. There is something to say about the presets, but one thing at a time.
Since November, with winter here again, I have noticed NLP wasn't so useful; that's when I went back to archives from last winter. I even re-scanned some rolls just to be sure. I ran tests in NLP and in the CP trial (with a grid embedded in the photo) and bought myself a CP license. Which also meant buying an editor able to run it, so I opted for PhotoLine. All together not much, around 120€, but for a good reason.
 

removedacct2

Member
Joined
May 26, 2018
Messages
366
So, a follow-up on the comparison of NLP (Negative Lab Pro) and CP (ColorPerfect). Four negatives, with the default unmodified output of CP on the left and of NLP on the right. Across this roll there are some color aberrations here and there, mostly near an edge; I guess I wasn't careful with agitation, or the chemicals were getting exhausted, but it doesn't affect the whole image much, nor the comparison.

This one: https://yadi.sk/i/eMpXFnS_u_T1Mg
CP needs some correction, but NLP has blown the exposure/luminosity, a light bomb.






Now, less spectacular:

https://yadi.sk/i/rjXBZQ1u-a1VmQ



The NLP version is too harsh on the whites and on the colors. The first thing that hits you: you can't have this level of luminosity in the snow on the ground with this kind of sky. This is a snowy, slightly foggy day, and the light is dim; that kind of reflection in the snow isn't possible.
Then the details: it's clearly truer on the left side, even if not perfect and in need of a bit of adjustment.




Another detail: with CP the footsteps are visible, but they are very faint in NLP.


One more about the harsh colours in NLP. This: https://yadi.sk/i/19rajbW9OTTd0g





See the yellow and red on the banner, the clothes of the two people, the harsh way the black letters and figures on the car registration plate pop out of a very white background, the colour of the car on the right. It's all overdone in NLP, whereas I could use the CP rendering as-is (well, if the picture were good or of any interest, which is not the case...).

 

JWMster

Member
Joined
Jan 31, 2017
Messages
1,160
Location
Annapolis, MD
Format
Multi Format
FWIW, I went ahead and bought a copy of the NegMaster software and look forward to giving it a whirl side-by-side with Negative Lab Pro. There's promise in the docs and in the rest; their ambition is to leave the image in a colorspace such that the PS controls aren't already pushed to the max simply by the conversion. Having worked manually on DSLR scanning my chromes, that sounded appealing, and NLP hasn't really addressed it according to their forums. Yes, NLP put it down as "maybe on the to-do list". So NM seemed worth a try. Will report. No promise to do so quickly; I'm still catching up on my developing. But in due course...
 

removedacct2

Member
Joined
May 26, 2018
Messages
366

Well, apologies, I didn't stress enough that the negative scans I posted are from an Epson V700. The OP of the thread was thinking mainly about DSLR scanning; then the thread evolved into opinions about colour inversion solutions in general.
I posted this comment about quirks of NLP as a way to illustrate, with actual data, that there is no fully automated solution of the kind the OP seemed to want and seemed to believe NLP provides. NLP certainly does not provide "one-click" automation, as I tried to show, even if it is of course a great tool in its way.
That said, all these photos are Epson V700 scans, not camera scans. So there is no issue with the light source there.

I have done a little scanning by camera, with an APS-C Sony A5000 and a Mir-24 lens on an old stripped-down Durst enlarger stand. My light source is a square ceiling/wall LED lamp sold at DIY and electricians' stores, which has a good distribution and density of LEDs and a decent white diffuser plate; its "warmth" according to the sticker is 3000K, which matches the value computed by the Sony's white balance measurement.
I have only scanned a couple of rolls like this, and inversion in NLP showed nothing notable beyond its usual harsher rendering, as illustrated here. I didn't camera-scan negatives with unusually uneven amounts of white or borderline "golden hour" light either.
What I dislike about camera scanning is the cropping. Ideally I would sample the unexposed film on the leader or trailer and shoot exactly the frame without borders, but I haven't found a combination of lens/tubes/bellows that gives neat framing. This is for 35mm and 120 in 6x9; for 6x6 it is of course impossible unless the camera has a custom function for setting the picture ratio to square.
 

removedacct2

Member
Joined
May 26, 2018
Messages
366

Color inversion algorithms for digital camera scans DO use the "emulsion" characteristics... this is what they boast about in their advertisements: "the new version now has a gazillion further fine-tuned camera/sensor (and lens!) profiles"... The "emulsion" is the combination of the sensor and the graphics processing embedded in the camera. If it's a scanner scan, then you have to set the VueScan/SilverFast profile.

Printing to RA-4 is a photo-chemical process; inversion in software is a digital process. These are processes of a different nature.
Even without getting into this, different films/emulsions do have different renderings, i.e. their photosensitivity is different, i.e. they use different RGB matrices: Ektar 100, Portra 160, Agfa, Fuji, Lucky, etc.

The idea behind the way CP works is to try to be hi-fi relative to the film's characteristics, which do indeed produce different renderings, exactly as when you make an RA-4 print of Ektar 100 vs Lomography 100, for instance.
But then, if you process some negative in CP and it doesn't produce the expected result and you keep facing a cast you can't get rid of, you just create an ad hoc film/emulsion profile, i.e. you use the gamma tools, store the result, and load it the next time you process the same emulsion. So in fact a film emulsion profile isn't mandatory; it just requires more tedious manipulation of lower-level parameters, which can be avoided by doing this step beforehand in film profiles.

These algorithms, NLP, Grain2Pixel, Filmlabapp, CP, Negmaster, all make assumptions at one point or another.
NLP basically provides a menu of ready-made effects: do you want a Frontier-like rendering or a Noritsu-like rendering? It provides LUTs, "mood" filters, Kodak-, Fuji- or Cinestill-like flavors. It tells you that you may sample the film... or not; you may set a WB in LR beforehand... or not; you may pick a pre-saturation... or not; you may use tone profiles... or not. Why not: it may be more user friendly to offer a choice of flavors instead of RGB colour correction, WB, black point, gamma, hue, saturation, grey zones, highlight and shadow thresholds, and so on, as sliders and values.
 