I have a copy of Photoshop CS2 from... a really long time ago (2005), which I paid around $150 for (a discount through some organization or a special, or... I don't remember).
I can still use it. I've got the zip file somewhere that I made from the install media, I've got the activation key, and the last time I tried (admittedly, some time ago), it ran just fine. I might even have a Lightroom key from around the same time, because I was (legally) given a license for an early version of Lightroom. I might even have been an early release user, I don't recall.
$930 is too much to pay for both, when I can get a comparable editing tool for $50 (the current price of Affinity Photo) and a "free" copy of Darktable. Paying rental is a good deal (especially for Adobe, because they're no longer reliant on one-off purchases for their revenue stream)-- right up until Adobe decides to change their licensing model again (they've done it multiple times already), and now I've got all these .PSD documents which might work in another piece of software-- or might not.
I've seen way too many software and hardware projects that relied on "Da Cloud" (or its predecessor) get shut down and turn into useless bits of paper or hardware.
Rental is fine if you don't care about being able to access your work next month.
You will never get good inversions on frames like that. I'm the creator of Grain2Pixel, and I can tell you that there isn't a single-frame algorithm capable of inverting an image like the one you posted with amazing results. Simply put, there isn't enough color information to do a good color balance. All color balance algorithms rely on the colors in the frame to set the gamma values on each channel, and also the input and output levels/curve values on each channel. Without enough color, the color balance will be fooled and the result will only be an approximation.
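To make that concrete, here's a minimal sketch (my own illustration in Python/NumPy, not Grain2Pixel's actual code) of the kind of per-channel levels-and-gamma balancing described above. On a frame that is mostly snow, the per-channel percentiles and means carry almost no real color information, so the estimated levels and gamma end up being little more than guesses:

```python
import numpy as np

def auto_balance(inverted, low_pct=0.5, high_pct=99.5, target_mid=0.45):
    """inverted: float32 RGB array in [0, 1], already inverted from the negative."""
    out = np.empty_like(inverted)
    for c in range(3):
        ch = inverted[..., c]
        lo, hi = np.percentile(ch, [low_pct, high_pct])          # input levels, per channel
        ch = np.clip((ch - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
        # push the channel mean toward a neutral midpoint; on a near-monochrome
        # frame this "mean" says almost nothing about the true color balance
        gamma = np.log(target_mid) / np.log(max(ch.mean(), 1e-6))
        out[..., c] = ch ** gamma
    return out
```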
One way of fixing that is to use a carefully crafted profile that covers that combination of film + sensor + light. That means shooting a frame of that stock with a color chart at 5600K, developing it in fresh C-41, scanning it, then manually creating a set of adjustments on a calibrated screen to match the color chart to reality. Then your snowy cottage will invert just about perfectly. This procedure is a little difficult, it only works for film developed under identical conditions, and you will have to craft profiles for most of the films you use.
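As an aside, here's a hedged sketch of what that profiling step boils down to (my own simplification, not any product's actual workflow): fit a per-channel gain and gamma from the neutral patches of the scanned and inverted chart, then reuse that correction for everything shot on the same film + sensor + light combination:

```python
import numpy as np

def fit_channel_correction(measured, reference):
    """measured, reference: (N, 3) arrays of gray-patch RGB values in (0, 1].
    Fits log(reference) ~= gamma * log(gain * measured) per channel."""
    params = []
    for c in range(3):
        x = np.log(measured[:, c])
        y = np.log(reference[:, c])
        gamma, offset = np.polyfit(x, y, 1)   # slope = gamma, intercept = gamma * log(gain)
        gain = np.exp(offset / gamma)
        params.append((gain, gamma))
    return params

def apply_correction(image, params):
    out = np.empty_like(image)
    for c, (gain, gamma) in enumerate(params):
        out[..., c] = np.clip(gain * image[..., c], 1e-6, 1.0) ** gamma
    return out
```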
Another way is to use external color information from within the same roll. Assuming your roll contains various shots with enough color diversity, you can use information from the other frames to color balance the snowy scene. This is a new feature I implemented in Grain2Pixel; it's in beta and it's called Roll Analysis. It takes a little longer because it has to parse the entire roll first and grab color information from each frame, but the results are much richer and more stable compared with single-frame inversions. Once the roll analysis is done, it's reused for any frame from the roll. This method only works if the selected frames have enough color information; if the roll consists of just 36 exposures of the same scene (and that is a possible scenario), then I would recommend manual conversion.
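Roughly, the idea is this (a sketch of my own, not the actual Roll Analysis code): pool pixel statistics from every frame on the roll, compute the per-channel levels once, and reuse them for frames that don't have enough color of their own:

```python
import numpy as np

def roll_levels(frames, low_pct=0.5, high_pct=99.5):
    """frames: list of float32 RGB arrays from the same roll, already inverted."""
    pooled = np.concatenate([f.reshape(-1, 3) for f in frames], axis=0)
    lo = np.percentile(pooled, low_pct, axis=0)    # one set of input levels for the whole roll
    hi = np.percentile(pooled, high_pct, axis=0)
    return lo, hi

def apply_levels(frame, lo, hi):
    return np.clip((frame - lo) / np.maximum(hi - lo, 1e-6), 0.0, 1.0)
```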
So convert the PSDs to TIFFs or another universally read format. Unless they're vastly complex photo montages that you wish to edit continuously until the end of time, this is an easy solution. Yes, you can license a copy of Affinity Photo (which is really just a perpetual rental if you read the TOS). But we're talking about Adobe software here, not Affinity. Since neither piece of software is expensive, you aren't locked into using PSDs (LR doesn't use them at all), and only LRCC, not Classic or PS, relies on the cloud, I really don't see what the controversy is. Did you know that my Ilford paper EXPIRES even though I paid money for it and OWN it?
Yes, but two points:
1) What are the assumptions behind the renderings offered, and upon what logic? Inversion should be computed relative to the colorimetric intrinsics of the picture. Try to be hi-fi. Interpretations are more for B&W negatives.
2) Very often, when the default LAB Standard differs a lot from reality, the variants are not much better. Not always, many times NLP renders an OK positive, but many other times I found it wonky relative to the default computed by ColorPerfect.
ColorPerfect doesn't offer a menu of possible white balances, undocumented pre-established renderings, and scanner-specific looks (Noritsu, Frontier) like NLP does; it produces ONE rendition based on the film emulsion, and offers a bunch of colorimetric parameters down to fine-tuning by zones.
Anyway, since you mention the other renderings of NLP, for the negative posted earlier, here are, left to right, NLP LAB Soft, Linear, and Flat. They still aren't there in some of the snow and with the red bin on the right. "Flat" in this case would indeed be a good base for a few tweaks. But CP (ColorPerfect) provides a better one when I have the image at higher resolution on the display.
View attachment 263773
I spend four or five months surrounded by snow, and in November-February the days are short, like 11:00-14:30 or 10:00-16:00, and shorter the further north I spend a weekend. Also, the sun is low on the horizon then, with very long twilight; it isn't illuminating from over your head but from the side, which means that with a clear blue sky it easily gets in your way. This can be a PITA, or it can be taken advantage of.
I bought NLP at the end of March, and I'm ditching it now, not only because of recent pictures, but also because I ran it on a bunch of pictures taken before I bought it, which I had processed "by hand" in Gimp.
Here are 3 scans in DNG format, as per NLP's requirements, from an Epson V700 @ 2600 dpi, 48-bit, ~140 MB each; the film is Ektar 100:
https://yadi.sk/i/uBvUIQnfupdYnw
https://yadi.sk/i/cz4O4IQEklWi4g
https://yadi.sk/i/gTaL7ZLQLuLwNw
Now the positives: obtained with ColorPerfect set to Kodak Ektar 100 on the left, and with the NLP default (LAB Standard) with the "WB setting" set to "Kodak" on the right.
1st negative:
The best match is on the left (CP): the blue must be corrected, i.e. blue must be added to restore the deep blue sky, but otherwise the illumination of the square, the dark golden reflection of the dome on the left, and the light/shadow and darkened tone of the paint on the lower building are OK, BECAUSE I took the shot in the late evening (well, it was ~14:00, but the sun was going down). All of that is missing in the NLP positive on the right.
View attachment 263776
2nd negative:
See the evening light cast by the setting sun? It needs some correction, but that's what I was seeing when I took the shot. Again, on the right it's just the wrong season, wrong exposure, wrong luminosity:
View attachment 263779
3rd negative:
Again, the yellow/blue and cyan/red balances must be corrected, and the sky, as in the previous shots, is messed up, but the fix is trivial. You can see what it is: the sun is on the horizon behind the kremlin, casting light on the towers; it's getting dark. It's golden hour.
With NLP, the golden hour is gone.
View attachment 263781
TIFF unfortunately lacks things like history and layers (well, it has the ability to do multiple "layers", but not in the way Photoshop and other tools do). When I save an .afphoto file with Affinity, it's got a lossless copy of my work, with adjustment layers and mask layers, it may have one or more snapshots so I can test multiple approaches, and it has custom-defined export "slices" so I can export multiple versions of the file at once.
You must have a crazy old version of Photoshop. I never save things in PSD format. It's always TIFF, and layers, layer masks, and all that stuff work fine with TIFFs. LR even reads and correctly renders the TIFF file.
If so, that's a very nice step by Adobe into the realm of non-proprietary lock-in. But I'm still not interested in dealing with a license that may or may not be mine, depending on the whim of someone within Adobe.
Today I learned... I was aware of TIFF holding multiple layers, but I had no idea I could save documents with non-destructive adjustment layers + masks into a TIFF and open them in another application. Just tried this with Affinity Photo and it works fine.
@pwadoc The concept of a color space is not applicable to image capture and manipulation. A colorspace is just a method of optimally mapping colors to a limited set of numbers (to optimize their reproduction on a medium with limited capabilities). Film, papers, and digital sensors (any physical object) do not have a "native colorspace" (this is why I think you're confusing colorspace with gamut), and therefore the conclusion your blog post leads with, that scanners, papers, and cameras somehow have incompatible colorspaces and require "conversion", or that sensors should be designed specifically for film, is incorrect, sorry. Play with a RAW converter and notice that a colorspace comes into play only when you export the end result. A RAW converter has access to the full gamut captured by the sensor, and that's what is available to you when you're inverting or doing any other alteration to an image.
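A hedged illustration of that last point (using the rawpy and imageio packages; "scan.dng" is a made-up file name): you can pull the demosaiced sensor data without assigning an output colorspace, do whatever inversion or editing you like on the linear values, and only pick sRGB/Adobe RGB/etc. when you export:

```python
import rawpy
import imageio

with rawpy.imread("scan.dng") as raw:
    # Linear 16-bit data in the camera's own RGB: gamma 1.0, no auto-brightening,
    # and no output colorspace assigned yet.
    rgb = raw.postprocess(
        gamma=(1, 1),
        no_auto_bright=True,
        output_bps=16,
        output_color=rawpy.ColorSpace.raw,
    )

# ... invert / color balance on `rgb` here ...

# A colorspace only really enters the picture when you write the result out
# for display (here we just dump the numbers to a TIFF).
imageio.imwrite("positive.tiff", rgb)
```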
This is a clever analogy, I'll grant you that. May as well keep going further and say that any image acquisition, such as taking a photo of a cat with a digital camera, is a "color space transformation". Dyes in a negative are no different from dyes on the cat's collar.
And this is where you lost me. Inverting is not special. Doing anything with an image relies on the relationship between the primaries. Side note: the curves are not parallel on film, as documented by the Portra datasheet, but let's assume they are. Why would they diverge after inversion? If capturing anything is a "transformation", you should be able to demonstrate how inversion causes color shifts using a regular photo of a cat:
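For what it's worth, here's the trivial experiment I mean, as a sketch (the file name "cat.jpg" is hypothetical): invert a normal digital photo and invert it back. A pure inversion is its own inverse, so nothing drifts:

```python
import numpy as np
import imageio

img = imageio.imread("cat.jpg").astype(np.float32) / 255.0
negative = 1.0 - img          # "make a negative" of the photo
restored = 1.0 - negative     # invert it back

print(np.max(np.abs(restored - img)))   # 0.0: inversion by itself introduces no color shift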
"You can't just take a picture of a color negative with a digital camera and expect it to invert correctly."

@pwadoc The orange mask is a trivial matter: once the light passes through the negative, it is already "cancelled out". Simply applying linear gain to cyan/magenta is all you need. The reason you see color casts in under/overexposed negatives is also simple: the CYM layers aren't equally sensitive to light, because they "sit on top of each other".
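A rough sketch of that "linear gain" idea (my own illustration in NumPy, not a quote from any particular tool): sample the unexposed film base, divide each channel by it, and invert. The orange mask drops out as a constant per-channel factor:

```python
import numpy as np

def invert_negative(scan, base_rgb):
    """scan: float32 linear RGB of the negative scan, values in (0, 1].
    base_rgb: RGB of the unexposed film base (the orange mask),
    e.g. sampled from the rebate between frames."""
    t = scan / np.asarray(base_rgb, dtype=np.float32)   # per-channel gain cancels the mask
    positive = 1.0 / np.clip(t, 1e-4, None)             # dense negative -> bright positive
    return positive / positive.max()                     # normalize for display
```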
This leaves us with "colorspace transforms". The first "transformation" (the process of capturing) is just a rhetorical device to me. I can even argue that the process of seeing something is also a "colorspace transformation". And the second transformation never happens, as you can be operating in the working colorspace of the raw converter the whole time (as you do when you use a tool like Negative Lab Pro, or when inverting manually in Capture One, like I used to).
Again, I owe it to myself to read the long article you linked above (looks yummy, thanks for that!) and I am open to the possibility that I'm missing something, but so far I am not seeing anything in your comments. You are basically saying that color space transformations alter the colors we see. OK, not always, and it depends on the monitor, but generally, sure. But what transformations are you talking about?
Option one: a "transformation" during capture. If true, show it to me by taking a digital photo of a cat and inverting it back and forth, as I asked above. You do not need a negative for that.
Option two: a transformation post-capture. Which one? When I'm sitting in Lightroom, I am perpetually in the enormous colorspace it uses internally (I don't know which one); there are no transformations happening until I export a JPEG.
You have written code to deal with RAW files, isn't RAW data linear already?
Isn't that by design? That's what defines the "look" of an emulsion, and slight contrast variations between channels should not be normalized.
* Colorspace transformations have nothing to do with negative digitization and color inversion. These two concepts are orthogonal.
* Having 3 captures with 3 light sources vs a single one with a high-quality light source and a color filter array can produce the same color.
This continues to make zero sense and leans towards "taking a photo is a colorspace transformation". I have a problem with it conceptually, as it doesn't map to what I have read about color management at all. I also have a problem with it because it doesn't map to my personal experience scanning and inverting color negatives. I've been getting results far superior to the RA-4 lab prints I have collected from the 90s, or to my local lab's Noritsu, and I never ran into the issue of "diverging curves". However, this comment does not require an answer, as I will be taking this information as input for homework.
No doubt. I am well aware of simple image tools, and everything in your comment makes perfect sense. Yet I don't see how any of this has anything to do with color inversion, so let me pinpoint exactly where I am lost. There are two implied concepts I currently do not accept:
- I just can't see how two parallel lines can possibly stop being parallel after a matrix multiplication; linear algebra is called linear for a reason (see the quick check after this list). I will go and check how that transform is done, perhaps it's more convoluted than simple matrix math. This seems to be the key statement in the blog post: that in the process of creating a linear RGB image from color negative film we introduce color shifts in the highlights/shadows. WTF.
- I reject placing any significance on film color dyes forming a special "color space" that needs to be treated differently. I do not see film dyes as being any different from the dyes on my cat's collar when I photograph it, and therefore emulsion contrast is irrelevant for capture and irrelevant for inversion: any minor differences in contrast between the CYM layers will be captured and preserved post-inversion, i.e. they should have no effect on the end result.
Looking at the DNG spec may shed some light on #1, but #2 remains a mystery to me.
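And the quick numerical check mentioned in the first point, as a sketch: take two parallel lines in RGB space (say, two exposure series separated by a constant offset) and push them through an arbitrary 3x3 matrix; the offset between them stays one constant vector, i.e. the lines stay parallel:

```python
import numpy as np

M = np.array([[0.6, 0.3, 0.1],     # some arbitrary "camera RGB -> working space" matrix
              [0.2, 0.7, 0.1],
              [0.1, 0.1, 0.8]])

direction = np.array([1.0, 0.9, 1.1])    # common direction of both lines
offset = np.array([0.05, -0.02, 0.03])   # constant offset between them

t = np.linspace(0, 1, 5)[:, None]
line_a = t * direction
line_b = t * direction + offset

out_a = line_a @ M.T
out_b = line_b @ M.T

# Every row of the difference is the same vector (M @ offset): still parallel.
print(out_b - out_a)
```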
We have a very accurate representation of what the actual dye colors and densities are, because our digital camera is very accurate, but simply inverting it won't get us there. The cyan in the negative will give us a red when we invert, but unless that cyan happens to be the exact opposite in hue, brightness, and saturation of the red the film was exposed to when the picture was taken (it's not; it's tuned to the RA-4 paper response), the inversion won't give us an accurate representation of what was actually captured. The only way to get there is to then do a color space transform that maps the resulting RGB values to the actual XYZ coordinates they represent. This is why color space transforms have everything to do with scanning film. Does that make sense?
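To sketch what that mapping looks like in practice (the matrix below is made up; a real one would come from profiling the camera or scanner against known patches):

```python
import numpy as np

# Hypothetical profiling matrix; not real data.
CAMERA_RGB_TO_XYZ = np.array([
    [0.41, 0.36, 0.18],
    [0.21, 0.72, 0.07],
    [0.02, 0.12, 0.95],
])

def to_xyz(inverted_rgb):
    """inverted_rgb: (..., 3) linear camera-RGB values of the inverted scan."""
    return inverted_rgb @ CAMERA_RGB_TO_XYZ.T
```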
I don't see a golden hour in your pics, I see a purple hour. Basically purple in all the images you favour. All the NLP images look much better, and you can adjust the exposure and the luminosity to bring it down if you want to show more of that golden light. For some reason you are fine with having to adjust your purple pics, but have a problem with having to adjust the NLP pics.
NLP makes it very, very easy to get the result you want, either by using the presets, or by using the presets and then making slight adjustments. It seems that you chose to show NLP in the worst possible way to prove a point. Kind of like claiming a lens is unsharp while ignoring the fact that it hasn't been focused correctly.
But what light source are you using to scan these? Some of the issues you're experiencing look similar to problems I encountered that ended up being related to the light I was using having gaps at R9 and R12 while still having a high overall CRI rating.
This, to me, is an indicator of something not done properly inside of ColorPerfect. It does not need to know the emulsion. Its purpose is to be a "digital paper". Just like RA4 paper does not care about emulsions, neither should a color inversion algorithm.