Completion of density curves?

Romanko

Member
Joined
Sep 3, 2021
Messages
889
Location
Sydney, Australia
Format
Medium Format
Let me ask a question first. Do you have a densitometer?
I am not quite sure how a densitometer could help with converting an RGB digital image to a monochrome digital image. It certainly helps with analyzing a negative, but as far as I understand, the OP's process is purely digital.
 
OP
AbsurdePhoton

Member
Joined
May 13, 2024
Messages
70
Location
Paris, France
Format
Hybrid
Let me ask a question first. Do you have a densitometer?
No, I don't have a densitometer.

This is a very good point. Are you using a RAW file or a processed image file? I suggest you start with a RAW file with linear gamma.
I am using "basic" image files, but my system works in the OKHSL color space, which implies: RGB -> linear RGB -> XYZ -> OKHSL, so the "gray" data from the image is linear. I chose OKHSL because the whites, blacks and grays are well distributed along the gray scale.
But, yes, I could use RAW files too; I chose not to for now. Later, surely, for better precision.
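
A minimal sketch of that linearization step, assuming the standard sRGB transfer function and Björn Ottosson's published OKLab coefficients (an illustration only, not the OP's actual code; OKHSL then remaps this lightness further):

import numpy as np

def srgb_to_linear(c):
    # Inverse sRGB transfer function (IEC 61966-2-1); c in [0, 1]
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def oklab_lightness(rgb_linear):
    # OKLab L from linear sRGB, using Ottosson's reference matrices
    r, g, b = rgb_linear
    l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
    m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
    s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b
    l_, m_, s_ = np.cbrt(l), np.cbrt(m), np.cbrt(s)
    return 0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_

# Example: a mid-gray sRGB pixel
print(oklab_lightness(srgb_to_linear([0.5, 0.5, 0.5])))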
 

Bill Burk

Subscriber
Joined
Feb 9, 2010
Messages
9,332
Format
4x5 Format
Looks good! Your results came fast!

Can you add a dimension of spectral sensitivity? Try a simulation of orthochromatic film for example or different types of panchromatic film.

It also looks like you have “reversed” each image, turning it into a positive. I suspect that you may have made a linear transformation (I think you just confirmed this), when in reality printing paper has its own curve. Specifically, paper has a more pronounced S-shape which superimposes itself onto the film curve.

I don’t know if you considered the different curves of greater or lesser development.

Ansel Adams often fought against what he called the “soot and chalk” of your overexposed example. In the first hundred years of photography, people who enjoyed photography didn’t always have good lab practice. Not knowing better, they would overexpose and overdevelop film.

You could turn your system into a modular one. Then you could dial in exposure and development of the film separately from the print.

If you want more fun you could try superimposing the curve of a reducer. For example, a superproportional reducer can change the film curve.
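
A rough sketch of that "curve on curve" idea, using made-up film and paper S-curves (purely illustrative numbers, not measured data): the negative's density subtracts from the light reaching the paper, and the paper's steeper curve then reshapes the tones.

import numpy as np

def film_density(log_e, d_min=0.1, d_max=2.2, steep=1.2):
    # Toy H&D curve for the negative: gentle S from base+fog up to Dmax
    return d_min + (d_max - d_min) / (1.0 + np.exp(-steep * log_e))

def paper_density(log_e, d_min=0.05, d_max=2.1, steep=3.0):
    # Toy paper curve: a much steeper S-shape, which is what superimposes
    # the extra contrast onto the film curve
    return d_min + (d_max - d_min) / (1.0 + np.exp(-steep * log_e))

scene_log_e = np.linspace(-2.0, 2.0, 9)   # scene log exposures
neg_d = film_density(scene_log_e)         # negative densities
print_log_e = 1.0 - neg_d                 # denser negative = less light on the paper
print(np.round(paper_density(print_log_e), 2))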
 

Bill Burk

Subscriber
Joined
Feb 9, 2010
Messages
9,332
Format
4x5 Format
Never heard of OKHSL. Is it a single person’s model (Björn Ottosson on Github)?
 
OP
AbsurdePhoton

Member
Joined
May 13, 2024
Messages
70
Location
Paris, France
Format
Hybrid
Never heard of OKHSL. Is it a single person’s model (Björn Ottosson on Github)?

Yes, it originated from Björn, but since he published it with a "libre" license, OKLAB (and derivatives) found its way into the latest CSS specifications, and even, very quickly, into some Photoshop functions in 2021 (for color gradients).
I studied this color space, and it is way better than CIE Lab* in terms of computational complexity, color space uniformity, and color distance computation, which is just a Euclidean distance versus CIE's infamous CIEDE2000 function (patches upon patches instead of reconsidering the CIE Lab* base, which is exactly what Björn did).
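
In concrete terms (a tiny sketch with made-up OKLab triplets, not taken from the OP's code): a color difference in OKLab is meant to be a plain Euclidean norm, whereas CIELAB needs the much heavier CIEDE2000 formula to compensate for its non-uniformity.

import numpy as np

def oklab_distance(lab1, lab2):
    # Perceptual color difference in OKLab: plain Euclidean distance,
    # no CIEDE2000-style correction terms needed
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

print(oklab_distance((0.70, 0.05, 0.02), (0.65, 0.02, -0.01)))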

It also looks like you have “reversed” each image, turning it into a positive

Yes, I am aware and convinced that a photo is something that has to be "printed" (errrm, developed). It will be easy for me to add the final step with general print paper curves, with the possibility to vary the effects (more or less contrast, tinting, etc.) and even add a surface aspect (glossy, matte). In fact my system is already modular; for the moment I am just focusing on the "pure" film part. The rest will come after.

And by the way, the original question (completing density curves) is resolved, now that I can use the density curves "as is" without modifications. Once again, thanks to you all, it made me think the right way.
 
OP
AbsurdePhoton

Member
Joined
May 13, 2024
Messages
70
Location
Paris, France
Format
Hybrid
Can you add a dimension of spectral sensitivity? Try a simulation of orthochromatic film for example or different types of panchromatic film.

Of course I can, it's a matter of minutes: just copying the data profiles I created for other films to replace the "neutral" spectral curve I used above.
So far I have created about 20 profiles for different B&W brands/films and about ten for color films, including Polaroids. I even recently found data for different collodion mixes somewhere.

If you want more fun you could try superimposing the curve of a reducer. For example, a superproportional reducer can change the film curve.

I'm always ready for fun, but all will be done in good time (or not). I can already stretch the density curves vertically.
In fact I started all this because I wanted to check the results of several Photoshop plugins claiming to transform images into "film" versions. What caught my eye was that the results were inconsistent: each plugin had its own version of what the result with, for example, Ilford HP5+ would be.
It started with curiosity, continued with science, and most of all it was (and still is) fun. I learned a great deal and it is not finished :wink:

I'll add tests here when I come back from work.
 
Last edited:

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,186
Format
Multi Format
If you don't have a densitometer you could use a digital camera to simulate one. It would take a few accessories (a cheap microscope objective, extension tubes, a couple of adapters, and some sort of light table).

I explained how to do this in a thread titled "Using a digital camera as a densitometer", started July 11, 2022.

My thought is that you would not need to take many measurements in order to draw in the approximate shape of the curve. Just measure the base plus fog for the appropriate development conditions, and then sort of eyeball the shape of the curve between the curves that Foma gives and the base plus fog region. This should get you a not-too-terrible result for the toe region.

The high density region is a little trickier, but you could measure the density for a fairly heavily over-exposed film developed for the conditions that Foma lists. I would probably avoid an extremely over-exposed region; I don't know, but maybe a few stops (perhaps ~4 stops?) above the upper end of the curve that Foma lists. You will want to avoid the exposure region likely to give density reversal. That would give you a fairly good value for the maximum density. Then eyeball the shape of the curve going from the curves that Foma gives to the max densities that you measure for the different development conditions.

The scheme outlined above would give only an approximate result, but it would probably be a useful result, especially for regions of the curve that are not too far off from the ends of the curves that Foma gives, and it would probably be more accurate in the toe than in the shoulder.
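
As a sketch of that eyeballing step (an illustration of the idea, assuming a simple slope-matched exponential blend toward the measured base+fog and Dmax, not a calibrated model):

import numpy as np

def extend_hd_curve(log_e, density, base_fog, d_max, span=0.6, n=4):
    # Extend a published H&D curve (log exposure vs density) toward a measured
    # base+fog on the toe side and a measured Dmax on the shoulder side, using
    # exponentials whose slopes match the published end points.
    log_e = np.asarray(log_e, dtype=float)
    density = np.asarray(density, dtype=float)

    # Toe: approach base+fog to the left of the first published point
    toe_x = log_e[0] - np.linspace(span, 0.0, n + 1)[:-1]
    k_toe = (density[1] - density[0]) / (log_e[1] - log_e[0]) / max(density[0] - base_fog, 1e-6)
    toe_d = base_fog + (density[0] - base_fog) * np.exp(k_toe * (toe_x - log_e[0]))

    # Shoulder: approach Dmax to the right of the last published point
    sh_x = log_e[-1] + np.linspace(0.0, span, n + 1)[1:]
    k_sh = (density[-1] - density[-2]) / (log_e[-1] - log_e[-2]) / max(d_max - density[-1], 1e-6)
    sh_d = d_max - (d_max - density[-1]) * np.exp(-k_sh * (sh_x - log_e[-1]))

    return np.concatenate([toe_x, log_e, sh_x]), np.concatenate([toe_d, density, sh_d])

# Example with a made-up published curve segment
x, d = extend_hd_curve([-1.0, 0.0, 1.0], [0.4, 1.0, 1.6], base_fog=0.15, d_max=2.2)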
 
OP
AbsurdePhoton

Member
Joined
May 13, 2024
Messages
70
Location
Paris, France
Format
Hybrid
You asked for it... here is the same photo with the density curve from Mees' book and Ilford Ortho Plus spectral data.
Is it ortho enough for your taste? The red lips and the dress are now almost black. And she has even more fantastic eyes 😋

Typhaine-Forêt-078-1024-palette-ortho.jpg


Bonus: the Ortho Plus spectral chart I used (this is from the interface of the program). As you can see, my spectral data is always normalized, because manufacturers don't present data the same way. That's the reason why I can't have "real" values for exposure.

Screenshot at 2024-05-21 22-38-26.jpg
 
Last edited:
OP
AbsurdePhoton

Member
Joined
May 13, 2024
Messages
70
Location
Paris, France
Format
Hybrid
Now with an Ilford HP5+ spectral chart, and the same curve from Mees' book.

Typhaine-Forêt-078-1024-palette-hp5+.jpg


And just for fun, the same with a Hoya R1 filter (red). Yes, I implemented color filters too: I did all the interesting current Hoya filters, excluding ND ones (22 of them), and all the old Wratten filters I could get data for (128 of them). For the Wratten filters I extracted all the data from the charts in an old Kodak PDF file, semi-automatically. Thanks Linux and all the marvelous tools you can find for this sort of thing!
Now Typhaine looks a bit out of this world.

Typhaine-Forêt-078-1024-palette-hp5+-hoya R1.jpg


If you see something that seems off, just tell me; maybe I did something wrong in my implementation.
 

Bill Burk

Subscriber
Joined
Feb 9, 2010
Messages
9,332
Format
4x5 Format
Those are great! Now, film developing has curve families depending on how long you develop. As you discovered, great overexposure can lead to reversal.

But what are the effects of simulating (trying to find) the exact right exposure, with variations in developing time that suit it best?

My example curve family is TMAX, which has a very sharp toe, but if you take the Mees curve (or a Tri-X curve) and give it varying contrast, you can see a benefit to placing some of the shadow detail on the toe. For examples of using the toe to pictorial advantage, I would refer to Way Beyond Monochrome by @RalphLambrecht.

Then (the reason for making it modular) match the print “paper” contrast to the negative (so contrastier paper to match a flatter negative). A high contrast negative printed on low contrast paper may look the same as a low contrast negative printed on high contrast paper, but at the extremes there will be weird results (like your last one, which didn’t look “quite” as good as the others).

You may be able to find the best combination, which is likely to be the equivalent of ASA exposure on film developed to 0.62 contrast, printed on grade 2 paper in a diffusion enlarger.

IMG_6195.jpeg
 
Last edited:
OP
AbsurdePhoton

Member
Joined
May 13, 2024
Messages
70
Location
Paris, France
Format
Hybrid
Now film developing has curve families depending on how long you develop

I don't know if sticking that closely to reality will be more effective than obtaining a direct positive and then using classic retouching functions on it, if the result of virtually printing on paper is mostly about contrast. As I wrote above, there are nice things to add too, like tinting, paper surface aspect, paper grain, etc.

For the moment I'm just trying to simulate the film properties as well as I can, and you (all) helped a lot to make me understand I was wrong about the way I simulated the grayscale from densities, and now I don't even have to think about completing curves, my system is good enough to exploit the raw curves without any adaptation, the solarization test was convincing.

The next step is perfecting film grain, I already achieved good results but I am not that satisfied, I'll surely ask other questions later.

And frankly I appreciate the spirit of these forums, it feels like I'm in good company.
 
Last edited:
OP
AbsurdePhoton

Member
Joined
May 13, 2024
Messages
70
Location
Paris, France
Format
Hybrid
I am wondering what algorithm you used to implement filters

I am already using spectral data for the films (you saw the ortho and panchro examples), so all I had to do was find a way to combine it with the spectral data of (color) filters. This kind of data is easily available from the manufacturers' technical documents. These are NOT normalized, because the amount at each wavelength is important (for example, a 20% red filter vs a 50% one).
I am also thinking about the possibility of changing the light source (daylight, tungsten, etc.); this is all about spectral data too. For the moment I am just using a "flat" spectral curve.
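
A minimal sketch of that combination step (hypothetical helper names, not the OP's code), on the 380-730 nm / 10 nm grid used elsewhere in the thread: film sensitivity, filter transmission and, optionally, the light source are just multiplied wavelength by wavelength.

import numpy as np

WAVELENGTHS = np.arange(380, 731, 10)   # 380-730 nm in 10 nm steps

def effective_sensitivity(film_sens, filter_trans, illuminant=None):
    # Per-wavelength product of the film's (normalized) spectral sensitivity,
    # the filter's absolute transmission and, optionally, the light source's
    # spectral power; all three sampled on WAVELENGTHS.
    out = np.asarray(film_sens, dtype=float) * np.asarray(filter_trans, dtype=float)
    if illuminant is not None:
        out = out * np.asarray(illuminant, dtype=float)
    return out

# Example: a "flat" light source, as currently used
flat_light = np.ones_like(WAVELENGTHS, dtype=float)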
 
Last edited:

Romanko

Member
Joined
Sep 3, 2021
Messages
889
Location
Sydney, Australia
Format
Medium Format
I am already using spectral data for the films

Your Ortho and Panchromatic examples look very convincing. I would like to understand how the spectral sensitivity curve of a film or film with a filter is applied to the input RGB image. I tried to figure out how to convert from (r, g, b) to wavelength but could not find a satisfactory solution.
 
OP
AbsurdePhoton

Member
Joined
May 13, 2024
Messages
70
Location
Paris, France
Format
Hybrid
Your Ortho and Panchromatic examples look very convincing. I would like to understand how the spectral sensitivity curve of a film or film with a filter is applied to the input RGB image. I tried to figure out how to convert from (r, g, b) to wavelength but could not find a satisfactory solution.

Ah, this is a good question: how to go from RGB to spectral is much debated. There are naive solutions (that almost work) and complicated solutions (that need too much computing power). Always take the "middle way"...

For the rest: even for B&W results, I take the RGB color values of each original image pixel (linear values, in fact an intensity). From these I compute wavelength levels, discretized at 10 nm intervals between 380 and 730 nm, which gives a new spectral chart for that color (using some specially crafted data derived from the daylight spectrum). Using the same system for all my curves, I can add, subtract, multiply and divide them, combining them to obtain different effects (additive and subtractive color mixing, for example).
For color filters, it is just a multiplication, after preparing a special spectral chart from the original. It is all a question of discretized intensity levels.
Finally, I use the discretized values of the "final" spectrum to compute a global "level of energy", which is used with the H&D curve as the "exposure" and gives me a density for each original pixel.
Color films work the same way, they are just B&W layers on top of each other, responding to a certain part of the spectrum. Modern negative films have a fourth orange layer that must be considered in the process.
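
Put as pseudo-Python, the per-pixel pipeline described above might look roughly like this (a reconstruction under assumptions: rgb_to_spectrum stands in for the OP's "specially crafted" RGB-to-spectral upsampling, and the exposure is a simple discrete sum over the 10 nm bins):

import numpy as np

WAVELENGTHS = np.arange(380, 731, 10)   # 380-730 nm, 10 nm steps
STEP_NM = 10.0

def pixel_to_density(rgb_linear, rgb_to_spectrum, film_sens, hd_log_e, hd_density):
    # 1. Upsample the linear RGB pixel to a discretized spectrum (36 bins)
    spectrum = rgb_to_spectrum(rgb_linear)
    # 2. Weight by the emulsion's spectral sensitivity (filters would multiply in here too)
    weighted = spectrum * film_sens
    # 3. Integrate to a single "energy" value, i.e. the exposure seen by the film
    exposure = float(np.sum(weighted) * STEP_NM)
    # 4. Look the density up on the H&D curve (log exposure vs density)
    return float(np.interp(np.log10(max(exposure, 1e-8)), hd_log_e, hd_density))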

This is why my examples look so good: all the computing is based on real physical principles! Well, a bit tuned, but the basic principles are respected.

I could check what I produce against digital-vs-film photo examples found on the web, shot from the same point of view, so comparison is easy. I took the digital version, applied the same "film" and compared my result with the "real" film version. That's how I could improve and tune the whole computing process, trying to stick more and more closely to "real life" examples.

This is the first time I'm showing this to people who have much more analog photography experience than me... and my work seems to be appreciated. If other people want to comment no problem :smile:
 

Bill Burk

Subscriber
Joined
Feb 9, 2010
Messages
9,332
Format
4x5 Format
I don't know if sticking that closely to reality will be more effective than obtaining a direct positive and then using classic retouching functions on it, if the result of virtually printing on paper is mostly about contrast. As I wrote above, there are nice things to add too, like tinting, paper surface aspect, paper grain, etc.

For the moment I'm just trying to simulate the film properties as well as I can, and you (all) helped a lot to make me understand I was wrong about the way I simulated the grayscale from densities, and now I don't even have to think about completing curves, my system is good enough to exploit the raw curves without any adaptation, the solarization test was convincing.

The next step is perfecting film grain, I already achieved good results but I am not that satisfied, I'll surely ask other questions later.

And frankly I appreciate the spirit of these forums, it feels like I'm in good company.

Thanks!

I think the benefit of simulating the film and print separately, including the contrast variations, is that you are not trying to make the “best” picture; you’re trying to simulate how the constraints of silver shaped the expression of black and white film photography.

You might even be able to reverse engineer questions like how did Hurrell get that look?
 
OP
AbsurdePhoton

Member
Joined
May 13, 2024
Messages
70
Location
Paris, France
Format
Hybrid
You might even be able to reverse engineer questions like how did Hurrell get that look?

Ah, this is another subject. Technically, maybe some clues could be found, but there's also something intangible. Beauty is in the eye of the beholder.

Reversing the process would show the "original" scene (maybe even the original colors with more work), but it would lose all the charm - and I don't want to lose that :wink:
 

Bill Burk

Subscriber
Joined
Feb 9, 2010
Messages
9,332
Format
4x5 Format
Ah, this is another subject. Technically, maybe some clues could be found, but there's also something intangible. Beauty is in the eye of the beholder.

Reversing the process would show the "original" scene (maybe even the original colors with more work), but it would lose all the charm - and I don't want to lose that :wink:

I’m thinking of something along the lines of a set of “pickers” where you dial up negative contrast and exposure, dial down print contrast and exposure, dial in color filters, dial in spectral response, and show what a black and white print of a scene would look like with those choices.
 
OP
AbsurdePhoton

Member
Joined
May 13, 2024
Messages
70
Location
Paris, France
Format
Hybrid
I know it looks like I was fast, but frankly, using transmission instead of a defined gray scale took me half an hour, just changing the code a bit and testing; it was very simple. I've been on this project for three months in my spare time, and the rest was already done. I only sleep four and a half hours per night, which helps too (I'm a creature of the night).

That said, your idea looks simple at first sight, but when I thought about it later, it is not that obvious:

- three main chained operations produce the final gray values, the complexity is high
- uncertainty about clipped values in dark and bright regions: one to many possible values
- find a way to "compress" the data to feed it to my reverse spectrum function, which works in a pre-defined range
- uncertainty about colors: given a gray value, there are often several "inverted" solutions - this problem for the moment was partly resolved with machine learning, but nothing entirely automatic works out of the box (I know, I tried several)
- uncertainties about the parameters at the beginning: which values did the photographer use? Camera, lens, shutter speed, film (films differ from batch to batch), how the film was really processed, etc.
- and finally the main problem for me: the interest. Is it really worth the effort? I had a similar feeling when I discovered that some famous photographers of the digital era took horrible photos from the start (bad lighting, composition, etc.). A lot of their magic in fact relies on post-production. "Old/historic" photographers didn't have such possibilities and so couldn't "cheat" as much as now. So for me, debunking this question ("how did he do that") is not really a good question, because it is only technical. Give a good photographer a bad camera and his shots will still be at least very good; the reverse is not true. And that, you can't quantify.
 
Last edited: