
Algorithm to find characteristic curve

Stephen Benskin
I've written a four-quadrant reproduction curve program in Visual Basic. The plotting is done as a simple X,Y graph, and the analysis is done with the VB point function. It's a relatively simple approach using basic math skills, and yet it's been invaluable.

Attached is an example.
 

Photo Engineer

This is the correct approach. I used it many times at EK. However, you must also include the silver criterion of the negative film, caused by the tone of the silver image, which is not neutral.

PE
 

alanrockwood



I'm not a math major, but I have had courses in numerical analysis, and I have dabbled with curve fitting a bit over the years, including fitting of characteristic curves for my own amusement and for film testing.

There are several concepts you should keep in mind. First, there is probably no mathematical function that is perfectly adapted to fitting characteristic curves, so you are going to have to choose the lesser of the evils.

Second, make sure you are "fitting" the curve rather than "interpolating". By this I mean that you should seek a smooth curve that goes "close to" your data points without trying to go "through" all the data points. The reason is that experimental data has "noise", i.e. statistical uncertainty. If you try to find a function that goes through all the points you will actually be producing a lower quality result than if you find a smooth curve that goes close to the points. In effect the noise will be the killer.

Third, you should try to find a fitting curve whose functional form (i.e. "shape") looks quite a bit like the curve you would get if you had densely packed, noiseless data from your experiments. There are several reasons for this, but the most important are that your errors are likely to be lower than with functions that do not resemble the underlying functional form of the data, and that you will be able to fit the data with the fewest adjustable parameters.

The paragraph above implies that the optimal functions are probably not polynomials or cubic splines. Those functions do not have a shape that naturally mimics the shape of characteristic curves, and they will require you to use more adjustable parameters than if you started with more suitable curves. Furthermore, you are more likely to encounter "pathological" behaviors with those functions. Polynomials are especially bad in this regard, and can often produce poor results for data having asymptotic tails (like the toe of the characteristic curve) and noise. What can happen is that in order to get an apparently good fit to the data you will have to use a high-order polynomial, and then your equation may tend to have wild oscillations between the data points and in the range outside the data. In fact, I can guarantee you that a finite-order polynomial will not fit the toe end of the curve if you were to extend it to zero exposure.
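To make the pitfall concrete, here is a minimal sketch (Python with NumPy; the data are made up for illustration, not from anyone's actual film tests): a deliberately high-order polynomial fit to noisy S-shaped data behaves tolerably near the points but runs away just outside the measured range.

import numpy as np

rng = np.random.default_rng(0)
log_e = np.linspace(-2.0, 1.0, 12)                    # hypothetical log-exposure steps
true_d = 0.1 + 1.8 / (1.0 + np.exp(-3.0 * log_e))     # idealized S-shaped curve
density = true_d + rng.normal(0.0, 0.02, log_e.size)  # simulated measurement noise

# Deliberately over-parameterized; NumPy may even warn that the fit is
# poorly conditioned, which underscores the point.
coeffs = np.polyfit(log_e, density, deg=9)

# Just outside the measured range the polynomial diverges, while the
# underlying curve would flatten toward base+fog and D-max.
print(np.polyval(coeffs, -2.5), np.polyval(coeffs, 1.5))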

Cubic splines are probably better, and according to PE have been used successfully. However, there are still some potential pitfalls. Each segment of a cubic spline requires four adjustable parameters (i.e. "degrees of freedom"). This means that for a single-segment fit you have to have more than four points in the data set. If you have exactly four points then you are "interpolating" the data, i.e. making the curve go through every point, and then you become susceptible to noise and oscillations between the points. (If you have fewer than four points then there is no unique answer, i.e. there are infinitely many different answers that would all pass through every point of the data, but I digress.) Every additional segment of spline you include in the fit requires more data points. It is not four additional points per segment, because you use up some degrees of freedom when you constrain the curve to match the ends between the segments in certain ways, but nevertheless you will need a lot of points if you use very many segments in a cubic spline fit.

A more suitable set of functions would be the integral of some kind of bell-shaped curve, or in some cases the sum of more than one such curve. There are a lot of possibilities you could try, but I have had some luck using the integral of scaled forms of so-called "Gaussian" curves, also known as the normal distribution in statistics. If you only want to fit the toe of the curve along with part of the curve somewhat approaching the linear region then you might be able to get away with a single (integrated) Gaussian curve, plus an additive constant (offset) representing base+fog. If you can determine base+fog separately then you can subtract it from the data and dispense with the additive constant in your fit. In the first version (including an adjustable constant offset) you would end up with four adjustable parameters, three of which come from the Gaussian function and one of which is the offset. If you use the second version then you have three adjustable parameters, all from the Gaussian function.
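Here is what that might look like in practice, as a minimal sketch (Python with SciPy; the data points and names are invented for illustration): the fitting function is an offset plus a scaled cumulative Gaussian, giving the four adjustable parameters described above.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def int_gauss(log_e, offset, scale, mu, sigma):
    # offset approximates base+fog; scale, mu, sigma are the three
    # parameters of the (integrated) Gaussian.
    return offset + scale * norm.cdf(log_e, loc=mu, scale=sigma)

log_e = np.array([-2.0, -1.7, -1.4, -1.1, -0.8, -0.5, -0.2, 0.1, 0.4])
density = np.array([0.11, 0.13, 0.18, 0.28, 0.45, 0.68, 0.95, 1.22, 1.45])

# Non-linear fits need a rough initial guess (see the convergence caveat below).
p0 = [0.1, 2.0, 0.2, 1.0]
params, _ = curve_fit(int_gauss, log_e, density, p0=p0)
print(params)  # fitted offset, scale, mu, sigma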

I would bet money that you will end up with a much better fit this way than if you started with polynomials or cubic spline functions having the same number of degrees of freedom. When I say "better" I mean that it is likely to be a truer representation of the underlying physical function and less subject to noise. Speaking of noise, this general scheme results in a smoothing effect which tends to minimize the effect of noise on the quality of the final result.

You can also add more Gaussian functions to the fit. There are positive and negative consequences to doing this, which I will not go into right now.

In fitting a function you will need to pick a so-called "objective function" which provides a measure of the goodness of fit. There are a lot of choices, but the overwhelming favorite is to minimize the sum of the squares of the differences between your data and the fit. These differences are called "residuals."

One disadvantage of the scheme I outlined above is that you will end up doing a so-called non-linear fit, whereas with certain other combinations of functions (e.g. polynomials) and objective functions (e.g. sum of squares of residuals) you only have to solve a set of linear equations. In addition, the scheme I outlined can be sensitive to the quality of your initial guess, and can converge to a "bad" solution if the initial guess is way out of line. However, with modern computers it is not too difficult to do non-linear fits, and it is not too difficult to pick an initial guess that will converge to the right solution.

My favorite program for doing non-linear least squares fits to user-supplied functions is called "psi-plot". It does most of the math for you. However, I don't think it would interface well to the problem you have in mind due to the issue of putting the data in and getting the results out in a convenient way. "Excel" might have the ability to do non-linear least squares, and some other vector-oriented packages no doubt have this capability as well. However, you might have to end up writing your own program, which would be somewhat of a pain in the neck.
 

Kirk Keyes

The plots Steve does are beautiful.
 


dpgoldenberg

I have done quite a bit of this sort of curve fitting in my work as a biochemist, and I agree with everything that Alan Rockwood has said, assuming that I understand the intentions of the original poster.

Here is a function that might be useful:

D = a1 + a2/(1 + a3*exp(-a4*X))

D = density
X = exposure
a1, a2, a3, and a4 are adjustable parameters that define the shape of the curve.

"exp(-a4*X) " means e (the base for natural logarithms) raised to the power -a4*X. There is no critical reason to use e: 10 or 2 or any other positive number would work as well, but the values of the fit parameters will be different. As the function is written, a4 should be positive.)


This function describes a symmetrical "s-shaped" curve that seems to be a reasonable approximation of a film characteristic curve. So far as I know, there is no theoretical basis for using this function for this application, but it appears in various forms in biochemical applications.

The parameters have the following significance:

a1: The extrapolated density at very low exposure.
a1+a2: The extrapolated density at very high exposure.
a2*a4/4: The maximum slope of the curve, found at the mid-point where D = a1 + a2/2.
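For anyone who would rather script the fit than use a packaged program, here is a minimal sketch in Python with SciPy (the sample data are invented; substitute your own measured exposure/density pairs):

import numpy as np
from scipy.optimize import curve_fit

def char_curve(x, a1, a2, a3, a4):
    # D = a1 + a2/(1 + a3*exp(-a4*X)), as defined above
    return a1 + a2 / (1.0 + a3 * np.exp(-a4 * x))

x = np.array([0.0, 0.3, 0.6, 0.9, 1.2, 1.5, 1.8, 2.1, 2.4])
d = np.array([0.12, 0.16, 0.25, 0.42, 0.68, 0.98, 1.28, 1.52, 1.68])

# Rough starting guesses: base+fog, density range, midpoint factor, steepness.
p0 = [0.1, 1.7, 20.0, 2.0]
a, _ = curve_fit(char_curve, x, d, p0=p0)
print(a)                  # fitted a1..a4
print(a[1] * a[3] / 4.0)  # maximum slope at the mid-point, a2*a4/4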

To use this, a program that can do a non-linear least-squares curve fit is necessary. One that I use a lot is Kaleidagraph: http://www.synergy.com/
It's commercial, though a free demo is available. A free plotting and curve-fitting program for the Mac is Plot: http://plot.micw.eu/ I haven't really tried this program, but it looks like it should work for this purpose.

I have tried fitting this function to some of my own film curves, and it seems to work well. I also tried to fit it to the sample data in Phil Davis's book, "Beyond the Zone System", and it worked well with that data as well. I'll try to upload a picture.



Using a fit curve might be a good way to analyze film and paper data, providing a means to estimate contrast and speed values in a more objective way that is less sensitive to small errors.

I hope this is useful to someone.

David
 

Attachments

  • btzs_sampleCurves.jpg

Michel Hardy-Vallée


Wow, I've been thinking about ways to do that for a long time, but never knew where to start (my programming skills are rusty). I nominate you for Most Useful Post on APUG in 2009!

I've read about quadrant analysis from the Kodak Encyclopedia of Practical Photography, and it was really the first time I understood something about tonal control. I've been meaning to try it out one day.

I'm especially interested in the flare quadrant. How did you measure it?
 

Photo Engineer

Well, I feel forced to repeat myself.

The 4 quadrant method is very good, but you must include the slope of the silver density at the wavelength(s) you are printing. This is critical to the analysis when doing Azo (or Lodima) printing and printing with negatives developed in staining developers.

This was discussed in a paper by W. T. Hanson from EK many years ago. I'll have to look it up, I guess. The feature to be concerned with is "the silver criterion".

PE
 

alanrockwood

David,

Your fits look really good. Based on those results it looks like your proposed fitting equation works at least as well as the integrated Gaussian curves I have used and discussed above. In fact, your function may even be better!

It looks like there is not much to improve upon, though in the figure it is a little hard to tell how good the fit is down in the corner of the toe. However, just for the sake of discussion, have you tried fitting with a seven-parameter equation that might look something like D = a1 + a2/(1 + a3*exp(-a4*X)) + a5/(1 + a6*exp(-a7*X)) to see if there is much improvement?

Alan
 
Stephen Benskin
I'm especially interested in the flare quadrant. How did you measure it?

They are generated using the standard exposure equation:

H = (q*L*t/A^2) + Ef

H = film plane exposure in lux-seconds
q = luminance equation constant (0.65)
L = scene luminance in nits
t = exposure time
A = lens f-number
Ef = flare exposure

L is based on a log 2.2 range starting at 100% reflectance. I've found the average illuminance to be 7680 footcandles. L for each step would then be the average illuminance times the antilog of the reflection density, divided by pi.

Average illuminance comes from the incident light meter calibration equation. I believe this also helps explain the Sunny 16 Rule.

A^2/T = I*S/C

Solving for illuminance:

I = (A^2 * C)/(T * S)

Since T = 1/125 and S = 125, the product T*S equals 1, so this reduces to

I = A^2 * C

A = f-number (f/16)
T = shutter speed (1/125)
I = illuminance in footcandles
S = film speed (125)
C = constant (30)
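The arithmetic above is easy to script; here is a minimal sketch in plain Python (variable names follow the post, but the code itself is only an illustration of the recipe, not anything from the original program):

import math

# Incident-meter equation: I = (A^2 * C)/(T * S). With T = 1/125 and
# S = 125, T*S = 1, which is why it reduces to I = A^2 * C.
A, T, S, C = 16.0, 1.0 / 125.0, 125.0, 30.0
I = (A ** 2 * C) / (T * S)
print(I)  # 7680 footcandles

# Luminance of a step, per the post's recipe: average illuminance times
# the antilog of the (negative) reflection density, divided by pi.
def step_luminance(illuminance, reflection_density):
    return illuminance * 10.0 ** (-reflection_density) / math.pi

# Film-plane exposure with flare: H = (q*L*t/A^2) + Ef
def film_plane_exposure(L, t, A, q=0.65, Ef=0.0):
    return q * L * t / A ** 2 + Ef

L = step_luminance(I, 0.0)  # the 100% reflectance step
print(film_plane_exposure(L, T, A))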

Attached is an example. This example was originally created to illustrate the influence of flare. It's in third-stop intervals, and when using 7680 footcandles for the average illuminance, the major exposure values fell between the steps, so the average illuminance was adjusted slightly, to 7200 fc, to help clarify the effect of flare.
 
Stephen Benskin

I've seen reproduction curves with far more than four quadrants, each one incorporating a different factor. It's all a question of how precise you need to be, or, for most of us, are able to be. For instance, before and after the reproduction-curve quadrant there should be something about the viewing conditions, such as surround, and psychophysical influences like eye adaptation in a lighted room, but is it practical to do?

Do you happen to have the name of the publisher of that Hanson paper?
 

Photo Engineer

Stephen, I think that it was never published outside of Kodak. I will try to get more information, but that is what I found when doing a search of published articles. I do have the curves for silver in visible (blue) and UV light though, and I have done the same work with color materials where the dyes and illuminant are changed. The UV/visible (red) data are in "The Physics of the Developed Image" by F. E. Ross and can be found on p. 83, where he discusses variations in contrast in prints due to variations in the exposing light. He does not use the words "silver criterion", which Hanson applied due to using it in color.

Generally, the conclusion was that the shorter the wavelength, the lower the contrast.

This goes back to an earlier post. We used carbon step wedges to eliminate this problem as it relates to the color of silver metal particles as a function of wavelength.

PE
 

Photo Engineer

I'm not sure. The article relates only to the printing contrast of film onto paper, showing how it varies as a function of the exposing light.

I've been looking for my copy of Hanson's paper and cannot find it. I probably put it in a safe place and forgot where it was. I'll keep looking.

PE
 

Kirk Keyes


'Tis certainly so, and hopefully anyone getting this advanced into sensitometry would know that.

It mostly means don't use the red channel when measuring film for AZO and the like.
 

Kirk Keyes

PE - or do you mean that the wavelength of the exposure through a silver-based step wedge has an effect beyond the sensitivity of the film to said wavelength? I think that's probably way beyond the ability of anyone here to measure, let alone reproduce consistently... We don't have four-decimal-place densitometers like you guys at Kodak had.
 

Kirk Keyes

Anyway, I'd recommend that people seriously interested in this subject get a copy of "Sensitometry for Photographers" by Jack Eggleston. Steve Benskin recommended it to me, and it's by far the most helpful book I've read on the subject.
 

Photo Engineer

Kirk;

If you expose a paper with blue/UV sensitivity to a silver step wedge using blue light, you will get one response due to the "hue" of the silver metal. Exposing again using UV light, you will get a different response due to the change in the "hue" of the silver metal. If you had a panchromatic paper and exposed it just to red light, you would get yet another response. According to the article I cited and others, contrast goes down with shorter exposing wavelengths for a single given step chart made of silver.

Hanson knew of this work years ago, and that is why we used compensation factors in our exposures, or we used carbon step wedges. And the amount appears to be quite substantial. I have measured the contrast vs. wavelength using both types of chart, but this was many years ago when I was first learning photo systems engineering at EK.

Still can't find the Hanson paper, which I now believe was an internal monograph.

PE
 

dpgoldenberg

Alan,
Here is an enlarged view of the toe region. As you can see, there are deviations from the fit curve. I should emphasize that the data I used are probably not based on real measurements. They are provided in Davis' book as an exercise in plotting. In general, a function like the one I suggested will probably do well in fitting the central part of the curve and show larger relative deviations in the toe and shoulder.
All of this makes me wonder how well the toe of a curve can be determined using the methods that most photographers have available. My guess is: not very well, which then leads me to wonder about the claims for developers that do miraculous things to enhance the toe region and increase film speed...

Your suggestion of a 7-parameter fit reminded me of a quotation attributed to John von Neumann: "With four parameters, I can fit an elephant. With five I can make him wag his trunk." I did try the function you suggested, but what happened is that the function was just partitioned into the two parts, with the parameters representing the relative contributions of the two parts (a2 and a5) ill-determined.

David
 

Attachments

  • btzs_sampleCurves_toe.jpg
Stephen Benskin
According to the Focal Encyclopedia of Photography -

Gamma-Lambda Effect
"For specific film and processing conditions, the change in gamma with the wavelength of the exposing radiation. Gamma is relatively small for short wavelengths (ultraviolet radiation and blue light) and at a maximum for wavelengths near the center of the visual spectrum, that is, in the green region."
 

Photo Engineer

I understand that, but are they talking about the light exposing the film or the light making the print?

I am exclusively referring to making the print.

The wavelength of the exposing light may alter contrast, but not the contrast of the image produced in terms of silver density.

PE
 

Michel Hardy-Vallée

Stephen, does your explanation mean that the flare curve is calculated, and not measured?
 

OldBikerPete

I may be a tad thick here, but for computer usage, why try to generate a characteristic curve at all? Why not just put the measured points in a lookup table and do a linear interpolation between the points?
Generating a characteristic curve may be necessary if you want four-figure accuracy or better, but for photographic use you don't need that sort of accuracy.
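That approach is only a few lines; here is a minimal sketch with NumPy (made-up numbers, just to show the mechanics of inverting the table to find a speed point):

import numpy as np

# Measured (log exposure, density) pairs for the lookup table.
log_e   = np.array([-2.0, -1.7, -1.4, -1.1, -0.8, -0.5, -0.2])
density = np.array([0.11, 0.12, 0.15, 0.22, 0.35, 0.55, 0.80])

# Density rises monotonically with exposure here, so the inverse table can
# be interpolated directly, e.g. to find the log-E giving 0.1 over base+fog.
target = density[0] + 0.1
speed_point = np.interp(target, density, log_e)
print(speed_point)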
 

Kirk Keyes


That was my suggestion at the start. We don't need to generate a three-part cubic spline function to find 0.1 over base+fog... That's way overkill.