Observation on Phil Davis' Sensitometry Primer Articles

Joined
Jan 7, 2005
Messages
2,615
Location
Los Angeles
Format
4x5 Format
Another thread had this link to some articles by Phil Davis.

http://btzs.org/Articles.htm

Within the collection was a series of articles published in PHOTO Techniques in 1988. I actually remember these, so they must have made an impact. As I was reviewing them, something that didn't make an impression on me back then caught my eye. It is in Sensitometry Primer Part 3: Film Testing Procedures. In the far left column on page 52, Davis references the Delta-X Criterion without mentioning its name. He writes that its method became the ISO film speed standard, and comments that using the 0.10 fixed density point to determine film speed will overexpose scenes with long luminance ranges and underexpose scenes with short luminance ranges. While not suggesting the use of the Delta-X Criterion, he mentions that the Fractional Gradient Method was basically a good idea.
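The fixed density method Davis criticizes is simple to state: the speed point is the exposure producing a density 0.10 above base-plus-fog, and the ISO arithmetic speed is 0.8/Hm (Hm in lux-seconds). A minimal sketch, using made-up curve data; the `log_h`/`density` arrays are illustrative, not measurements of any real film:

```python
# Fixed-density (ISO-style) film speed from a sampled characteristic
# curve. The speed point is where density = base+fog + 0.10; the
# arithmetic speed is 0.8 / Hm. Curve data below is illustrative only.
import numpy as np

def fixed_density_speed(log_h, density, base_plus_fog):
    """Interpolate the log exposure where density reaches
    base+fog + 0.10, then return the arithmetic speed 0.8 / Hm."""
    target = base_plus_fog + 0.10
    # np.interp requires increasing x-values; density rises with
    # log exposure over this sampled range, so it can serve as x
    log_hm = np.interp(target, density, log_h)
    hm = 10.0 ** log_hm              # lux-seconds at the speed point
    return 0.8 / hm

# Illustrative curve: log exposure (lux-seconds) vs. density
log_h   = np.array([-3.0, -2.6, -2.2, -1.8, -1.4, -1.0, -0.6])
density = np.array([0.10, 0.14, 0.30, 0.55, 0.82, 1.08, 1.32])

speed = fixed_density_speed(log_h, density, base_plus_fog=0.10)
print(round(speed))
```

Note that the 0.10 point above is fixed regardless of curve shape or development; the fractional gradient and Delta-X methods differ precisely in moving the speed point when the curve departs from the ISO triangle.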

Phil Davis knew about the Delta-X Criterion and its superiority over the fixed density method when processing differs from what the standard specifies. His fractional density method, which takes the ISO triangle, finds the curve that matches its contrast parameters, and determines the relative speeds of the other curves from that curve, is an attempt to produce speeds closer to the Delta-X Criterion for luminance ranges and development conditions different from the ISO triangle. What's interesting is that the PHOTO Techniques article, with its brief mention, is more detailed than what's in BTZS. All he writes in BTZS, 4th edition, is, "The ISO Speed point location and therefore the official film speed are not necessarily appropriate for use with any other subject range or development condition."

Something a reader could easily pass over, as evidenced by the many BTZS practitioners who still focus on the 0.10 fixed density method. He then goes into a multi-page explanation of how to use the fractional density method, which I can only assume people must follow on faith, since they are given no reason why it should be used. Is this a case of keeping the system simple at the expense of adequately explaining the concepts?
 
OP
Another article from the BTZS link is "In Defense of Testing." In this excerpt, Davis is clearly familiar with an argument that frequently appears in APUG threads.

"To avoid these unpleasant surprises, many photographers — especially those who choose to work in black-and-white with large-format cameras — test their materials in one way or another. Most follow the traditional zone system test methods that involve in-camera exposure of the test films, more or less arbitrarily assigned film development times, a standardized printing method, and eye-match appraisal of the print results.

When done with care these tests can provide general guidance for the field use of the materials, but these empirical methods are neither very reliable nor very efficient for a number of reasons: for example, there’s no convenient way to calibrate the individual increments of film exposure with any accuracy. In addition, although visual appraisal of print grays can provide some indication of the overall effect of the processes, it doesn’t permit very reliable analysis of the characteristics of the individual materials. In other words, we’re not likely to know for sure what we’ve done to the materials, nor can we know for sure how they’ve reacted. Finally, of course, this approach to testing is very wasteful of both time and materials.

Traditionalists defend this testing method — some vehemently — on the grounds that involving the camera in the test simulates the conditions of practical use and is, therefore, not only convenient but desirable. Similarly, they are apt to argue emphatically that, after all, the purpose of this whole thing is to produce prints, so appraising print values must therefore be the most appropriate way to judge the materials’ performance.

In fact, that’s a technical non sequitur. These traditional testing procedures can’t supply material-specific information any more than driving your car around the block can inform you about the comparative quality of your motor oil. You can obviously tell whether the car runs satisfactorily or not, but you can’t know for sure what part the oil has played in that performance. There are simply too many unrecognized or uncontrolled variables in the procedure; there is no accurate way to quantify the results of such subjective tests, and you have no logical basis for assuming that the conclusions drawn are valid."
 

JPJackson

Member
Joined
Nov 23, 2014
Messages
174
Location
NE TN
Format
Large Format
Thank you for this post. PD's articles and books are some of the most straightforward and pragmatic writings on this subject.

There are a few "jewels" to be found in the D-Max newsletters as well.

With the cost of used densitometers these days, there is much less reason not to follow this methodology.
 

ic-racer

Member
Joined
Feb 25, 2007
Messages
16,549
Location
USA
Format
Multi Format
His "Fractional Density" method should be checked against Delta-X or W. Shouldn't be hard to do.
-----
five minutes later
-----
Just one sample curve I pulled up (one where the spreadsheet calculates both the ASA triangle and W-speed) showed a Fractional Density [factor 9] that was close to the "Inertia Speed" and faster than the ISO and W-speeds. (I don't have a spreadsheet that calculates delta-X from a set of data, otherwise I would have checked that also.)
 
OP

That's interesting. Thanks for doing the test. We know W-speeds and Delta-X speeds are similar. The part about BTZS that always bugged me was how gimmicky some of it was. Davis created shortcut versions of fractional gradient and flare which can introduce inaccuracies into the results and leave the practitioner knowing only the gimmick and not the actual theory.

I have the 1st, 3rd, and 4th editions of BTZS. Based on a quick scan of the film speed sections, I think the fractional density method is supposed to reflect the fractional gradient method more than Delta-X. Delta-X is an easy method to implement in software, and a table of ΔX and ΔD values could have been included in BTZS for manual use. Why not use it? In the 1st edition, Davis doesn't mention fractional gradient, and he doesn't hint at Delta-X until the 4th edition, even though the 3rd edition was published five years after the Sensitometry Primer articles. Personally, I don't think he knew about Delta-X when he initially wrote BTZS. Why keep the fractional density method, with its arbitrary constant factor, when a better method was known? Maybe he couldn't make the necessary adjustments because his system was already ingrained in the minds of his readers and the change would have been too disruptive to the established system. Maybe he just thought the fractional density method was close enough and didn't warrant an adjustment.

I think it's like his use of "Subject Brightness Range." Sometime after the 1st edition was published, the term was replaced by "Subject Luminance Range," but Davis never updated it in either the books or the programs, even though he included a note in the later editions acknowledging the discrepancy (page 28, 4th ed.). SBR had become part of the BTZS terminology and was built into the program, so he continued to use it.

Maybe these are the types of questions every author has to answer: what to include and what to exclude, and how much information is necessary to adequately make the author's point. Ralph, I'd be very interested in hearing your perspective from writing your book.
 

ic-racer

The thing I never liked about BTZS was the speeds going all over the place. I don't do that in my own work; changing speeds creates too much havoc in my workflow. In addition, the BTZS method has the unintentional side effect of supporting the premise that 'push processing' increases film speed.
In real-life terms, my 'basic' development is for a full-range subject, for which I have a known meter setting (speed). For scenes with a shorter range of values, the BTZS system offers 'extra' speed, which I ignore.
Since one can safely ignore extra speed, there is no need to justify doing so. However, for the sake of discussion, one could make the case that film speed is an intrinsic property of the film and does not change with development (with exceptions). Likewise, 0.3G, Delta-X, and W speeds predict the same speed independent of development (with exceptions).


Figure 7 below is from the documents linked in Steve's original post, showing how the BTZS speed point (based on Fractional Density) increases with increased development.
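The trend in Figure 7 can be illustrated with a deliberately crude model: treat each development time as a straight-line curve of a different gamma above a common toe. The constants here (`log_h0`, the gamma values) are arbitrary, chosen only to make the trend visible; this is not Davis's calculation or real film data.

```python
# Toy illustration of why a fixed-density speed point rises with
# development: with a straight-line curve D = base+fog + gamma*(logH -
# logH0) above the toe, the point 0.10 above fog sits at
# logH0 + 0.10/gamma, so a higher gamma puts it at a lower exposure
# and yields a higher computed speed, though the emulsion is unchanged.
def toy_speed(gamma, log_h0=-2.0):
    """Arithmetic speed 0.8/Hm for the fixed-density point of a
    straight-line curve with slope gamma and toe at log_h0."""
    log_hm = log_h0 + 0.10 / gamma   # log exposure of the 0.10 point
    return 0.8 / (10.0 ** log_hm)

for gamma in (0.50, 0.62, 0.80):     # roughly N-1, normal, N+1
    print(f"gamma {gamma:.2f}: speed {toy_speed(gamma):.0f}")
```

The printed speeds rise monotonically with gamma, which is the behavior ic-racer objects to: the film hasn't changed, only the development has.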

 
ic-racer
Just to flesh out the argument, one might ask how I would deal with the condition where the contrast index is very low and BTZS prescribes a loss of speed. My contention is that unless one is including light sources in the scene, the range of exposure at the film plane never gets high enough (due to flare) to justify such a shallow gradient.
 