Before commenting, I assume you are referring to multigrade papers and have taken into account the spectral responses of the emulsions... Please confirm and clarify.
Seems like a reasonable solution. The true test is to make some prints and look at them, comparing the two methods.
I was hoping you'd graph the problem. Not really sure where you got the 0.07 from. LER is based on 0.04 over paper base plus fog. Anyway, there's a paper by Jones from the mid-forties that evaluates how to match a negative to a paper. It's been an awful long time since I've read it, but Jones concluded, “because of the influence of the brightness distribution and subject matter in the scenes photographed, an accurate prediction cannot always be made of the exposure scale (Log Exposure Range) of the paper which will give a first-choice print from a negative of known density scale (Density Range)… But what other course is there to follow? Either we must make the best of a somewhat imperfect relationship or face the prospect of having no criterion whatever for choosing the paper contrast grade.”
The problem here is defining one point as 0.9 of Dmax. If the point were defined as 1.7 OD, there wouldn't be much of a problem. Anything above 1.5 OD is pretty much wasted when a print is viewed in normal room illumination.
Okay, here's a diagram showing HD curves of two papers having the same contrast, but different Dmax. We see that the differing Dmax values result in different LER values, resulting in different grades, though the contrast is the same.
[Attachment 322510: HD curves of two papers with the same contrast but different Dmax]
The multiplier of 0.07 is the fractional distance along the density range for zone VII, assuming Dmax=2.1, and was computed as follows:
frac = (density - Dmin)/(Dmax - Dmin) = (0.19 - 0.05)/(2.1 - 0.05) ≈ 0.068
BTW, I goofed in my OP. The other multiplier of 0.10 (for zone III) should be 0.24.
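Spelled out in code, here's a minimal sketch of the same arithmetic, using the values assumed above (Dmin = 0.05, Dmax = 2.1) and the zone VII/III print densities of 0.19 and 1.61 mentioned later:

```python
# Fractional positions of the zone VII and zone III densities within the
# paper's density range (assumed values: Dmin = 0.05, Dmax = 2.1).

def frac_above_dmin(density, dmin, dmax):
    """Fraction of the density range lying between Dmin and the given density."""
    return (density - dmin) / (dmax - dmin)

def frac_below_dmax(density, dmin, dmax):
    """Fraction of the density range lying between the given density and Dmax."""
    return (dmax - density) / (dmax - dmin)

dmin, dmax = 0.05, 2.1
print(round(frac_above_dmin(0.19, dmin, dmax), 3))  # ~0.068 -> the 0.07 multiplier (zone VII)
print(round(frac_below_dmax(1.61, dmin, dmax), 3))  # ~0.239 -> the corrected 0.24 multiplier (zone III)
```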
I'm not sure if I'm going to say this right. It's not the best idea to mix the Zone System with sensitometric testing. Do the sensitometry first and then add Zones as a reference.
@albada I like your system. It lets you work out a practical way of getting optimal results. Please, allow me to share the system that's currently implemented in my program. Perhaps it will help figure out the apparent problem you mentioned in your opening post.
It's based on the Darkroom Automation system (projection-printing test samples and measuring illuminance levels with an on-easel exposure meter), but it's not identical. It's a system that allows practical assessment of a paper's response to exposure and how it relates to both paper grades and Zone System zones, and of course, film performance. Yes, I know that it's, technically, not a good idea to mix the Zone System in, but a lot of photographers do use the Zone System on a daily basis, and it can, at least for my purposes, be useful to include it. It all depends on what one is after.
If one wants to work strictly within sensitometric theory, then the approach below is not ideal. However, if one wants to evaluate paper performance practically, then it works nicely, at least for me. I also want to point out that the goal here is to come up with paper grades empirically, rather than to comply with an a priori definition of paper grades. As far as I know, the ISO standard does not include paper grade definitions. Also, the 0.09 value comes from 0.04 over B+F, with the densitometer zeroed on the reflection plaque. You'll see that the Oriental Seagull VC FB Glossy does not have a perfectly linear response to Ilford's under-the-lens filters. This could be easily remedied by means of a dichroic enlarger head (or a modern LED-based system) and customized filtration, which my program helps figure out.
This system should work nicely with a precise, LED-based enlarging system, like the one @albada has. It also works best with the f-stop timer method, but can be adapted to linear timers.
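For anyone adapting it to a linear timer, the conversion is just the usual doubling per stop; here's a minimal sketch (the 10 s base time is an arbitrary example):

```python
# Converting stop-based exposures to seconds for a linear timer.
# The base time of 10 s is illustrative only.

def stops_to_seconds(base_seconds, stops):
    """Each full stop doubles (positive) or halves (negative) the exposure time."""
    return base_seconds * 2 ** stops

base = 10.0
for stops in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(f"{stops:+.1f} stops -> {stops_to_seconds(base, stops):.1f} s")
```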
Currently, my program computes the relevant parameters and outputs an Excel spreadsheet with three sheets embedded in it: (1) paper speed table, (2) paper grades table, and (3) paper contrast table. Please, let me know if this could be useful to you, and if not, how you'd like it changed/adapted to your system. I will be glad to add whatever analysis type is required by darkroom practitioners.
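For anyone curious, a three-sheet .xlsx file like that can be produced with something like the sketch below, using pandas; the column names and values are placeholders, not my actual data, and this isn't necessarily how my program does it:

```python
# Minimal sketch of a three-sheet .xlsx export with pandas (requires openpyxl).
# Column names and values are placeholders, not real calibration data.
import pandas as pd

speed    = pd.DataFrame({"Filter": ["00", "0", "1"], "Relative speed": [1.00, 0.95, 0.90]})
grades   = pd.DataFrame({"Filter": ["00", "0", "1"], "Grade": [0.0, 0.5, 1.0]})
contrast = pd.DataFrame({"Filter": ["00", "0", "1"], "LER": [1.60, 1.45, 1.30]})

with pd.ExcelWriter("paper_calibration.xlsx") as writer:
    speed.to_excel(writer, sheet_name="Paper speed", index=False)
    grades.to_excel(writer, sheet_name="Paper grades", index=False)
    contrast.to_excel(writer, sheet_name="Paper contrast", index=False)
```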
It looks like Photrio won't let me attach an .xlsx file. If you'd like to look at it, please DM me. Sorry about that!
EDIT: I created 3 images from the xlsx file
[Attachments 322636, 322637, 322638: paper speed table, paper grades table, and paper contrast table]
It's a kind of hybrid method, which includes both existing standards, the Zone System, and personal/practical tools. Yes, it involves measuring illuminance levels on the easel and looking up the required contrast, exposure, base, dodging, burning, split-grade printing, etc. My program generates a spreadsheet that can be printed out and pinned to a darkroom wall, along with a timer sequence to make "chips" that show the exact tone one's going to get by following the data. It looks kind of like that. There are three sheets inside the document with different looks at the same data, so to speak. I just looked over the files in the Darkroom Automation website, and I see the similarities. In both systems, you (1) measure the illuminance of an element on the easel, (2) look up an exposure on a graph based on desired grade and zone, and (3) subtract the two numbers to get the time (and/or f-stop adjustment). Or did I misunderstand?
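If I've got those three steps right, the arithmetic amounts to something like this sketch, working entirely in stops (the lookup values are invented placeholders; real ones would come from a paper calibration):

```python
# Sketch of the measure / look up / subtract workflow, all in stops.
# required_stops[(grade, zone)] would come from a paper calibration;
# the numbers here are invented placeholders.

required_stops = {
    (2, 3): 5.0,   # grade 2, zone III (hypothetical)
    (2, 6): 3.2,   # grade 2, zone VI
    (2, 7): 2.6,   # grade 2, zone VII
}

def exposure_adjustment(meter_reading, grade, zone):
    """Stops of exposure change needed to place the metered element on the zone."""
    return required_stops[(grade, zone)] - meter_reading

print(exposure_adjustment(3.7, 2, 6))   # -0.5 stops for a reading of 3.7
```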
Cool! With the DA timer, you can store more than one element in memory and program a sequence based on them, but it's not exactly what I think you're describing. With the LED/microcontroller system, software performs steps (2) and (3). In effect, I tell the controller, "An element measures 3.7; place it on zone 6." And it does so, using the current contrast and paper-type. And capability #2 is to place two elements on two zones, setting both exposure and contrast. I don't think anyone has done this before.
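In rough terms, the two-point idea is something like the sketch below: pick the grade whose zone spacing matches the measured spread between the two elements, then set exposure from either one. The stops-per-zone numbers are made up for illustration, not taken from any real paper, and this is only a rough outline of the idea, not the actual firmware.

```python
# Sketch of placing two metered elements on two zones by choosing both grade
# and exposure. The stops-per-zone table is a hypothetical placeholder.

stops_per_zone = {0: 0.85, 1: 0.75, 2: 0.65, 3: 0.55, 4: 0.45, 5: 0.35}

def choose_grade(m1, z1, m2, z2):
    """m1/m2: metered values in stops; z1/z2: target zones for those elements."""
    needed = abs(m1 - m2) / abs(z1 - z2)     # stops per zone the scene demands
    return min(stops_per_zone, key=lambda g: abs(stops_per_zone[g] - needed))

# Element A meters 3.7 and should fall on zone VI; element B meters 1.2 and
# should fall on zone III. Exposure would then be set from either element,
# as in the single-point lookup sketched earlier.
print("Grade:", choose_grade(3.7, 6, 1.2, 3))
```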
That's exactly my motivation, too. I want a tool that's based on cogent principles but is practical and easy to use. Like you, we want a system that's easy to use and practical. That means using zones because they provide us with an intuitive gauge of density, unlike log-densities or Munsell values. However, zones are not needed to calculate grades, which is the topic of this thread.
When I re-read my post, it seems to say that there's no ISO standard for paper grade. That's not exactly what I meant. I meant there's no ISO standard for paper grade filtration. So it's more useful to me to figure out what my paper grades are based on my hardware and materials, rather than trying to force my setup to conform to an ISO (or other) standard. It's just a personal preference. Nothing wrong with either approach. BTW, your middle graph has steep slopes at both ends. I guess those are a consequence of the zone-densities you're using. Could you post your zone-densities, and how you got them?
Your right graph is ISO grade. How are you computing those from log range? Are you using the cubic polynomials in Way Beyond Monochrome? Or something else?
When programming my LED-head controller, I discovered that if a paper has a low Dmax, its grade is higher than a paper having a normal Dmax and (importantly) the same contrast in the midtones. Same contrast; different grade. We have a problem.
This anomaly occurs because grade is based on LER, which is the log exposure range corresponding to densities 0.09 and 0.9*Dmax. Because the paper's Dmax is low, its LER is lower, causing its computed grade to be higher.
This is a problem with an LED controller because if you specify grade 2 (for example) for your print, you'll get different contrasts for Ilford (Dmax=2.10) vs Foma (Dmax=1.87). In effect, the term 0.9*Dmax lets Foma cheat and claim higher contrast than it actually achieves.
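Here are toy numbers showing the effect, assuming (for simplicity) a straight-line curve with the same gradient for both papers; the gradient of 2.0 is arbitrary:

```python
# Toy illustration of the anomaly: identical mid-tone gradient, different Dmax.
# The curve is treated as a straight line from density 0.09 up to 0.9*Dmax;
# the gradient of 2.0 is an arbitrary assumption.

def ler(dmax, gradient=2.0, d_low=0.09):
    """Log exposure range between density d_low and 0.9*Dmax on a straight line."""
    return (0.9 * dmax - d_low) / gradient

for name, dmax in (("Ilford", 2.10), ("Foma", 1.87)):
    r = ler(dmax)
    print(f"{name}: LER = {r:.2f} (ISO R about {100 * r:.0f})")
# Same gradient, but the lower-Dmax paper reports a smaller LER and therefore a
# higher computed grade.
```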
In order for different papers to print at the same contrast (and thus look the same) for the same grade, it seems to me that grade should be based on the slope of the central portion of the paper's HD curve. I see two ways to do this:
* Determine LER between fixed densities of zones VII to III (i.e., densities of 0.19 and 1.61). This solution assumes 1.61 is off the shoulder for all papers. Grade can be determined by LER as in the past, but the thresholds for grades would change.
* Determine LER between variable densities computed as fractions involving Dmin and Dmax, and base grade on the slope, as illustrated below.
[Attachment 322502: HD curve diagram illustrating the fractional-density endpoints described below]
The constants 0.07 and 0.10 match zones VII and III when Dmax=2.1, but we can use 0.10 or somesuch for both -- I doubt it matters. Grade would be determined by slope and not by LER.
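In code form, the two options might look like the sketch below; the (log exposure, density) pairs are invented, and numpy interpolation just stands in for however one reads the measured curve:

```python
# Sketch of both proposals against a measured curve. The sample data are
# invented for illustration.
import numpy as np

log_e = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4])
dens  = np.array([0.05, 0.10, 0.35, 0.80, 1.30, 1.75, 2.00, 2.10])

def log_e_at(d):
    """Log exposure that produces density d (curve must be monotonic)."""
    return np.interp(d, dens, log_e)

dmin, dmax = dens.min(), dens.max()

# Option 1: LER between fixed densities (the zone VII and zone III aim points).
ler_fixed = log_e_at(1.61) - log_e_at(0.19)

# Option 2: slope of the central portion, with endpoints set as fractions of
# the density range (0.10 used at both ends, as suggested above).
d_lo = dmin + 0.10 * (dmax - dmin)
d_hi = dmax - 0.10 * (dmax - dmin)
slope = (d_hi - d_lo) / (log_e_at(d_hi) - log_e_at(d_lo))

print(f"fixed-density LER: {ler_fixed:.2f}   central slope: {slope:.2f}")
```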
I believe this anomaly has not been a problem in the past because most people use tungsten lamps with filters, and as long as Foma yields the same visual contrast as Ilford with the same filters, users are happy. But when we apply the standard definitions to make LER (and thus grade) the same between papers via a microcontroller, the papers don't respond with the same contrast. I think this problem will become more serious as LEDs become more popular.
Your thoughts about this problem and my proposed solutions?
@albada I went back and re-read the Way Beyond Monochrome paper calibration chapter. Their approach is solid, and so is their data. If you implement their approach, you're going to get good results. It's different from what I do and what DA does, but, in principle, the results should be very similar. It's a terrific resource.
With all due respect to Ralph, while most of the chapter is correct, the Subject Zone Scales in figs. 1a-1c, for example, do not incorporate camera flare and therefore don't represent reality. Not a big deal if you are illustrating an idea, but if you are working with a tone reproduction diagram or program, your results will be skewed.
I was actually referring to a different chapter in the book. In my copy, it is on pp. 59-76. I only mention that to avoid confusion.
I hope this is not off topic, but I'd like to ask @Stephen Benskin whether you change anything in your analysis when you add the Zone System display along the top. You showed a plot at some point with the zone rectangles along the top of Q1 or in-between Q1 and Q4. Is it just a "cosmetic" thing to include the zones in your tone reproduction cycle, or are you adjusting your analysis to account for the zones?
May I ask if you built your own LED enlarger head and a timer/analyzer/controller? That would be beyond awesome. I was meaning to start a project to design an f-stop timer/analyzer when DA was going to stop making his, but now that they're available again, I don't feel motivated. It would be a pretty big undertaking.
This is beyond cool. My sincere congratulations. It looks like a very capable device. Yes, I built both an LED lamp and a controller for it. Here's the controller:
Basic features include changing brightness of red, green, and blue LEDs (top row of display), and time, base/dodge/burn/untimed modes, and grade (bottom row of display).
Sophisticated features include various ways of setting exposure (ChgExp button), generating and analyzing test-strips (Strip button), a development-process timer with temperature-compensation (Process button), and misc. settings (Menu button).
More features include white-light for focusing, composing, and metering (White button), and red light for positioning a dodge/burn tool before starting exposure (Red button).
In any display, you can change a number by selecting it with the corresponding white button and turning the "Numeric" knob.
Okay, now I understand it better. The spreadsheet I showed earlier, along with the chips, is exactly what you're after. It doesn't have to be used that way, but you can calibrate it so that each paper produces roughly the same shade of gray with its unique filtration and exposure. Given the lack of standardization of filtration, it's something that one needs to build from the ground up. The difference, I think, is that it standardizes on the tone, not the grade. It's probably just a philosophical difference, such that the question becomes "It would be nice if a given tone looked as similar as possible on different papers, with exposure and filtration automatically computed by the controller," rather than the question you asked, if I am understanding it correctly. The "Grade" item in the display is what this thread is about. It would be nice if a given grade looked as similar as possible on different papers having different maximum/minimum densities.
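In code terms, standardizing on the tone might look something like the sketch below; the two curves are invented placeholders for measured calibration data, not how my program is actually written:

```python
# Sketch of "standardize on the tone": for each paper, find the relative log
# exposure that produces the same target print density. Curves are placeholders.
import numpy as np

papers = {
    "Paper A": (np.array([0.0, 0.3, 0.6, 0.9, 1.2]),        # log exposure
                np.array([0.06, 0.30, 1.00, 1.80, 2.10])),  # density
    "Paper B": (np.array([0.0, 0.3, 0.6, 0.9, 1.2]),
                np.array([0.08, 0.35, 1.05, 1.70, 1.87])),
}

target_density = 0.75   # the shade of gray we want on every paper

for name, (log_e, dens) in papers.items():
    needed = np.interp(target_density, dens, log_e)
    print(f"{name}: relative log exposure {needed:.2f} for density {target_density}")
```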