Denverdad
This is a follow-up to the earlier thread (the link to which no longer exists) where we discussed the claim that processing old film at low temperature and high developer concentration helps to reduce fog. Since no one was able to point to any scientific data supporting that claim, I have been on something of a quest to determine empirically for myself whether this technique actually makes any difference. After a great deal of experimentation I finally have what I think are some good data, and I am looking forward to getting some feedback on my results and conclusions.
But first let me explain my test methodology. I'm using a modified version of the clip test described here, which is a method often suggested for estimating development times when dealing with old or unknown rolls of film. A key change I make is to cut the strip from the roll in the dark, then cover half of the strip lengthwise before exposing it to room light, so as to protect that half from being exposed. I then proceed with the rest of the stepped-development sequence in the dark. My thinking is that the one side develops to a saturated density (i.e., Dmax) as before, while any density on the unexposed side reflects fog only. Consequently it should be possible to determine the relative fog level as a function of development time, temperature, etc. after performing some densitometry on the strip. For reference, here is what such a strip (cut from the end of a roll of 120 film) looks like following development:
In this picture the top and bottom of the strip were the unexposed and exposed halves, respectively.
To test for the claimed temperature-fog effect with this method I cut two such strips from the roll, processing one at 68F and the other at a much colder temperature (typically 40F) and high concentration. The basis of comparison for these measurements is the level of fog which arises when the film is developed to a given density (on the exposed part). As such I will show plots that are simply the density of the unexposed side vs. the density of the exposed side, with the resulting curves revealing how the fog varies with increasing development. By plotting the data for the warm and cold developments on the same plot, one should be able to see at a glance any differences in fog attributable to the development temperature.
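For anyone who wants to try this kind of comparison themselves, here is a minimal Python/matplotlib sketch of how such a plot can be assembled from densitometer readings. The density values in it are made-up placeholders chosen only to show the layout; they are not my measured data.

```python
# Minimal sketch of a fog-vs-density comparison plot built from densitometer
# readings. The numbers below are placeholders to illustrate the layout only.
import matplotlib.pyplot as plt

# Paired readings taken at each step of the stepped development:
# (density of exposed half, density of unexposed half)
warm_68F = [(0.8, 0.30), (1.4, 0.45), (1.9, 0.60), (2.3, 0.75)]  # placeholder values
cold_40F = [(0.8, 0.25), (1.4, 0.35), (1.9, 0.45), (2.3, 0.55)]  # placeholder values

def split(pairs):
    """Separate (exposed, unexposed) density pairs into two lists."""
    exposed = [d for d, _ in pairs]
    fog = [f for _, f in pairs]
    return exposed, fog

fig, ax = plt.subplots()

# Diagonal reference: unexposed density equal to exposed density,
# i.e. a completely fogged piece of film (the maximum possible fog).
dmax_limit = max(d for d, _ in warm_68F + cold_40F)
ax.plot([0, dmax_limit], [0, dmax_limit], "k--", label="completely fogged")

for label, pairs in [("68F, normal dilution", warm_68F),
                     ("40F, high concentration", cold_40F)]:
    exposed, fog = split(pairs)
    ax.plot(exposed, fog, "o-", label=label)  # connecting line only leads the eye

ax.set_xlabel("Density of exposed half")
ax.set_ylabel("Density of unexposed half (fog)")
ax.legend()
plt.show()
```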
There are quite a few other technical details to discuss regarding this process, but for the sake of moving forward and showing some results, here are the plots I've generated for three different rolls of film - 1950 Verichrome, 1962 Verichrome Pan, and 2013 Tri-X, respectively:
Some general comments and observations: In these plots the diagonal line provides a reference for the maximum possible fog - basically what you get if the densities of the exposed and unexposed sides are equal, i.e., a completely fogged piece of film. Measured data points should all fall below this line, with a lower value of course meaning less fog. (Note that the curved lines through the data are there only to lead the eye.) Of the three films, the 1950 Verichrome clearly developed to the highest overall fog level. In contrast, the Tri-X had low levels of fog which increased only slightly with increasing development. This is what would be expected for fresh film, and in fact I threw in the Tri-X specifically to provide a sort of experimental control to ensure that the results were consistent. The middle plot of the 1962 Verichrome Pan reveals surprisingly low fog for a film this old, which supports the reputation VP has for surviving very well over time.
So finally, after all that, what's the answer to the original question - does development at cold temperature and high concentration actually reduce fog? In each of the cases shown the fog density was in fact generally lower following the cold development process than the warm one, so I'm going to go out on a limb and give a tentative YES to that question! I'm sure there will be questions about how much of a difference this really makes, since the gap looks so small, especially in the last two plots. But keep in mind that density is measured on a logarithmic scale, so a small difference can still be significant.
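To put that in perspective, here is a quick back-of-the-envelope calculation. Density is log10 of opacity, so a fog difference of delta D corresponds to a factor of 10^(delta D) in transmitted light. The delta values below are just examples, not my measured differences.

```python
# Illustration of why a small density difference still matters: density is
# log10(opacity), so a fog difference of delta_D means a factor of 10**delta_D
# in the light actually transmitted through that area of the negative.
for delta_D in (0.05, 0.10, 0.20):  # example values, not measured differences
    ratio = 10 ** delta_D
    print(f"delta D = {delta_D:.2f}  ->  {ratio:.2f}x the transmitted light "
          f"(about {(ratio - 1) * 100:.0f}% more)")
```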
Any thoughts?
Jeff