That's a reasonably good anecdotal test but - since you insist on rigor - certainly not fully "controlled", for several reasons at the very least:
- Changing shutter speeds to produce the different exposure combinations usually introduces some speed error unless you used a known, calibrated electronic shutter.
- You selected "the exposures that gave the best prints". Even assuming you viewed the prints at the same distance and angle under identical lighting (which a controlled test would demand), the determination still comes down to your judgment of what is "best". Perfectly reasonable - they're your prints - but hardly a rigorous or objective test.
- The only formal way I know of to test all this is with a transmission densitometer for the negatives and a reflection densitometer for the prints. That is the way to verify that the negatives and prints were processed to the same contrast index. But even then, the print is always interpreted by the printer, so - again - which print is "best" remains a subjective judgment.
- Beyond that, CI doesn't tell the whole story. Different film/developer combinations and development schemes handle mid-tones rather differently, and mid-tone rendering strongly influences how most people judge a print's quality.
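As an aside, for anyone curious what "same contrast index" means in practice: Kodak's Contrast Index has a precise geometric definition, but a simpler average-gradient (g-bar) calculation from step-wedge densitometer readings illustrates the idea. This is my own illustrative sketch, not a standard tool, and the density cut-points are common but arbitrary choices:

```python
# Hedged sketch: compute an average gradient (g-bar) for a film's
# characteristic curve from densitometer readings of a step wedge.
# (Kodak's actual Contrast Index is defined differently, via arcs
# measured along the curve; this is a simpler stand-in.)

def average_gradient(log_e: list[float], density: list[float],
                     d_lo: float = 0.1, d_hi: float = 1.0) -> float:
    """Slope of the D/log E curve between densities d_lo and d_hi
    above base+fog. The 0.1/1.0 cut-points are conventional choices."""
    base_fog = min(density)

    def log_e_at(d_target: float) -> float:
        # Linear interpolation: find log E where density crosses the target.
        t = base_fog + d_target
        pts = list(zip(log_e, density))
        for (e0, d0), (e1, d1) in zip(pts, pts[1:]):
            if d0 <= t <= d1:
                return e0 + (t - d0) / (d1 - d0) * (e1 - e0)
        raise ValueError("target density outside measured range")

    e_lo, e_hi = log_e_at(d_lo), log_e_at(d_hi)
    return (d_hi - d_lo) / (e_hi - e_lo)
```

Feed it matched lists of step-wedge log exposures and measured densities; two negatives (or a negative/print pair, with the appropriate densitometer) developed "the same" should come out with closely matching gradients.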
I would suggest that your tests were "controlled" only in the sense that you used a more-or-less constant process, with the limitations noted above. In reality, you came to a good sense of the data exactly as our OP did here. More specifically, you found out what you preferred, not an objective performance measure.
(And here is my "shape of the data"...)
I have tested dozens of film/developer combinations and measured the outcomes on a densitometer. Virtually every one puts Zone I at the requisite density above FB+F at an EI of 1/2 box speed, and preserves the highlights for a normal SBR with about a 20% reduction in development time. I do this with a temperature-compensating development timer I created, with built-in tables that adjust the timer's speed based on the actual developer temperature measured during processing.
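For illustration, the compensating-timer idea can be sketched roughly like this. The multiplier values below are made-up placeholders, not my actual tables - real factors come from the film/developer datasheet or your own testing:

```python
# Illustrative sketch of a temperature-compensating development timer.
# COMP_TABLE values are hypothetical placeholders, NOT real calibration data.

# Measured developer temperature (deg C) -> development rate multiplier
# relative to the nominal 20 C time. > 1.0 means development runs faster,
# so the timer should run faster (i.e., the clock time needed is shorter).
COMP_TABLE = {
    18.0: 0.85,
    19.0: 0.92,
    20.0: 1.00,
    21.0: 1.09,
    22.0: 1.18,
}

def rate_multiplier(temp_c: float) -> float:
    """Linearly interpolate the rate multiplier for a measured temperature,
    clamping to the table's endpoints outside its range."""
    temps = sorted(COMP_TABLE)
    if temp_c <= temps[0]:
        return COMP_TABLE[temps[0]]
    if temp_c >= temps[-1]:
        return COMP_TABLE[temps[-1]]
    for lo, hi in zip(temps, temps[1:]):
        if lo <= temp_c <= hi:
            frac = (temp_c - lo) / (hi - lo)
            return COMP_TABLE[lo] + frac * (COMP_TABLE[hi] - COMP_TABLE[lo])

def compensated_time(nominal_s: float, temp_c: float) -> float:
    """Clock time to develop at temp_c for a time quoted at 20 C."""
    return nominal_s / rate_multiplier(temp_c)
```

A real timer would re-read the thermometer periodically and integrate the multiplier over elapsed time rather than assume a constant temperature, but the table-lookup core is the same.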
This is so consistent that I don't bother with densitometric testing anymore when faced with a new film/developer combination. I just assume it holds and check the first negatives for proper shadow detail and highlight preservation. Even if I am somewhat wrong, it just doesn't matter: if you are close, the combination of film latitude and split-grade VC printing techniques pretty much always guarantees that a technically decent print can be made. Once the first negatives are in, I can tune the EI and/or development time accordingly. I much prefer this to endless cycles of testing gray targets in open shadow.
After 20+ years of working this way, I realized that I still hated a lot of my prints because of a lack of mid-tone contrast separation. This is a big problem, especially with long SBRs, where N- processing of some sort has been the recipe for many decades. The reduced development - even with an increase in EI - not only compresses the highlights as desired, it also clobbers mid-tone contrast. That realization is what sent me down the high-dilution/low-agitation/very-long-development rabbit hole, but that's a story for another day.
I'm glad you found a way to home in on the prints you like. But it's a fallacy to assume that people who don't work that way are sloppy, or that their results are irrelevant. Especially in an electronically mediated world, where the currency of the realm is a glowing monitor image, the "shape of the data" is about all any of us can share - even for those of us willing to actually post our images ...
Vita brevis, ars longa, occasio praeceps, experimentum periculosum, iudicium difficile
("Life is short, the art long, opportunity fleeting, experiment dangerous, judgment difficult")
- Hippocrates
Aut viam inveniam aut faciam
("I shall either find a way or make one")
- Hannibal