markjwyatt
The word that describes what you're talking about in point 1 is "telecentricity" (not collimation); see https://en.wikipedia.org/wiki/Telecentric_lens#Image-space_telecentric_lenses The detectors used in digital cameras are more sensitive and show fewer artifacts when light arrives near-normal to the detector rather than at a glancing angle. Film doesn't care as much. It is not clear to me how much of this is a surface effect in the detector and how much is the Bayer color filter array. I think a lot of it might be the filter array, because this is related to why some lenses have more purple fringing. Anyway, as film_man said, it's worst for lenses whose rear element sits very close to the film plane, like RF lenses on mirrorless. Even old wide-angle lenses for SLRs have to be retrofocus, so they're closer to telecentric already and suffer less from this issue.
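For a rough sense of the angles involved, here is a back-of-the-envelope sketch (Python) of the chief-ray angle at the corner of a full-frame sensor as a function of exit-pupil distance. The two pupil distances are made-up round numbers chosen for illustration, not measurements of any particular lens:

```python
import math

# Illustrative numbers only: a full-frame sensor (about 21.6 mm half-diagonal)
# and two hypothetical exit-pupil distances -- a short one, as for a lens whose
# exit pupil sits near the film plane, and a longer one, as for a retrofocus
# design. Neither is a measurement of a real lens.
half_diagonal_mm = 21.6

for exit_pupil_mm in (30.0, 90.0):
    # Chief-ray angle at the extreme corner of the frame
    angle_deg = math.degrees(math.atan(half_diagonal_mm / exit_pupil_mm))
    print(f"exit pupil {exit_pupil_mm:5.1f} mm -> corner incidence {angle_deg:4.1f} deg off-normal")
```

The shorter the exit-pupil distance, the more obliquely the corner rays hit the detector, which is why the near-the-film-plane designs are hit hardest.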
Optical design has indeed advanced greatly since the 1970s, but even then they were using computer optimization. In the lensrentals blog linked above, the lens they disassemble is a Canon RF 100-500/4.5-7.1 with autofocus, internal focus, and image stabilization, and it uses 20 elements in 14 groups, including 6 low-dispersion elements and 1 "super-ultra-low dispersion." Obviously, such a lens would be unthinkable in the 70s, and I'm sure it performs much better than the monster-size super telezooms of the late 70s or so. Interestingly, though, I think it's still the same basic zoom design of roughly four moving groups, shown in https://www.pencilofrays.com/lens-design-forms/#zoom
But if you look at a 50mm f/1.8 lens today, it's not really that different from a 50mm f/1.8 of the 70s: six elements based on a double Gauss design. It doesn't need to be grossly different; maybe the latest version is slightly improved, but it's the same general design. The people who designed those lenses in the 70s had an intuitive understanding of how to optimize a design, in addition to computer programs that look primitive now but were advanced for their day. A post like this from the Nikkor "Thousand and One Nights" series gives some non-technical insight into their process: https://imaging.nikon.com/history/story/0060/index.htm
Thank you. It is not quite as simple as I thought. I was thinking more of this type of pixel model, but that article is more about electron wells than photons: https://cloudbreakoptics.com/blogs/news/astrophotography-pixel-by-pixel-part-1
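In the spirit of that article, here is a toy pixel model (Python) showing the electron-well side of things: photons are converted to electrons at some quantum efficiency, and the well clips at its full-well capacity. The QE and full-well numbers are made-up illustrative values, not specs of any real sensor:

```python
# Toy pixel model: photons in, electrons out, with a hard ceiling at full well.
# QE = 0.5 and full_well = 30_000 are made-up illustrative values.
def pixel_signal(photons: float, qe: float = 0.5, full_well: float = 30_000) -> float:
    electrons = photons * qe          # photon-to-electron conversion
    return min(electrons, full_well)  # the well saturates (clipped highlight)

print(pixel_signal(10_000))    # 5000.0 -- well below saturation
print(pixel_signal(100_000))   # 30000  -- clipped at full well
```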
This image also illustrates it for a CMOS sensor: https://www.svs-vistek.com/svs-images/news/svs-newscont-pixel_with_microlens-id261-511.png (article: https://www.svs-vistek.com/en/news/svs-news-article.php?p=shading-with-cmos-cameras), which indicates that sensor design (microlenses over the pixels) is used to reduce the problem.
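Whatever shading remains can also be reduced computationally. A minimal flat-field (shading) correction sketch in Python, assuming a uniformly lit reference frame is available; the function name and the array values are placeholders, not taken from that article:

```python
import numpy as np

# Minimal flat-field (shading) correction: scale each pixel by the inverse of
# a normalized flat frame, so pixels that systematically receive less light
# (corner shading, oblique-incidence losses) are boosted back up. Real
# pipelines would also subtract dark/bias frames first.
def shading_correct(raw, flat):
    gain = flat.mean() / flat   # per-pixel correction factor
    return raw * gain

raw = np.array([[100.0, 80.0], [90.0, 60.0]])   # scene frame with corner falloff
flat = np.array([[1.0, 0.8], [0.9, 0.6]])       # response to uniform illumination
print(shading_correct(raw, flat))                # roughly even field afterwards
```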
I wonder if that Nikkor lens is telecentric? I suspect not.
I think the point still stands that film is less sensitive to these effects than digital sensors, and modern engineering practice can compensate.
I am not trying to turn this into a digital vs. analog thread. But when the point is made that modern lenses are superior to older lenses, there are a number of reasons why that is the case, and some of those reasons stem from the requirements created by replacing film with digital sensors. Others apply to both analog and digital photography. No doubt lenses have improved. Materials have improved, coatings have improved, and computational power is exponentially higher. For technical photography these improvements are enabling. For artistic photography, whether it matters may depend on the application.