MsLing
Member
The second plot, I think, comes with some caveats. It seems to represent the combined response across all three RGB channels with a particular Kolari filter fitted on the lens. This filter likely has a non-linear response to compensate for certain peaks and valleys that result from crosstalk, particularly in the case of a modified (IR-filter-removed) camera. So I'm not sure how representative it is in the context of this thread.
In fact, the second picture shows the transmission curves of those cameras' original filters. I used it because I couldn't find a single curve for the new Sony cameras' UV-IR (or, more precisely, NIR) cut filters. Canon hasn't released any information about its CMOS products, so it's impossible to know those sensors' characteristics. The reason I show these transmission curves is that some high-CRI light sources emit energy beyond 700 nm, which is why we should consider the UV-IR cut filter's influence. For example, the halogen lamp and the COB LED I used for my slide projector both give off some infrared. I show its datasheet below, but sorry, there is no English version. (BTW, my machine projects all my slides in a neutral, natural and beautiful way, but that's off-topic for this thread.)
Note finally that the kind of curves shown here can only be constructed from measurements to which in-camera processing has already been applied to normalize the signals. We cannot really know what math (potentially fairly complex) is applied to the raw sensor data, as this is manufacturer-proprietary information and happens inside the black box of the onboard image processor. I suppose that if a datasheet for the actual sensor is available (which is probably not the case for most consumer/prosumer cameras), it might be possible to crudely reverse engineer this black-box behavior to a limited extent.
As for this, the first picture I quote is from the Vieworks datasheet, which notes that "The sensitivity data may not match the measurement on the finished product necessarily because it is measured based on the wafer". So I think it can represent the IMX661 sensor's native, pre-processing characteristics.
Again, for B&W acquisition it's not very relevant either way. Things get more interesting when digitizing color film. In that case, we enter a world of still conflicting views with reasonable arguments on either side of the divide (narrow-bandwidth vs. full-spectral width), but AFAIK the 'state of the art' in professional film scanning (e.g. for motion picture use) has for a long time been narrow-bandwidth, separate R, G, B captures because this gives the cleanest channel separation and thus is capable of extracting the most color information from the dye image.
Yeah, the film industry has a much more serious and mature workflow, both analog and digital. Both BMD and ARRI use RGB LEDs in their digitizing systems. In the past, facilities and labs could even choose prism cameras to get sharp, clean spectral separation, but ARRI and BMD have since turned to area-scan systems instead of line scan. I don't really know whether they can fully overcome color crosstalk. The analog process is not easy to learn and practice.
AFAIK, optical printers also use interference filters to produce pure narrowband light. I guess one reason is that, to some extent, traditional print films are a kind of narrowband material, so RGB exposure can somewhat simulate print films' color.
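The channel-separation argument can be sketched numerically. This is a toy model with made-up Gaussian sensitivity and LED curves (none of the numbers come from any real sensor or LED datasheet): with 10 nm-wide LEDs, each exposure lands almost entirely in one channel, so the mixing matrix stays close to diagonal.

```python
import numpy as np

# Toy model of narrowband-RGB capture. All curves are invented
# Gaussians for illustration only.

wl = np.arange(400.0, 701.0)  # wavelength grid, nm

def gauss(center, width):
    """Unit-peak Gaussian spectrum on the wavelength grid."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Hypothetical camera channel sensitivities with realistic overlap
sens = np.stack([gauss(600, 40),   # R
                 gauss(540, 40),   # G
                 gauss(460, 35)])  # B

# Three narrowband LED exposures (one per channel)
leds = np.stack([gauss(630, 10),   # red LED
                 gauss(530, 10),   # green LED
                 gauss(465, 10)])  # blue LED

# Mixing matrix: M[i, j] = response of channel i to LED j,
# normalized so each exposure's strongest channel reads 1.0
M = sens @ leds.T
M /= M.max(axis=0)

# Crosstalk = worst off-diagonal response relative to the diagonal
crosstalk = (M - np.diag(np.diag(M))).max()
print(np.round(M, 3))
print(f"worst off-diagonal response: {crosstalk:.2f}")
```

Widening the LED curves (or switching to a broadband source) pushes the off-diagonal terms up, which is exactly the crosstalk that narrowband, separate-exposure capture is meant to avoid.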
Anyway, I do plan to build a Status M LED system to replace my Epson V850 for digitizing 35mm film and making positives.
Spectral sensitivity of a prism camera


