Hi
I have a question about the difference between the representation of data in RAW and in a linear decoded TIFF. Clearly RAW is not demosaiced and has separate pixels for R, G, and B, while in a decoded TIFF they are combined.
That issue aside, if I use a tool such as dcraw with the options -4 -T, I thought it essentially created a demosaiced, linear 16-bit file and that this was the only difference. That would mean there should be little difference in the quantization of light levels at a specific "pixel" between the two.
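For reference, this is roughly how I'd verify that myself (a minimal sketch, assuming Python with the tifffile package installed, dcraw on the PATH, and IMG_0001.CRW as a placeholder filename):

```python
# Sketch: decode a CRW to a linear 16-bit TIFF with dcraw, then inspect
# the pixel values. Assumes dcraw is installed and 'tifffile' is available;
# IMG_0001.CRW stands in for an actual camera file.
import subprocess
import tifffile

# -4 = linear 16-bit output (no gamma curve, no auto-brightening), -T = write TIFF
subprocess.run(["dcraw", "-4", "-T", "IMG_0001.CRW"], check=True)

img = tifffile.imread("IMG_0001.tiff")  # dcraw names the output after the input
print(img.dtype, img.shape)             # expect uint16, (H, W, 3)
print("per-channel min:", img.min(axis=(0, 1)))
print("per-channel max:", img.max(axis=(0, 1)))
```

If -4 -T really only demosaics, those TIFF values should be linear in light level, which is why the different Photomatix results surprise me.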
I ask because when I load a single TIFF into Photomatix versus using the CRW file from my camera, I get substantially different results.
I asked the people at Photomatix about this, but found their replies to be typical of support that doesn't read the question and assumes you don't know what you're asking.