How to create adjustable RAW files from color negative scans


OzJohn

Are we sure that what Vuescan writes out from an Epson scanner isn't the Raw data from the sensor? Sure, it's not in the Bayer array, but the sensor for the Epson has three color elements for each pixel so it could be looked at as raw data.

Using Vuescan, it's obvious that what is stored on disk as a raw file is what the software uses to perform an adjusted scan. In other words, all the sensor data (including the IR channel, if scanned) is captured. Vuescan then processes this in the same manner as Adobe processes a camera-generated raw file, minus the demosaicing step (though it does still need to apply the dust-correction step).

An interesting take on this issue and one that is difficult to refute. When I think about it, Nikon film scanners, used with Nikonscan software, offer the option to save as a NEF file. These raw files, as far as I can tell, are indistinguishable from the raw files that come out of Nikon cameras. Your theory therefore raises the question: does Vuescan do the same thing with the sensor data as Nikon does, except that Vuescan calls the file a TIFF? And if so, is a NEF file much more than a tricked-up TIFF? Maybe David is technically correct. OzJohn
 

pellicle

Are we sure that what Vuescan writes out from an Epson scanner isn't the Raw data from the sensor?

yes, because you are not getting a distinct pixel for each of the R, G, or B samples; you are getting an RGB pixel. Open the file and see.

But what I'm saying is that this is meaningless, as the scanner is scanning a much more controlled source. Film has a fixed maximum and minimum brightness, a range which is nothing like what a camera sensor is exposed to.

Also, the reason we want raw from the camera sensor is that the alternative is either JPEG (which has already been processed down to 8 bits) or, in very rare instances, TIFF (which is still usually 8 bits). With a scanner you can get high-bit output (we say 16 bits, though it's typically between 11 and 14 meaningful bits) as a TIFF.
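The bit-depth point is easy to demonstrate with a small quantization sketch (illustrative numbers only: a smooth ramp standing in for a scanned density gradient):

```python
import numpy as np

# A smooth gradient standing in for a scanned film density ramp.
ramp = np.linspace(0.0, 1.0, 100_000)

# Quantize to 8 bits (256 levels) and to 14 bits (16,384 levels),
# roughly the effective depth mentioned for scanner A/D converters.
q8 = np.round(ramp * 255).astype(np.uint8)
q14 = np.round(ramp * 16383).astype(np.uint16)

print(len(np.unique(q8)))   # distinct tones surviving at 8 bits
print(len(np.unique(q14)))  # distinct tones surviving at 14 bits
```

The 8-bit version keeps only 256 distinct tones across the whole range, which is why heavy curves applied after an 8-bit save produce banding that a 16-bit TIFF avoids.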

So what I am saying is simply this:
"Scanning to TIFF, or scanning to memory in Photoshop, in 16-bit mode is all you are going to get. There is no difference with the so-called raw mode on the scanner; all 'raw mode' means is that no curves are applied to the output scan."

you don't need any special instructions to do that.

Further, adding "open in ACR because it's non-destructive" is nonsense. You are using ACR (Adobe Camera Raw) as a tool for a purpose it was not intended for, and for which similar and more powerful tools already exist in Photoshop.

Essentially this is for people who grew up on digital cameras, know nothing about film scanning, and are confused into thinking it offers "simplicity and non-destructive edits" when it is just another (more complex) method of doing the same thing.

Also, with respect to "the orange mask": there seems to be little in photography that is so misunderstood as this. The simple act of adjusting your scan levels to capture the relative density ranges of the R, G, and B channels, which exist in the film by design, will "remove it".

Colour neg has different density responses to light for each channel:
[fig1.jpg: per-channel density response curves]


so when you set your scanner levels accordingly:
[RGB-Histo.gif: per-channel scanner histogram / levels settings]


and bring that into a file or into Photoshop, you get an image which looks remarkably like a simple negative of any colour file:

[step1.jpg: straight scan, still negative, orange cast gone]
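That per-channel levels move can be sketched in numpy. This is illustrative only: the mask floor and ceiling values below are invented, not measured from real film.

```python
import numpy as np

# Synthetic linear scan of a colour negative: each channel sits in a
# different density range because of the orange mask. Floor/ceiling
# values are made up for illustration.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, size=(4, 4, 3))
mask_lo = np.array([0.30, 0.15, 0.05])   # per-channel black point
mask_hi = np.array([0.95, 0.80, 0.55])   # per-channel white point
neg = mask_lo + scene * (mask_hi - mask_lo)

# "Setting the scanner levels" per channel: stretch each channel
# between its own black and white points. The orange cast disappears
# because every channel now spans the same full range.
lo = neg.min(axis=(0, 1))
hi = neg.max(axis=(0, 1))
balanced = (neg - lo) / (hi - lo)
```

After the stretch each channel runs 0 to 1, which is exactly why the mask "removes itself" once levels are set per channel.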


a quick inversion and then a more sensitive adjustment of final levels gives:

[step4.jpg: after inversion and finer levels adjustment]


then you can play with curves

[step5.jpg: after curves]
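The invert-then-curves sequence can likewise be sketched. The gamma value here is an arbitrary stand-in for whatever curve you would actually draw, not a recommendation:

```python
import numpy as np

# Toy level-balanced negative data in [0, 1].
balanced = np.linspace(0.0, 1.0, 5)

# The "quick inversion".
positive = 1.0 - balanced

# Stand-in for the final Curves step: a simple gamma lift.
# 1/2.2 is an illustrative choice only.
curved = positive ** (1 / 2.2)
```

Note the order matters: invert first, then shape the tones, since any curve applied before inversion would be mirrored.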


Recall that negative film was designed to be printed (not projected), so the act of viewing a print changes the density curve by simple optical physics. If you are confused by this, print any image you like onto paper and then onto transparency film (just make sure your printer does not apply any adjustment for transparency, which it may do).

Lay the film onto paper and it will look similar to the print, but will look quite different on a light box. Print viewing is reflected light, transparency viewing is projected light.

(the above images come from my blog post here: in my view ...: quick negative scan tutorial ... )
 

pellicle

Hi

An interesting take on this issue and one that is difficult to refute. When I think about it Nikon film scanners, used with Nikonscan software, offer the option to save as a NEF file. These raw files, as far as I can tell, are indistinguishable from the raw files that come out of Nikon cameras.

I believe you will find that they are simply TIFF images. Discussion "way back when" on this can be found on the net if you look around, and you'll find that the only significant differences are that Nikon applies a proprietary variant of LZW compression to reduce file size, AND that the Digital ICE infrared channel is recorded separately so that you can access the data without the dust-reading data subtracted. But as I understand it, they are quite unlike camera raw files in that each pixel is not an R, G, or B sample holding 16-bit grey information only; each pixel is already 48 bits of RGB data.
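One way to check the "NEF is basically TIFF" claim on your own files is to look at the first four bytes: TIFF, and TIFF-based containers such as NEF, open with a two-byte byte-order mark followed by the magic number 42. A stdlib-only sketch (the function name is mine):

```python
import struct

def looks_like_tiff(path):
    """Return True if the file begins with a TIFF header.

    TIFF files (and TIFF-based containers such as NEF) start with
    b'II' (little-endian) or b'MM' (big-endian), followed by the
    16-bit magic number 42.
    """
    with open(path, "rb") as f:
        head = f.read(4)
    if len(head) < 4:
        return False
    if head[:2] == b"II":
        return struct.unpack("<H", head[2:4])[0] == 42
    if head[:2] == b"MM":
        return struct.unpack(">H", head[2:4])[0] == 42
    return False
```

A scanner NEF passing this check tells you it is a TIFF container; it says nothing about how the pixel data inside was processed, which is the real point of contention here.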

PS:

from http://www.luminous-landscape.com/forum/index.php?topic=12720.0;wap2

Lightroom, Adobe's recent monstrosity, is not your solution to processing scanned film raw files!!! They are called raw, and they may have the suffix *.NEF, but they are not "camera raw" files! They are rather lossless RGB files, which you cannot even process in Lightroom so far. It gets worse: NikonScan is a very naive program concerning colour management and image-processing ability. Your mileage may vary; I made myself cozy with the following workflow:
 

L Gebhardt

I'm not sure it matters whether the image data is stored in a 48-bit structure containing three 16-bit channels or as three separate 16-bit channels. I imagine it depends on how the scanner returns it. Either way it's just semantics.

I think the important part is that you save the image out without applying a gamma curve or adjusting the black and white points of the channels (assuming the scanner doesn't use those to adjust the scan). Then you have a raw starting point. I personally find it easiest to adjust things in Photoshop, pretty much as you describe.

I haven't evaluated his workflow, but I do think there is some merit to capturing the data as close to how it comes off the scanner as possible. If that is done you should be able to use the software of your choice to process it in a consistent manner.
 

dslater

I understand your confusion. The vast majority of raw files are generated by cameras and are called RAW files. These are not in fact TIFF files, although the Tagged Image File Format is itself just a container for data. So while the public may get the idea that it's all just "data", there is more to it than that.

Digital camera raw files store information acquired by the camera sensor from the scene. Since cameras mostly have a Bayer array, the values are stored as (say) 14-bit counts (although this varies) of the data the sensor captured. Each "pixel" of the raw file is either red, green (and another green), or blue data. To view this as an image it must be converted in a process called demosaicing, where a pixel is created in the middle of the array formed by the GB then RG pixels.

NB: in the diagram below, these grey pixels are created and are what you see. They are created from the red, green, and blue neighbour information.

[8678342683_69238064d1_b.jpg: Bayer array diagram]


This is quite unlike what happens with a scanner. So if you choose to mush them up into the same concept you will be missing information in your understanding.

I hope this helps.


I'm going to nit-pick you here. The demosaicing process doesn't work as you've described. First, the pixels of the raw file are all grey. The Bayer array is an array of filters in front of the pixels. Second, the demosaicing process doesn't combine the existing pixels into new grey pixels. What it does is interpolate color information only, to get the correct color for the existing pixels. The diagram looks more like this:

[bayer.png: demosaicing diagram, color interpolated at existing pixel sites]


Note, the only thing interpolated here is color information, not spatial information. Also note that more sophisticated demosaicing algorithms consider color information from more than just the immediate neighbors to reduce problems with aliasing (moiré).
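This corrected picture (measured samples stay put; only the missing colors at each site are filled in) can be sketched as a toy bilinear demosaic. This is a deliberately naive version for illustration; real converters are far more sophisticated:

```python
import numpy as np

def demosaic_bilinear(mosaic):
    """Naive bilinear demosaic of an RGGB Bayer mosaic (2D array).

    Every sensor site keeps its own measured sample; only the two
    missing colors at that site are interpolated, by averaging nearby
    samples of the right color. No spatial resampling happens.
    """
    h, w = mosaic.shape
    # Which color each site measured, for an RGGB tiling.
    rm = np.zeros((h, w)); rm[0::2, 0::2] = 1
    gm = np.zeros((h, w)); gm[0::2, 1::2] = 1; gm[1::2, 0::2] = 1
    bm = np.zeros((h, w)); bm[1::2, 1::2] = 1

    def box_sum(a):
        # Sum over each pixel's 3x3 neighborhood (zero-padded edges).
        p = np.pad(a, 1)
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    out = np.zeros((h, w, 3))
    for ch, m in enumerate((rm, gm, bm)):
        samples = mosaic * m
        # Average of neighboring samples that measured this color;
        # keep the measured value where one exists.
        interp = box_sum(samples) / np.maximum(box_sum(m), 1e-9)
        out[:, :, ch] = np.where(m == 1, mosaic, interp)
    return out
```

The output has the same height and width as the input mosaic, which is the point being made: color is interpolated, the pixel grid is not.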

Finally, none of this has anything to do with what a RAW file is. Consider the Foveon sensor: in this case each pixel has full RGB information and demosaicing isn't necessary. However, it still produces raw files. The fact that the output from a scanner is in a different format than from a camera doesn't make it any less a raw file. What makes a raw file raw is the lack of processing of the image data by the device or the driver software; i.e., the pixels in the file should reflect the actual measured data recorded by the sensor.

Also, while camera raw files aren't strictly TIFF, they generally only differ by what's in the header information. The bulk of what's in the file is still in a TIFF format as that's the format the camera manufacturers started with when designing their raw files. This is in fact a testament to the flexibility of the TIFF file format.
 

pellicle

good morning


I'm going to nit-pick you here. The demosiacing process doesn't work as you've described. First, the pixels of the raw file are all grey.

well, yes, of course, but they are grey representations of the green channel, the red channel, and the blue channel. Of course they are grey.

The bayer array is an array if filters in front of the pixels.

correct; my simplification was intended to represent that. You are of course correct, and it's good to clarify that point. But that is more or less exactly what I said.

Second, the demosiacing process doesn't combine the existing pixels into new grey pixels.

well, actually it combines them into RGB pixels, which is what I thought I said. I represented them as grey because, in the colour world, grey is a combination of R, G, and B ... you even quoted me:

although what is in the Tagged Image File Format is just containers of data.


What it does is to interpolate color information only to get the correct color for the existing pixels. The diagram looks more like this:

[bayer.png: demosaicing diagram, quoted from above]

it does indeed interpolate, but the strategy of "creating" a colour pixel at the spatial coordinates of one of the colour channels is only one strategy ... there are more

Note, the only thing interpolated here is color information - not spatial information.

all is not as you suggest. For instance I recommend you read this:

Where have the Berries gone? | On Landscape


Some other good reading can be had here:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.24.881&rep=rep1&type=pdf

and also worth a read:
http://ivrg.epfl.ch/page-65577-en.html


Also, note the more sophisticated demosiacing algorithms consider color information from more than just the immediate pixels to reduce problems with aliasing ( Moiré )

for sure

Finally, none of this has anything to do with what a RAW file is.

but it does ... of course as you know there are many RAW formats, one for each sensor indeed ... and then there are formats like DNG, which contain the RAW data but are not themselves RAW formats.

You may consider it semantic.

Consider the Foveon sensor, in this case, each pixel has full RGB information and demosiacing isn't necessary.

close, but not quite my understanding (since you are nit-picking). The Foveon sensor has a sensor for R, G, and B at the same XY coordinates, but they are still discrete R, G, or B samples ... so they still need to be combined to make a single pixel. This is of course not demosaicing, because there is no Bayer array.

[foveon_sensor.jpg: Foveon layered sensor diagram]


However, they still produce RAW files.

exactly

The fact that the output from a scanner is in a different format than from a camera doesn't make it any less a raw file.

well ... if after all this you still see it that way then I guess that will have to stand then.

Also, while camera raw files aren't strictly TIFF, they generally only differ by what's in the header information. The bulk of what's in the file is still in a TIFF format as that's the format the camera manufacturers started with when designing their raw files. This is in fact a testament to the flexibility of the TIFF file format.

you have mashed together a few points in there which deserve to remain separate.

* if the header of the file were the only difference, then it could be read in C the same way; I don't think this is the case.
* a RAW file is a container file but does not follow the same format as a TIFF file.
* the bulk of what is in a raw file from a camera is not still a TIFF; it is the representation of the array of captured data from the camera sensor.

:smile:
 

chuck94022

MPEG is just a container of multiple channels of data. I guess then the only difference between it and TIFF is changes to the header and differences in encoding the bits. By that logic, MPEG is really just TIFF. I don't think I'll get it open in ACR, though.

Making a change to the structure or content of a file header and/or changes to a file format's encoding of data makes it a substantially different beast. You can argue over whether it "is" or "isn't" the same thing, but this is sort of like counting the number of angels on the head of a pin. The bottom line is that changes mean a processing program like ACR, or Aperture, or Lightroom, or Photoshop, can either read it or they cannot. Unless a software developer is particularly brilliant and able to predict the future with clarity and perfection, it is highly unlikely an existing piece of code will continue to work when header or data formats change.

All the rest is rather picayune.

(I'm amazed at how wrapped around the axle this topic has gotten... but hey, dpug needs the traffic! Keep it up! ;-) )
 