Silverfast 8: 16bit grey scale?


Wallendo

Subscriber
Joined
Mar 23, 2013
Messages
1,409
Location
North Carolina
Format
35mm
In SilverFast lingo, the main difference between "RAW scans" and "images" is that "RAW scans" are negative images, while "images" are positive and also allow dust removal.

I prefer SilverFast for scanning C-41, VueScan for scanning slides, and it really doesn't matter which I use for B&W since there is no need for color correction or infrared dust removal.

When scanning B&W I scan at 16 bits. If scanning at 8 bits, the scanner firmware or computer software will convert the actual scan data from the scanner's native bit depth down to 8 bits. If that conversion is going to happen, I would prefer to control that step myself.

To do this with entry-level versions of SilverFast, scan in 16-bit RAW mode. I then load the image into Photoshop, invert it, and adjust the black and white points using the "levels" control; then I load the images into Lightroom for final manipulations. I have my scanning software automatically open all files in Photoshop, so it's not as awkward as it sounds. It is possible to load the RAW scans into Lightroom and invert them using the curves window, but I don't like to do that since all the develop-module sliders then work opposite from what I am used to.
 
Joined
Aug 29, 2017
Messages
9,446
Location
New Jersey formerly NYC
Format
Multi Format
The important thing is whether there is noise in the scan. The noise could come from a combination of film grain and other sources. The other sources could include (but are not limited to) sensor noise or even shot noise.

Shot noise comes from the quantized nature of light in combination with the statistics of photon detection, and in theory it can show up at low signal levels. I don't know if shot noise is a factor in scanners, but I would not be surprised if it is. To give you an idea of how the statistics work, the standard deviation is equal to the square root of the number of photons. For example, if 100 photons are detected then the standard deviation is 10 photons. Suppose that the analog to digital step size at low signal levels is equivalent to ten photons. In that case shot noise would be more than enough to produce effective dithering. It would also apply at high signal levels. For example, if a high signal level were equivalent to detecting 10,000 photons the standard deviation would be 100, so if the step size were 10 photons it would be more than enough to produce effective dithering.
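The square-root statistics described above can be checked numerically. Here is a minimal sketch, assuming photon detection follows Poisson statistics (the standard model for shot noise; the photon counts are the illustrative ones from the text, not measured scanner values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Photon counting is Poisson-distributed, so std = sqrt(mean count):
# roughly 10 photons of noise on a 100-photon signal,
# roughly 100 photons of noise on a 10,000-photon signal.
low  = rng.poisson(100,    size=1_000_000)
high = rng.poisson(10_000, size=1_000_000)
print(low.std(), high.std())
```

If the ADC step size corresponds to ten photons, both cases leave noise at or above one step, which is the effective-dithering condition discussed above.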

Sensor noise (as distinct from shot noise) is very likely at low signal levels. This can come from various sources, primarily electronic in nature. For example, there is something called thermal noise, which comes from the random thermal motion of electrons. There is also something called flicker noise. Anyway, a lot of scanner reviews talk about the existence of noise in the shadows. This probably comes mostly from some combination of sensor noise and shot noise.

None of the noise sources mentioned above go away at high signal levels. They become proportionally less important compared to the signal level if the signal level is high, but from the point of view of how it relates to the effective dithering effect the important comparison is not the comparison to the absolute signal level but rather the comparison to the step size of the analog to digital converter. This means that if there is effective dithering taking place at low signal levels it will also be there at high signal levels.

Things get a little more complicated if a non-linear transformation is applied somewhere in the signal chain. If so, it is possible that the effective dithering might be lost to roundoff error. I am told that when a scanner saves in 8 bit mode there may be a non-linear transformation, and I can't say too much about that possibility. However, I understand that in some signal processing systems, when a high-bit word is converted to lower bits, the software may add pseudo-random noise in order to make sure that dithering is present. I don't know if this applies to 8 bit scanners.

Now to the question of T-Max 100: here is one way you can investigate this experimentally using your own scanner and your own photographs. I'm not talking about your regular photos, but photos generated specifically to test this. Get some T-Max film. Find a perfectly uniform subject to photograph, like a blank wall. It will also help to defocus the lens to make sure there's no small-scale variability in the image. You might even consider taking the lens off altogether. Take photos at high, moderate, and low exposure. Develop the film.

Scan the images in 8 bit mode. Open a file. Look at the histogram of a crop of a very small portion near the center of the image. Zoom in on the histogram (along the horizontal axis) and see whether there is just a single spike or several. If there are several spikes, then there is significant noise in the image (whether from film grain, sensor noise, or some combination), and that should satisfy the effective dithering requirement. Do this for the blank frames photographed at the various exposure levels.

You could also take a photo of a subject having a smooth brightness gradient. Scan in 8 bit mode. Copy the image to form a second identical file. Open one of the copies and do some extreme image manipulation until you can see banding. Next open the second copy and convert it to 16 bits. (That conversion is very important; leaving it in 8 bit mode will nullify the test.) Then do exactly the same extreme image manipulations that you did on the first file. Do you see banding? If not, then the 8 bit scan hasn't hurt anything and you are good to go.
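The gradient test can also be simulated without a scanner. This sketch uses purely synthetic data (not a real scan): it quantizes a smooth ramp with and without grain-like noise and shows that the noisy version preserves the gradient in its local averages, while the clean version collapses into flat bands.

```python
import numpy as np

rng = np.random.default_rng(1)
ramp = np.linspace(100.0, 104.0, 100_000)  # smooth gradient spanning four ADC steps
step = 1.0

clean    = np.round(ramp / step) * step    # grain-free: collapses into flat bands
dithered = np.round((ramp + rng.normal(0, step, ramp.size)) / step) * step

# The clean version has only five distinct levels (visible stair-steps);
# block averages of the dithered version track the original ramp closely.
block_err = np.abs(dithered.reshape(-1, 1000).mean(axis=1)
                   - ramp.reshape(-1, 1000).mean(axis=1)).max()
print(len(np.unique(clean)), block_err)
```

The noise standard deviation here equals one quantization step, which is the regime the thread argues film grain typically provides.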

Always remember: if you scan your regular photos in 8 bit mode, be sure to convert them to 16 bit before you do any image manipulation. Keep it in 16 bit mode, preferably forever, but at least as long as you are doing any image manipulations on the image.

Now for a question: why would anyone even want to scan in 8 bit mode? The first answer is that some systems may only allow 8 bit scanning; Leaf brand scanners used in certain configurations fall into that category. The other reason (not a very strong one these days) is that 8 bit scans save storage space.
Thanks, Alan, I'll just continue to scan everything at 16 bits and not worry about the storage or banding. The only issue is for those who own Elements or other similar programs, which for the most part operate only in 8 bits. So I use Lightroom for my editing, which handles 16 bits.

Regarding Tmax 100 vs Tmax 400, I have to assume it really is more grain in the 400 that you can see in the sky. I assume other than that the film stocks are the same. Actually, I kind of like the grain look in the Tmax 400 scans. I assume it's more pronounced as well in smaller formats than the 4x5 sample I posted.
 
Joined
Aug 29, 2017
Messages
9,446
Location
New Jersey formerly NYC
Format
Multi Format
Alan, this is a very interesting response. When I scan black and white negatives, I always use 16 bit, and sometimes do see what looks like noise in the shadow areas. This is worst with thin negatives. My Plustek 7600i scanner allows for multi scan, which is supposed to reduce noise. I am not sure how effective it is. My Minolta Scan Multi scanner lets you choose one, two, four, eight, and 16 multi-passes. I usually use four, but even here, I am not sure how effective it is in reducing noise, or if I am really seeing anything different. I use Silverfast 8 to operate both units.
Many years ago I read a comment that the dual scan kind of cancels out the grainy look as the two images are combined. I suppose you could do the same thing afterward with "blur" or sharpening edits.
 

MattKing

Moderator
Joined
Apr 24, 2005
Messages
52,893
Location
Delta, BC Canada
Format
Medium Format
Regarding Tmax 100 vs Tmax 400, I have to assume it really is more grain in the 400 that you can see in the sky. I assume other than that the film stocks are the same. Actually, I kind of like the grain look in the Tmax 400 scans. I assume it's more pronounced as well in smaller formats than the 4x5 sample I posted
The two films have different spectral sensitivity and a somewhat differently shaped characteristic curve. T-Max 100 also has a UV blocking layer, while T-Max 400 does not.
The UV blocking layer makes T-Max 100 unsuitable for use with a lot of different alternative/traditional contact printing methods, such as cyanotypes and platinum/palladium.
Now back to our regularly scheduled scanning discussion :D.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
I'm following up on my posts last month discussing 8 bit scanning. I said that if you can see grain in an 8 bit scan then there is basically nothing to be gained by scanning at a higher ADC resolution, such as 16 bit. I discussed this in the context of whether banding would show in the scan.

Today I did an experiment. I scanned a tmax 100 negative using my canon fs4000us scanner and saved the scan in 8 bit mode. I opened the file with photoline and converted the image to 16 bit mode.

I cropped out a very small section selected from a dense featureless region of the negative. (It was a featureless sky region.) There was no gradient in the negative in that region. I then used the histogram correction feature to zoom in on the horizontal axis of the histogram sufficiently to see the digitization step size. I am attaching a photo that shows that small part of the image and the histogram. (Note that the "grain" looks exaggerated in the image because I zoomed the horizontal scale of the histogram. This is for display purposes and also to make it easy to pick off the values from the peaks. It doesn't change anything in a fundamental way.)

Note that there are several peaks in the histogram. Treating the histogram as a probability distribution, I picked values from the histogram and did some statistics. The standard deviation worked out to be 1.64 digitization steps. That standard deviation is more than sufficient to eliminate the possibility of seeing banding.

Please note that I used Rodinal for developing this negative. This implies that the grain will be somewhat greater than if I had used a low-grain developer. However, based on the result of this experiment I am pretty confident that even a low-grain developer will leave enough grain to eliminate banding in an 8 bit scan. Most negatives will be grainier than T-Max 100. For an ultrafine-grained image using, for example, microfilm stock developed for pictorial use, the grain structure might be so fine that banding becomes possible in an 8 bit scan. Slide film might also be a different story in some cases, though I couldn't say one way or the other. However, for normal black and white negative film stocks I'm pretty sure that 8 bit scanning is sufficient. Just be sure to convert to 16 bit before doing any image manipulation.
[Attached: histogram of a dense region of a T-Max 100 negative developed in Rodinal]
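The statistics step described above, treating the histogram as a probability distribution, can be reproduced as follows. The counts here are made-up illustrative numbers, not the values read from the actual scan:

```python
import numpy as np

# Hypothetical histogram of a tiny featureless crop: pixel value -> count.
values = np.array([118, 119, 120, 121, 122, 123], dtype=float)
counts = np.array([ 40, 310, 900, 870, 290,  35], dtype=float)

p    = counts / counts.sum()                      # histogram as a probability distribution
mean = (p * values).sum()
std  = np.sqrt((p * (values - mean) ** 2).sum())  # std in digitization-step units
print(round(std, 2))
```

A result around or above one digitization step, like the 1.64-step figure reported above, is far more than enough for the grain to act as dither.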
 

Kodachromeguy

Subscriber
Joined
Nov 3, 2016
Messages
2,054
Location
Olympia, Washington
Format
Multi Format
For an ultrafine-grained image using, for example, microfilm stock developed for pictorial use, the grain structure might be so fine that banding becomes possible in an 8 bit scan. Slide film might also be a different story in some cases, though I couldn't say one way or the other. However, for normal black and white negative film stocks I'm pretty sure that 8 bit scanning is sufficient. Just be sure to convert to 16 bit before doing any image manipulation.
Alanrockwood, you have done a lot of work to scan at 8 bit and then convert to 16 bit before you do image adjustments. I assume you mean contrast adjustments and similar. Why not just scan at 16 bit and maintain that for the entire process? You would also simplify bookkeeping and file management.
 
Joined
Aug 29, 2017
Messages
9,446
Location
New Jersey formerly NYC
Format
Multi Format
Alan, you have done a lot of work to scan at 8 bit and then convert to 16 bit before you do image adjustments. I assume you mean contrast adjustments and similar. Why not just scan at 16 bit and maintain that for the entire process? You would also simplify bookkeeping and file management.
I do scan at 16 bit on 35mm, medium format and 4x5. My color scan files for 4x5 are around 600mb. But I was wondering if someone did scan at 8 bits, could they get banding if they cropped the image? Is it more of an issue with medium format? 35mm?
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
What if you crop?
As far as the possibility of banding is concerned, it doesn't really matter if you crop, as long as there is enough grain (or other noise) in the scanned image. The grain (or other noise) supplies a dithering effect that eliminates the possibility of banding.

If one could scan a truly grainless image, and if the scanning process doesn't add noise, then this doesn't work, and banding becomes a possibility when scanning in 8 bit mode followed by extreme image manipulation.

It might be worth noting that if the post-scan image is truly noiseless/grainless, then even if one saves in 16 bit, banding becomes a possibility if the image manipulation is super-extreme. However, that's probably of no practical significance, because the manipulation would need to be so extreme as to make the post-manipulation image relatively meaningless.

Also, it is worth noting that relying on grain to eliminate banding works best when scanning at maximum spatial resolution. In principle, if the dpi is cut to a low enough value that the grain disappears through blurring of the image, then you can't rely on this effect to eliminate banding. This depends somewhat on how the scanner implements a lower dpi. If it simply skips over some fraction of the acquired pixels, then there will still be grain or noise in the image. However, if it processes the full-dpi image by averaging nearby pixels and then does the dpi reduction, the grain could be reduced below the level needed to eliminate banding. I don't know if this happens in practice, but it is a theoretical possibility.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
Alanrockwood, you have done a lot of work to scan at 8 bit and then convert to 16 bit before you do image adjustments. I assume you mean contrast adjustments and similar. Why not just scan at 16 bit and maintain that for the entire process? You would also simplify bookkeeping and file management.
Yes, I mean contrast adjustment, or whatever manipulation might be applied to the image that would result in banding, though I think it all basically boils down to contrast manipulation.

There is nothing wrong with saving scans in 16 bit mode, but if it doesn't actually improve the image quality then there isn't much point in doing so.

Also, scanning in 16 bit mode requires twice as much disk storage. These days with cheap and very large hard disks available that's probably not too much of a problem for most people, but it could be for some people who have a lot of images.

You are right that it takes more work to use the workflow I described. I'd say it adds 5 to 10 seconds to the time it takes to process an image, depending on how adroit one is with the mouse and keyboard, to change the original image from 8 bits to 16 bits before starting the rest of the processing. As for cataloguing and such, I don't see this as adding much complication. I never delete the original scan, and I always use a file name change to distinguish a processed image from an unprocessed image, so there is no additional complication there. In any case, 8 bit images are easily distinguishable from 16 bit images because the file sizes are twice as big for the 16 bit images.

Anyway, one of the main reasons I have spent time on this is to counter the false notion, often expressed whenever the subject comes up, that scanning in 16 bits is superior in some significant way to scanning in 8 bits. If there is grain in the picture, it just isn't going to make a practical difference whether you scan in 8 bit or 16 bit mode. If the grain is so innocuous that it doesn't show up in an 8 bit scan, then this is no longer the case, and one would be better off scanning in 16 bit mode. I doubt this is ever the case for normal photographs, but I wouldn't altogether rule it out.

Now I am going to admit that there is a slight theoretical difference between scanning in 8 bit vs. 16 bit mode, and it has to do with how much noise there is in the image, but the difference is small. For example, if the standard deviation (noise) in the image is equal to one digitization step, then the 8 bit image will have about 4% more noise than a high-bit image. This comes from something called quantization noise. However, this amount of added noise is so small that, as a practical matter, I don't think it is perceptually different from a high-bit scan. In other words, there is no practical difference, even though there is a theoretical difference.
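The roughly 4% figure follows from adding quantization noise (std ≈ step/√12 ≈ 0.289 steps for a well-dithered signal) in quadrature with the existing noise: √(1 + 1/12) ≈ 1.041. A quick numerical check, assuming Gaussian noise of exactly one step:

```python
import numpy as np

rng = np.random.default_rng(2)
signal = 1000.0 + rng.normal(0, 1.0, 2_000_000)  # noise std = one digitization step

quantized = np.round(signal)                     # quantize with step size 1

# Quantization adds noise of roughly 1/sqrt(12) of a step; in quadrature:
# sqrt(1 + 1/12) ~= 1.041, i.e. about 4% more noise than the unquantized signal.
print(signal.std(), quantized.std())
```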

By the way, quantization noise also shows up when a noiseless image is quantized. However, in that case the quantization noise typically shows up as banding. I am attaching an example of an image quantized in four bit mode, and you can see definite banding in the sky. The source of the image was an article by Charles Boncelet in The Essential Guide to Image Processing, 2009.
[Attached: the four-bit quantized example image, showing banding in the sky]
 

Adrian Bacon

Subscriber
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
Hi:

I've had my Bronica GS-1 for just shy of a year and am finally getting to work on scanning the 12 or so B&W rolls I've developed. My computer was purchased as a Win10 machine but I've added Linux to a separate partition so I'm running both (at different times of course). My father, before he passed away, gifted me his Epson V600 that I'm running on Win10 (because Linux film scanning isn't that great at the moment). I tried Epson Scan software and it worked but something made me want to try something different. VueScan was better and then I tried the free Silverfast 8 and it is great, I'll probably buy version 9 SE or SE Plus.

However, in version 8, the grey scale option is shown as 16->8bit. I'm quite happy with the scans but I'd really like to scan to 16bit GS, not 8bit.

Does anyone know if version 9 scans to 16bit?

Thanks

If you already have a paid version of Vuescan, just use it. You can output a raw 16 bit greyscale file very trivially with it.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
Besides banding, are there other advantages or disadvantages with 8 vs 16 bit?
8 bit scans will require only half of the storage space; that's the main advantage I am aware of. Also, some scanner models only work in 8 bit mode with certain acquisition systems. I think this is the case for Leaf scanners when using SilverFast software, but I'm not sure. In that case there would be an advantage to 8 bit mode because 16 bit mode isn't even available.


There might also be a speed advantage during scanning, but I haven't tested my scanner for that. If there is one, it would probably be in the transfer from scanner to computer, and it would likely depend on the make and model of scanner whether 8 bit scans are faster. That would need to be tested for a given system.

In general if one is interested in 8 bit scanning it would be a good idea to test it for a given film/scanner combination. The idea would be to have negatives with broad featureless regions of high density, medium density, and low density, and then do what I did using histogram adjustments in the processing software to see how many peaks there are in a very small patch of the scan. (That's probably more than you need to do. You can probably evaluate it for grain using just visual observation.)
 

grat

Member
Joined
May 8, 2020
Messages
2,044
Location
Gainesville, FL
Format
Multi Format
8 bit scans will require only half of the storage space.

That's not strictly true. Eight-bit images contain less data, true, but TIFF supports compression. You might expect 8-bit files to compress much better, since there are only 256 possible values being repeated; but if you split each 16-bit sample into two bytes, each byte still takes only one of 256 values, just with twice as many bytes, and the more repetition there is, the better the compression.

In short, there is a benefit to having only 8 bits per pixel, but not as much as you might expect.
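The effect can be sketched with zlib as a stand-in for TIFF's deflate compression option (synthetic data; real scans will differ). How much of the 2:1 raw-size gap survives compression depends on whether the extra low-order bits carry noise:

```python
import zlib
import numpy as np

rng = np.random.default_rng(3)
analog = 120 + rng.normal(0, 2, 1_000_000)            # low-contrast, grainy image
img8   = np.clip(np.round(analog), 0, 255).astype(np.uint8)

pad16   = img8.astype(np.uint16) << 8                 # 16-bit container, 8 bits of content
noisy16 = np.clip(np.round(analog * 256), 0, 65535).astype(np.uint16)

sizes = {name: len(zlib.compress(a.tobytes(), 6))
         for name, a in [("8bit", img8), ("16bit_padded", pad16), ("16bit_noisy", noisy16)]}
print(sizes)
```

The padded 16-bit file compresses to nearly the 8-bit size (its low bytes are all zero), while genuinely noisy low-order bits compress poorly, so the storage saving from 8-bit scanning depends on the data.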
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
That's not strictly true. Eight-bit images contain less data, true, but TIFF supports compression. You might expect 8-bit files to compress much better, since there are only 256 possible values being repeated; but if you split each 16-bit sample into two bytes, each byte still takes only one of 256 values, just with twice as many bytes, and the more repetition there is, the better the compression.

In short, there is a benefit to having only 8 bits per pixel, but not as much as you might expect.
Very interesting.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
Don't 16 bits also give better color information?


In a noise-free image, yes. In a noisy (e.g. grainy) image, not as a practical matter.

I should qualify this somewhat. To completely suppress banding requires grain to be present in each color layer. Otherwise you could get banding in some color channels but not others, in which case the banding would show up as weird color distortions in the bands. For example, if there were no grain in the blue layer but there was grain in the other layers, then banding would show up in the blue channel but not the others. In that case there could be shifts in hue across a region that should be a pure gray gradient.
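The per-channel effect can be illustrated synthetically: quantize a neutral gray gradient with grain in two channels but none in the third (all values here are hypothetical, not measurements):

```python
import numpy as np

rng = np.random.default_rng(5)
ramp = np.linspace(50.0, 54.0, 100_000)   # neutral gray gradient, four steps wide

def quantize(channel, grain_std):
    return np.round(channel + rng.normal(0, grain_std, channel.size))

r = quantize(ramp, 1.0)   # grainy channels dither cleanly
g = quantize(ramp, 1.0)
b = quantize(ramp, 0.0)   # grain-free channel collapses into flat bands

# Where b sits on a flat band while r and g track the ramp, the nominally
# gray gradient acquires a color cast that jumps at each band edge.
cast = (r + g) / 2 - b
print(len(np.unique(b)), np.abs(cast).max())
```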
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
Alan, this is a very interesting response. When I scan black and white negatives, I always use 16 bit, and sometimes do see what looks like noise in the shadow areas. This is worst with thin negatives. My Plustek 7600i scanner allows for multi scan, which is supposed to reduce noise. I am not sure how effective it is. My Minolta Scan Multi scanner lets you choose one, two, four, eight, and 16 multi-passes. I usually use four, but even here, I am not sure how effective it is in reducing noise, or if I am really seeing anything different. I use Silverfast 8 to operate both units.
If you use multi scans it is possible, under certain conditions, to negate the ability of noise to suppress banding.

Here I am going to break "noise" into two components: 1) noise that is truly random and varies from one scan to the next of the same negative (call it true noise), and 2) something one might call pseudo noise: randomness spread across the image that does not vary from scan to scan. Grain is an example of pseudo noise.

If the image has no pseudo noise (e.g. grain) but has true noise (for example thermal noise arising in the electronic circuitry) then the true noise, if it is large enough, can suppress banding, but if you signal average (multi scan) enough to suppress true noise below a certain limit then there will not be enough noise in the resulting image to suppress banding.

If there is no true noise but there is pseudo noise, then there is no point in signal averaging, because there is no true noise to suppress. Signal averaging (multi scanning) won't suppress the pseudo noise either, which, if it is large enough, will suppress banding.

If both kinds of noise are present then signal averaging (multi scanning) can suppress the true noise (e.g. electronic noise), leaving only pseudo noise, and if the pseudo noise (grain) is strong enough it can suppress banding.

Note: when I say "suppress electronic noise" it is to be understood that you can't suppress it to zero. That would take an infinite number of scans. But you can improve it. Noise suppression of uncorrelated noise generally improves with the square root of the number of signal averaging repetitions. For example, if you scan 16 times one can expect the true noise to be suppressed to 1/4 of what it would be in a single scan. To suppress it to 1/8 would take 64 scans, so there is a law of diminishing returns at work.
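The square-root law is easy to verify numerically. A sketch assuming uncorrelated Gaussian "true noise" of 4 units added on each pass:

```python
import numpy as np

rng = np.random.default_rng(4)

def multi_scan(n_passes, pixels=100_000):
    # Each pass reads the same signal plus independent true noise (std = 4);
    # multi-scanning averages the passes.
    passes = 100.0 + rng.normal(0, 4.0, size=(n_passes, pixels))
    return passes.mean(axis=0)

for n in (1, 4, 16, 64):
    print(n, multi_scan(n).std())  # residual noise ~ 4/sqrt(n): about 4, 2, 1, 0.5
```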

One final comment. I am not 100% sure of the following, but I think it's true. Earlier in this post I said that signal averaging (multi-scanning) won't suppress pseudo noise (e.g. grain). However, I think in certain cases it could. In particular, if there is grain aliasing, then multi-scanning with small random shifts in the image position, combined with upsampling the separate images during post-scan processing and then aligning and averaging them, might be a way to reduce grain aliasing to the point where only the true grain pattern is evident.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
There is another interesting and subtle effect that can come in when the noise (such as grain) is not high enough to suppress banding completely but also not low enough to ignore altogether; call it the "tweener region". What can happen there is that instead of true bands you get a sort of semi-banding: areas where the gradient is flat and noise-free (that is, apparently grain-free), or almost so, separated by regions with a fairly steep gradient, and the noise (e.g. a grain-like pattern) shows up in those steep-gradient regions.

In other words, if there were no grain in the original image you would see a perfect stair-step pattern, but with a tiny bit of grain you will see something similar with fuzzy transitions between the steps, and the grain shows up in those steep transition regions (I think it may look like an accentuated grain pattern). I think there might be a little of this effect in the photo of the skyline that I posted. Look carefully at the edges between the bands: can you see that the boundaries are not sharp transitions, but kind of fuzzy/jumpy? You can see this most clearly if you download the image and zoom in on the transition regions between the bands.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
By the way, in the example I discussed above (the tmax scan) I noted that the standard deviation was 1.64 relative units (with 1.00 being the digitization step size) and that a value of this magnitude would be more than enough to suppress banding. Actually even if the standard deviation of the noise (e.g. grain) is as low as 0.33 relative to the digitization step size that would still be enough grain to do a pretty good job of reducing banding to the point where it would probably not be noticeable. That's about five times less grain than in the tmax scan I presented above, so as a practical matter, even a very low grain image is probably going to still have enough grain to reduce banding to a more or less non-noticeable level.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
There seem to be so many "except fors" that it would seem to be better to just scan 16 bits and be done with it.
Nothing wrong with doing that, except for the extra storage space it requires.

It might be a good idea to do some test runs with some negatives that have some large areas with smooth gradients. Scan them twice, once in 8 bit mode and once in 16 bit mode. Do some extreme manipulations to each file, but be sure when you load the 8 bit file to convert it to 16 bit before doing the manipulations. Do the exact same manipulations to each of the two photos. Do they look the same or different after doing the manipulations?
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
65,536 discrete values vs 256 per pixel. alanrockwood's analysis isn't wrong, but resolution also plays a factor. 60 DPI vs 300 DPI will produce very different appearing images.
Can you explain what you mean, preferably with links to examples to look at?
 

wiltw

Subscriber
Joined
Oct 4, 2008
Messages
6,445
Location
SF Bay area
Format
Multi Format
Wouldn't the 16-bit vs 8-bit be totally dependent upon the OUTPUT FILE type selection? Inherently, TIFF = 16-bit, while JPG = 8-bit
 