Silverfast 8: 16bit grey scale?


grat

Member
Joined
May 8, 2020
Messages
2,044
Location
Gainesville, FL
Format
Multi Format
Not in a fashion you would find acceptable-- I simply don't have the math or knowledge. However, my experience with "digital" images, which goes all the way back to RTTY art (later ASCII art), is that more resolution makes banding less apparent. More bits will also make banding less apparent. A decent dithering algorithm (which grain does a nice job of simulating) can also reduce the effect of banding.
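
To make the dithering point concrete, here is a small sketch (not from the post; the ramp, the 4x4 Bayer matrix and the quantization step are my own toy choices) showing how an ordered-dither pattern keeps local averages close to a smooth ramp even after coarse quantization:

```python
# Toy comparison: hard quantization of a smooth ramp vs. ordered (Bayer) dithering.
import numpy as np

h, w = 64, 256
ramp = np.tile(np.linspace(0.0, 224.0, w), (h, 1))    # smooth horizontal ramp

step = 32.0                                            # coarse quantization step
banded = np.floor(ramp / step) * step                  # hard quantization -> visible bands

bayer4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0           # classic 4x4 ordered-dither matrix
offsets = np.tile(bayer4, (h // 4, w // 4))            # tile the pattern over the image
dithered = np.floor(ramp / step + offsets) * step      # dither, then quantize

# Average distance of the column means from the true ramp: about half a
# quantization step for the hard quantization, roughly an eighth of a step
# once the ordered dither is applied.
print(np.abs(banded.mean(axis=0) - ramp.mean(axis=0)).mean())
print(np.abs(dithered.mean(axis=0) - ramp.mean(axis=0)).mean())
```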

As for my statement that 60 dpi vs 300 dpi will produce different images, well, that's approaching axiomatic in nature-- Most will tell you that (digital) prints need to be at least 300 DPI for a standard viewing distance. You can get away with lower resolution for larger viewing distances, but still-- 60 DPI is pretty weak for current technology, unless you're doing some form of halftone.
 
Joined
Aug 29, 2017
Messages
9,703
Location
New Jersey formerly NYC
Format
Multi Format
What effect does reducing resolution, such as when you post on social media, have on whether banding becomes more apparent? Does starting with 8 vs 16 bits matter? When I post, it's all converted to sRGB for the internet, which I believe is 8 bit.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,189
Format
Multi Format
I mentioned earlier that a standard deviation of the noise due to grain of 0.33 is enough to eliminate banding in perceptual terms. (This value of 0.33 is scaled with a digitization step size of 1.) Even a standard deviation of 0.25 will be enough to pretty much eliminate banding as a practical matter. I am posting a figure of a simulation I did of a smooth linear gradient with noise added having a standard deviation of 0.25. I then digitized both the smooth gradient, producing a stair-step pattern (i.e. banding), and the noisy gradient. I smoothed all of the results for presentation purposes. The smoothing wasn't enough to eliminate the stair-step pattern in the digitization of the smooth gradient; it simply rounded the steps slightly near the step transitions. As you can see, in the noisy signal the stair-step pattern is gone, leaving a somewhat noisy but otherwise nearly smooth gradient. Other than the noise, there is just a hint that the gradient is not perfectly straight, but because there are no hard transitions and only relatively small changes in the slope, the deviation from a perfectly smooth gradient is probably not enough for anyone to notice.

The smooth noiseless gradient is in red. The stair step pattern resulting from digitization of the smooth gradient is in red, and the digitization of noisy gradient is in blue. (Reminder, I smoothed each curve by the same amount, which is why the noise does not appear as one-bit increments in the noisy curve.) Please note, there are 1000 points for each step in this simulation.
digitization with grain.jpg
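
For readers who want to reproduce something along these lines, here is a rough numpy sketch of the same kind of simulation (the number of steps, the boxcar smoothing width, and the error measure are my own choices; the post's figure was presumably produced differently):

```python
# Smooth gradient vs. gradient with sigma = 0.25 noise, both quantized to a
# step size of 1, then lightly smoothed for presentation.
import numpy as np

rng = np.random.default_rng(0)
n_steps, pts_per_step = 20, 1000                  # "1000 points for each step"
x = np.linspace(0, n_steps, n_steps * pts_per_step)

smooth = x                                        # ideal gradient, step size = 1
noisy = x + rng.normal(0.0, 0.25, x.size)         # add grain-like noise, sigma = 0.25

banded_digitized = np.round(smooth)               # stair-step pattern (banding)
noisy_digitized = np.round(noisy)                 # dithered by the noise

def boxcar(y, width=301):                         # simple smoothing for presentation
    return np.convolve(y, np.ones(width) / width, mode="same")

smoothed_noisy = boxcar(noisy_digitized)
smoothed_banded = boxcar(banded_digitized)

# Away from the array edges, the smoothed noisy digitization deviates from the
# ideal gradient by only a small fraction of a step, while the smoothed
# noiseless digitization still deviates by about a third of a step at the
# stair edges.
print("noisy, smoothed :", np.abs(smoothed_noisy[1000:-1000] - smooth[1000:-1000]).max())
print("banded, smoothed:", np.abs(smoothed_banded[1000:-1000] - smooth[1000:-1000]).max())
```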
 

Alan Johnson

Subscriber
Joined
Nov 16, 2004
Messages
3,309
@alanrockwood, I have been scanning pictorial contrast B/W negatives on microfilm with Silverfast SE8.8 48->24 bit color and editing the files with Photoshop Elements.
Is scanning B/W 48->24 bit pointless?
I have only ever got banding trying to edit pics with both reflected sunlight on the sea [banding in the sky] and dark areas in the same shot. Thanks.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,189
Format
Multi Format
@alanrockwood, I have been scanning pictorial contrast B/W negatives on microfilm with Silverfast SE8.8 48->24 bit color and editing the files with Photoshop Elements.
Is scanning B/W 48->24 bit pointless?
I have only ever got banding trying to edit pics with both reflected sunlight on the sea [banding in the sky] and dark areas in the same shot. Thanks.
I don't know about microfilm. The grain in microfilm is extremely fine. My fear is that it might be too fine to provide effective dithering, in which case banding might be a possibility. It would require testing, but it looks like your results already effectively provide a test, and it seems that banding is possible with that film. However, I am told that photoshop elements only works in 8 bit mode, not 16 bit mode, so you can't convert the files to 16 bit mode to edit them. That provides a confounding issue, so it is still probably up in the air whether 8 bit scanning is sufficient for that film (24 bit color is 8 bit in each channel, as you probably already know.)

Do you have GIMP? It is free and I believe it allows you to edit in 16 bit mode, so that would allow you to test the 16 bit workflow when editing.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,189
Format
Multi Format
I've done a little more theoretical work on this problem, summarized in this graph. It shows how noise (including grain) affects the possibility of banding. This assumes that the image is grainy, and that the other sources of noise of whatever source are lumped together with grain in the scan.

The noise, including grain, is characterized by standard deviation (denoted by sigma in the graph), and it assumes a digitization step size of one. For example, if sigma is 0.1 it means that the standard deviation of the noise is equal to one tenth of the digitization step size.

The values on the x and y axes are averaged values, e.g. averaged over the grain.

As you can see, when sigma = 0.1 the steps are significantly rounded in going from one step to another. Banding will be seen, but the edges between the steps will be softened rather than abrupt.

For sigma = 0.2 the steps are no longer level between step transitions. In other words, the gradient is starting to smooth out everywhere, but the gradient is still a bit wavy.

For sigma = 0.3 the waviness in the gradient has mostly gone away. It seems likely to me that a viewer will see this as a smooth gradient and likely will not notice that it is not perfectly smooth.

For sigma = 0.4 and higher the gradient has become, for all practical purposes, perfectly smooth. There are actually two lines in the plot for sigma >= 0.4, namely sigma = 0.4 and sigma = 0.5. They overlap on the plot to within less than the width of the line.

To put it in a nutshell, by the time sigma = 0.4 relative to the digitization step size (which is only a relatively small amount of grain relative to the digitization step size) there will be no perceptible banding in the image. Thus, if there is grain visible in an 8 bit scan it will be enough to suppress the possibility of banding. To assure that this holds after extreme image manipulation one should be sure to convert an 8 bit scanned image to 16 bit before manipulating the image in photoshop or other image processing software.

By the way, it is my understanding that when converting in the other direction (16 bit to 8 bit) photoshop normally adds a little bit of noise (dither) in order to suppress the possibility of banding resulting from that downshift in the bit depth. However, that doesn't concern us here, since we are discussing a different process.

Also, what I am discussing here are images that start with some noise (such as grain) or in which noise is added during the creation of the digitized version of an image (such as electronic noise in a scanner). It may or may not apply to images created by other means, such as a software-based image creation, or possibly a natural scene acquired using a low-noise digital camera.

Effect of noise including grain on possibility of  banding.jpg
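
A minimal numeric check of the kind of curves described above can be done by Monte Carlo (the grid, the trial count, and the deviation measure are my own choices, not how the original graph was made):

```python
# For each noise sigma (in digitization steps), estimate the grain-averaged
# digitized value as a function of the true value, E[round(x + noise)].
import numpy as np

rng = np.random.default_rng(1)

def averaged_response(x, sigma, n_trials=20000):
    noise = rng.normal(0.0, sigma, size=(n_trials, 1))
    return np.round(x[None, :] + noise).mean(axis=0)

x = np.linspace(0.0, 3.0, 301)                     # three digitization steps
for sigma in (0.1, 0.2, 0.3, 0.4, 0.5):
    y = averaged_response(x, sigma)
    # Maximum deviation of the averaged response from a perfect straight line;
    # it shrinks rapidly with sigma, which is the waviness going away.
    print(sigma, np.max(np.abs(y - x)))
```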
 

Adrian Bacon

Subscriber
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
As for my statement that 60 dpi vs 300 dpi will produce different images, well, that's approaching axiomatic in nature-- Most will tell you that (digital) prints need to be at least 300 DPI for a standard viewing distance. You can get away with lower resolution for larger viewing distances, but still-- 60 DPI is pretty weak for current technology, unless you're doing some form of halftone.

Most 60-70" 4K HDTVs have a pixel density in the 60-70 ppi range and pretty much nobody says 4K TVs don't look sharp, and that's with HD content so it's actually even less effective PPI than that. True, if you want to get close, you want to get more pixels per inch, but a 65" diagonal picture is way bigger than what most people can realistically print without spending a really large amount of cash to even produce physical outputs that big. Strangely, the resolution you need to make a super sharp 11x14 print that you can hold in your hands is about the same amount of MP that looks great on a big 4K TV.

Don't get me wrong, I'm generally all for more resolution, however, there is definitely a point of diminishing returns when it comes to pixel densities, and more often than not, people get caught up in the "more, bigger, better" without realizing just how few pixels are actually needed.
 

grat

Member
Joined
May 8, 2020
Messages
2,044
Location
Gainesville, FL
Format
Multi Format
Most 60-70" 4K HDTVs have a pixel density in the 60-70 ppi range and pretty much nobody says 4K TVs don't look sharp, and that's with HD content so it's actually even less effective PPI than that. True, if you want to get close, you want to get more pixels per inch, but a 65" diagonal picture is way bigger than what most people can realistically print without spending a really large amount of cash to even produce physical outputs that big. Strangely, the resolution you need to make a super sharp 11x14 print that you can hold in your hands is about the same amount of MP that looks great on a big 4K TV.

Don't get me wrong, I'm generally all for more resolution, however, there is definitely a point of diminishing returns when it comes to pixel densities, and more often than not, people get caught up in the "more, bigger, better" without realizing just how few pixels are actually needed.

Absolutely agree-- but again, with the 4K TV, you're not looking at it that close-- standard viewing distance is supposed to be what, 12 feet?

Also, there's an argument that 4K TV's are useless under 55", because the human eye supposedly can't resolve the difference between 4k and 1080p at 12 feet. And I heard much the same arguments against 1080P units 10 years earlier. :smile:

Subjectively, my 49" 4K TV looks better than my 46" 1080P TV did. Could be it's made out of sugar and my brain can't really tell, though.
 

Adrian Bacon

Subscriber
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
Absolutely agree-- but again, with the 4K TV, you're not looking at it that close-- standard viewing distance is supposed to be what, 12 feet?

Also, there's an argument that 4K TV's are useless under 55", because the human eye supposedly can't resolve the difference between 4k and 1080p at 12 feet. And I heard much the same arguments against 1080P units 10 years earlier. :smile:

Subjectively, my 49" 4K TV looks better than my 46" 1080P TV did. Could be it's made out of sugar and my brain can't really tell, though.

I've had a computer hooked up to my 65 inch 4K TV for zoom meetings and such since covid hit and routinely use it as a computer display at the ~5 foot distance and can't see individual pixels. A while back, just for my own edification, I went and did some basic testing where I shot a 45MP photo (effectively 8K resolution in TV terms) and made a 1080p version of the photo, a 4K version of the photo, and then put all three versions into a full screen looping slide show with random sequencing, started the slideshow, and walked away for a few minutes, then came back, sat at the usual sitting distance when using it as a computer display and watched the slide show. I could see a difference when it went from the HD to 4K/8K version, but had to be looking for it, and the difference wasn't nearly as large as I thought it would be. I couldn't see any difference between the switches between the 4K and 8K version (which I sort of expected).

I then took a lower resolution 20MP camera and did the same exercise, and again, could see a difference between the HD and 4K versions, but had to be looking for it.

I then did a variation of that where from the 20MP photo, I made an HD version, then progressively larger versions (adding 480 pixels on the long edge) until I was at the 4K resolution and did a slideshow exercise of that. I had a really hard time even seeing differences between them, much less making out the 4K version. Every once in a while the slideshow would sequence from 1080 to 4K (or vice versa) and I'd register the change, but I really struggled to tell otherwise.

The takeaway? If going from 720 or 1080 to 4K, yes, there's more visible detail, but, in practice, the difference just isn't as large as what most people would think it would be, especially if the 1080p picture is really high fidelity to begin with, AND, the 1080p picture displayed at 4K all by itself with nothing else to compare it to looked very good already. If I didn't know it wasn't 4K, I wouldn't even complain about it. I'd say at common TV sizes and viewing distances, 4K display is definitely in the zone of diminishing returns. I suspect a lot of the 4K hoopla (and now 8K) was actually just acquiring at 4K (or bigger) and ending up with what is effectively a significantly higher fidelity 1080p picture. I think a lot of people would be absolutely stunned and shocked at just how good a maximum fidelity 1080p picture actually looks, even being displayed really big and viewed from 5 or 6 feet away. I also don't think we're acquiring at high enough resolutions to say we're even seeing a maximum fidelity 4K picture yet, so even though I think 4K display is diminishing returns, that may change once acquisition routinely delivers maximum fidelity 4K display.

Relating this to prints, more pixels is generally better, but if your source has more than 8MP, you can effectively make whatever print size you want and have totally acceptable results at reasonable viewing distances.

Getting back to the 8/16 bit discussion, the same concept applies. Capture with as many bits as you can. Saving 8 bits is acceptable if sourced from a higher bit depth and you're not planning to do much post processing. If you are planning to do a lot of post processing, then keep as many bits as you can until you go to output.
 
Joined
Aug 29, 2017
Messages
9,703
Location
New Jersey formerly NYC
Format
Multi Format
I have a 75" 4K TV and normally sit 14' from it. I've made 2k (1920x1080) slide shows vs the same pictures at 4K ( resolution 3840x2160). You really can;t see the difference between the two at 14' but can at about half that distance.

Also, when saving jpegs, use very high-quality saves to reduce artifacts. When I make a slide show converted into 4K video, I resize each picture to 4K (3840x2160) and save at the highest quality before integrating it into the video. If you go right up to the screen, you can see the individual pixel elements even at 4K.

If you want to compare 2k vs 4K on your smart TV here are two. Make sure you set the bandwidth to 4K or 2K as applicable before viewing on your TV.
https://www.youtube.com/channel/UCGsByP1B3q1EG68f4Yr2AhQ
4K Regency Muscle and Antique Car Show video
2K Regency Men's club at Middlesex Fire Academy.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,189
Format
Multi Format
I found a website that demonstrates what I have been talking about in a slightly different way. It relates to how scanning with a low bit depth (e.g. 8 bit) can still be just fine and won't result in banding if there is some noise (including grain) in the image.

Here's the link.

https://homes.psd.uchicago.edu/~ejmartin/pix/20d/tests/noise/noise-p3.html

First a little bit of explanation or background info. In my most recent post I presented the concept in the form of a graph. (For presentation purposes noise is removed in the graphs to show the underlying gradient or stair-step structure, without the noise showing up in the figure.) The link above presents the concept in a more visual form.

The context of the link is slightly different. The author of that web page presents it in the context of how noise, such as sensor noise, affects an image captured by a digital camera, whereas my discussion is presented in terms of how grain affects a scanned image. However, the underlying principles are the same.

Here's a copy of the first image from the link. It is a smooth gradient artificially generated in photoshop in 8 bit mode. Note that the gradient appears smooth and noiseless.

Capture 8 bit smooth gradient.PNG


The histogram of the image is shown in the inset. Note that the histogram is not perfectly smooth. This is probably because photoshop put in a tiny bit of noise when it generated the gradient so that banding wouldn't show up if extreme image manipulation were to be performed later. For our purposes we can ignore that small detail and assume that the histogram is perfectly smooth.

Here's the same image with some noise added. For comparison with my graph, the amount of noise would be equivalent to sigma=1.46. I didn't include a trace in my graph for a sigma that large (I only went up to sigma=0.5), so this amount of noise is actually more than I simulated for that graph. However, it's comparable to the sigma that I extracted from a Tmax scan discussed in an earlier post.

Capture 8 bit smooth gradient with added noise.PNG



The next image is the same as the last one except that the bit depth has been decreased from 8 bits to 5 bits. This was done by truncation of the lower order 3 bits. This is not quite the same as rounding to the nearest step, but for our purposes we can ignore that subtle issue.

Capture 8 bit smooth gradient with added noise before 5 bit truncation.PNG



The two images are, for all practical purposes, indistinguishable, in accord with the point I have been making that if grain is visible in an 8 bit scan then there is virtually nothing to be gained by going to a 16 bit scan.
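
For anyone who wants to poke at the truncation step themselves, here is a small numpy sketch (my own synthetic gradient and noise level, roughly one and a half 5-bit steps of noise, not the images from the linked page):

```python
# Truncate the three low-order bits of an 8-bit gradient, with and without
# noise added first, and compare how the column averages behave.
import numpy as np

rng = np.random.default_rng(2)
rows = 2048                                              # tall frame so column averages settle
gradient = np.tile(np.linspace(0, 255, 256), (rows, 1))  # smooth ramp, one 8-bit code per column

def drop_low_3_bits(img):
    return img.astype(np.uint8) & 0b11111000             # keep only the top 5 bits

banded = drop_low_3_bits(gradient)                       # 8-code plateaus -> banding
noisy = np.clip(gradient + rng.normal(0, 12.0, gradient.shape), 0, 255)
dithered = drop_low_3_bits(noisy)                        # same truncation, noise added first

# Largest jump between adjacent column averages: a hard 8-code step at every
# band edge for the noiseless truncation, no comparable jumps once the noise
# has dithered the truncation.
print(np.abs(np.diff(banded.mean(axis=0))).max())
print(np.abs(np.diff(dithered.mean(axis=0))).max())
```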

The next image is a composite one showing what happens when the step size is increased, i.e. the digitization bit depth is decreased.

gradient256-composite-alt.gif




The top sub-image shows what happens when the smooth noiseless 8 bit gradient is converted to a 5 bit gradient by truncation of the lower three bits. This is similar to what would happen if you digitized a perfect gradient using a 5 bit word. Banding is strong. This is basically a repeat of information shown earlier in this post.

The next sub-image down shows what happens when noise is added before the conversion to a 5 bit word length. This is basically similar to what would happen if the gradient were digitized with a 5 bit word, and it is also essentially a repeat of what was shown earlier in the post.

The next sub-image shows what happens if the perfect gradient is digitized with a 3 bit word. There are fewer bands, but they are more distinct than the bands digitized with a 5 bit word.

The next sub-image shows what happens to the noisy version of the gradient after it is digitized with a 3 bit word. This one corresponds to a noise standard deviation of 0.32 relative to the step size of the 3 bit word. This is pretty close to one of the traces in the figure I posted in my previous post (sigma=0.3). Banding is still pretty much negligible, meaning that it is not evident to the casual observer. It's unlikely that it would even be noticed by a careful observer unless it were compared to a better result in a direct A to B comparison or unless one already knew what to look for. A careful comparison reveals that the noise is a little higher in the parts of the gradient where step transitions would occur when digitizing the perfect gradient, and a bit lower in between those regions. My impression is that the noise might be a tiny bit higher overall, but I couldn't say for sure.

Finally, we see the results of digitization with a 2 bit word. Banding is very strong and broad in the second to last sub-image, which was derived from the noiseless gradient. Banding is also strong in the last sub-image, which was digitized from the noisy gradient. However, the transitions between the bands are softened by fairly steep but noisy and fairly narrow gradient regions. This one is comparable to sigma=0.14, which is intermediate between two of the traces in the figure in my last post.

Anyway, hopefully this gives everyone a better intuitive feel for how noise (or grain) can provide a dithering function that suppresses banding. Please note that when scanning film it is not necessary to add noise to accomplish dithering. In most cases the noise will already be there through the combination of film grain, sensor noise, and electronic noise.

The bottom line is that in most cases (practically all cases, except perhaps for rare instances that are not typical, such as perhaps the scanning of images derived from microfilm processed to produce a pictorial quality result) 8 bit scanning will be sufficient to capture all of the significant information present in the image without the risk of banding. There is nothing wrong with scanning in 16 bits, except for doubling the file size, but for practical purposes there is nothing to be gained by scanning in 16 bits either.

By the way, earlier in this thread someone mentioned that a 16 bit scan might not be twice as big as an 8 bit scan if the TIFF files are compressed using lossless compression. I tested this idea by scanning the same image in 8 bit and 16 bit mode with or without compression. (It was in zip format for the compression.) The 8 bit compressed TIFF was, at 12.0 megabytes, a little smaller than the uncompressed TIFF, at 18.7 megabytes. However, the 16 bit scan actually expanded from 37.4 megabytes to 43.2 megabytes upon "compression".
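
The file-size result makes sense once you look at what lossless compression has to work with. Here is a toy illustration (my own synthetic data and plain zlib/DEFLATE rather than a real TIFF codec with predictors, so the exact ratios will differ from Silverfast's zip TIFFs): the same faint grain barely changes the stored 8-bit codes, so DEFLATE finds plenty of repetition there, whereas in the 16-bit rendering the low-order byte of nearly every pixel is effectively random, so roughly half of that file cannot shrink no matter what compressor is used.

```python
# Compress the raw bytes of an 8-bit and a 16-bit rendering of the same
# slightly grainy gradient and report the sizes.
import numpy as np
import zlib

rng = np.random.default_rng(3)
sky = np.tile(np.linspace(0.2, 0.8, 2048), (2048, 1))              # smooth synthetic gradient
noisy = np.clip(sky + rng.normal(0, 0.3 / 255, sky.shape), 0, 1)   # grain ~0.3 of an 8-bit step

as_8bit = np.round(noisy * 255).astype(np.uint8)
as_16bit = np.round(noisy * 65535).astype(np.uint16)

for name, arr in (("8-bit", as_8bit), ("16-bit", as_16bit)):
    raw = arr.tobytes()
    packed = zlib.compress(raw, 6)
    print(f"{name}: {len(raw) / 1e6:.1f} MB raw -> {len(packed) / 1e6:.1f} MB deflated")
```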

A while back I issued a kind of challenge. I don't mean "challenge" in a confrontational sense, but in the spirit of exploration. The challenge was to follow a certain workflow that starts with an 8 bit scan and see if banding can be produced. The workflow is this, followed in order: scan in 8 bit mode; read the file into an image processing program, such as photoshop; convert to a 16 bit image; perform extreme image manipulation; look for banding. I am still hoping someone will take up the challenge. The results could be very interesting.
 

Adrian Bacon

Subscriber
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
I have a 75" 4K TV and normally sit 14' from it. I've made 2k (1920x1080) slide shows vs the same pictures at 4K ( resolution 3840x2160). You really can;t see the difference between the two at 14' but can at about half that distance.

Also, when saving jpegs, use very high-quality saves to reduce artifacts. When I make a slide show converted into 4K video, I resize each picture to 4K (3840x2160) and save at the highest quality before integrating it into the video. If you go right up to the screen, you can see the individual pixel elements even at 4K.

If you want to compare 2k vs 4K on your smart TV here are two. Make sure you set the bandwidth to 4K or 2K as applicable before viewing on your TV.
https://www.youtube.com/channel/UCGsByP1B3q1EG68f4Yr2AhQ
4K Regency Muscle and Antique Car Show video
2K Regency Men's club at Middlesex Fire Academy.

Absolutely, the closer you sit, the easier it is to see the difference. My viewing distance is just past the "screen door" effect. If I sit closer, I can see the "screen door", but farther back just makes for a smaller picture. The thing that gets me (and what probably trips a lot of people up) is that on paper the difference between 1080p and 4K is literally 4x the number of pixels. Logically, you'd think the difference would be night and day when looking at it, but again, in practice, at typical viewing distances, it's just not as big of a difference as you'd think, and can be totally fubarred by stuff like what quality you saved the jpeg at, which you noted. There is benefit to having stuff at 4K, or in the 8+MP range, but going really high resolution (30+ MP) just gives you cropping ability; it's not going to significantly increase visible fidelity unless your output is super large and super high resolution and you're expecting people to look at it up close and personal.
 

MattKing

Moderator
Moderator
Joined
Apr 24, 2005
Messages
53,661
Location
Delta, BC Canada
Format
Medium Format
I'd be prepared to guess that 4K displays in the same market segment as their 1080 predecessors probably also offer other advances besides pixel counts - image processing, contrast, better "blacks".
So it would not be surprising if images look a bit better on the 4K version.
When it comes to prints, a 3840 x 2160 file really only reaches about 12.8" x 7.2" if you are aiming for 300 dpi, so that highlights how important the presentation choice is.
 

Adrian Bacon

Subscriber
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
When it comes to prints, a 3840 x 2160 file really only reaches about 12.8" x 7.2" if you are aiming for 300 dpi, so that highlights how important the presentation choice is.

Yes, and at the same time, meh... I can print up to 600ppi on my printer with a maximum of 17x22 paper size, and I almost never print more than 240ppi simply because I can't see the difference without a magnifying glass (and I know what to look for), and even then, I notice the paper texture before the resolution printed onto it, and the difference between 240ppi and 300ppi is nearly nothing when printed on paper.

3840x2160 at 240ppi makes a 9x16 inch print. If you wanted to do 2x3 aspect, that'd be 3840x2560, or a very comfortable 10x15 inch print at 240ppi, and you're still less than 10MP. A nice 12x18 print on 13x19 paper is 4320x2880 at 240ppi, and we're barely over 12MP. If we knew people probably wouldn't get closer than 3-4 feet we could easily drop down to 120ppi and net a 24x36 print from the same 12MP and not know any better.

Heck, I've seen people ohh and ahh over prints I made for other people where the image was actually a small 4x6 print that they printed out from their phone of some picture that they took while on vacation somewhere with some crappy digital camera, and they lost the digital version of it, but had the physical print in their family album. They bring it in from the family album, have me scan it in and enlarge it so they can hang it on their wall as a nice big cropped 12x12 or 16x16. At that size, I'd be shocked if there was actually more pixel density than a TV, and yet, it's hanging on their wall being looked at by everyone every time they're in that room and not once does anyone go "I think that could be sharper".
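
For reference, the print-size arithmetic above as a tiny helper:

```python
# Print dimensions in inches for a given pixel count and printing resolution.
def print_size(px_wide, px_high, ppi):
    return round(px_wide / ppi, 1), round(px_high / ppi, 1)

print(print_size(3840, 2160, 300))   # (12.8, 7.2)  -> the ~13 x 7 inch figure at 300 ppi
print(print_size(3840, 2160, 240))   # (16.0, 9.0)  -> the 9x16 inch print at 240 ppi
print(print_size(4320, 2880, 240))   # (18.0, 12.0) -> the 12x18 print on 13x19 paper
print(print_size(4320, 2880, 120))   # (36.0, 24.0) -> the 24x36 print at 120 ppi
```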

We creators suffer from a sickness where we tend to get caught up in the weeds and miss the forest for the trees when it comes to technical details. I can't tell you how many times I've seen some YouTube person or internet person say some absurdly high resolution is the minimum they have to have because they print their work and anything less just isn't sufficient and just won't work. Really? I'll betcha that person wouldn't be able to tell the difference in a real blind test with actual physical prints, but it is funny to watch them zoom in on their display and exclaim "wow! look how much more detail is there!". Yes, there is, but you just did the digital equivalent of inspecting a print with a really high power magnifying glass. How about you try to see a difference without doing that? Sigh.... Yes, everybody can and should determine their own minimums, just be aware that the bar to get into "acceptable" territory is *way lower* than everybody thinks it is.
 

MattKing

Moderator
Moderator
Joined
Apr 24, 2005
Messages
53,661
Location
Delta, BC Canada
Format
Medium Format
I certainly understand where you are coming from there.
I've made some very satisfying 11.5" x 15.5" prints from jpegs straight out of the processor of my family's M 4/3 DSLR - no resizing, and 300 dpi for the lab (I prefer RA4 prints).
I'd like more than the 16 MP sensor, but only because sometimes I crop, and sometimes I'd prefer the option of a print 15.5" on the short dimension, without having to worry about a potential shortage of pixels.
But I don't worry about it.
It does make optical printing from a film negative seem wonderfully simple though.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,189
Format
Multi Format
Here's a bit more on the 8 bit/16 bit discussion.

In my scanner I selected a region of almost featureless sky in a negative and scanned it. I say "almost featureless" because there is a slight gradient in the sky. There is also a fair amount of dust on the negative, but that's not important for this demonstration. I scanned in 8 bit storage mode.

I loaded it into photoline for manipulation. First I did an extreme gaussian blur followed by an extreme histogram correction. The extreme blur was to smooth out the grain so that it resembled an 8 bit grainless original, and the extreme histogram was intended to produce banding. Here's the result.

cropped 8 bit sky extreme blur then extreme  manipulation.jpg


As you can see, I succeeded in producing banding, which is what I thought would happen.

Next is the result of loading the same 8 bit file into photoline, converting to 16 bit, and then doing exactly the same thing as I did to the first image. Here's the result.

cropped 8 bit sky converted to 16 bit then extreme blur then extreme  manipulation.jpg


As you can see, there is no banding. This demonstrates that if you have enough grain in the original image (and it doesn't take much grain) then scanning in 8 bit mode will not produce banding, provided that you are careful to convert the 8 bit scan to 16 bits before performing extreme image manipulation.

The next image is the same original 8 bit scan followed by conversion to 16 bits and then the same histogram manipulation as the other images. (There was no smoothing applied to this image.) The intent is to illustrate grain in the image. When I say "extreme histogram manipulation" I mean that I sliced out a 1% section from the histogram (81% to 82%), which I think would be considered extreme manipulation by any reasonable standard. The extreme histogram manipulation was the same for all images in this post. Note the complete absence of even a hint of banding, even under this extreme degree of manipulation.

cropped 8 bit sky converted to 16 bit then extreme manipulation.jpg
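
For anyone who wants to try the same kind of workflow numerically without a scanner, here is a rough numpy/SciPy sketch along the same lines (my own synthetic "sky", grain level, blur radius, and histogram slice; a stand-in for the photoline steps, not the author's actual files):

```python
# An 8-bit "scan" of a slight gradient with grain is blurred heavily and then
# given an extreme histogram stretch, once with the intermediate held at 8
# bits and once at 16 bits.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)
sky = np.tile(np.linspace(0.81, 0.82, 1024), (1024, 1))          # very slight gradient
grain = rng.normal(0, 0.004, sky.shape)                          # about one 8-bit step of grain
scan_8bit = np.round(np.clip(sky + grain, 0, 1) * 255)           # the 8-bit "scan"

def blur_then_slice(img01, working_levels):
    """Heavy blur, requantize to the working bit depth, then stretch the
    0.81-0.82 slice of the histogram to the full output range."""
    blurred = gaussian_filter(img01, sigma=20)
    blurred = np.round(blurred * working_levels) / working_levels
    return np.clip((blurred - 0.81) / 0.01, 0, 1)

for bits in (8, 16):
    out = blur_then_slice(scan_8bit / 255.0, 2 ** bits - 1)
    # Distinct output codes after the extreme stretch: a handful (hard bands)
    # when the blurred intermediate is held at 8 bits, essentially the full
    # 0-255 range when it is held at 16 bits.
    print(bits, "bit intermediate:", np.unique(np.round(out * 255)).size, "distinct levels")
```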



So hopefully this settles the issue of whether 8 bit scans are good enough, at least in cases where grain shows up in the scan.

One issue I haven't addressed in the discussion so far is the issue of dynamic range. This is a tricky one that involves some subtle issues, and it interacts to some degree with the number of bits in the digitizer and the presence of noise in the image. However, I'm going to make a rather bold statement. Dynamic range is not highly dependent on the bit depth of the analog to digital converter. In somewhat oversimplified terms, even a one bit A/D converter is adequate to capture the full dynamic range of a negative, provided that the footprint of the pixel window is small enough. This is oversimplified in the sense that it assumes that the individual silver particles in the negative are large relative to the pixel window footprint and that a silver particle is 100% opaque. Relaxing these assumptions is possible to some degree without invalidating the concept that a low bit depth digitizer is capable of extreme dynamic range. However, this also depends on the concept that grain or other forms of noise are present in the image and/or sensor and/or the rest of the electronics.
 

Adrian Bacon

Subscriber
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
Dynamic range is not highly dependent on the bit depth of the analog to digital converter

Ahhh… NO. You can capture a little more DR than the bit depth of your AD converter through what is effectively dithering in the source, which you've been exploring here in this thread, but you won't be capturing 16 stops of linear DR with an 8 bit AD. A 16 bit AD has trouble actually capturing a full 16 stops because of noise; an 8 bit AD with the same 16 stop signal just bottoms out. The way AD converters are designed is from maximum signal down, not the other way around, so if both 8 and 16 bit AD converters are designed to handle 1.0-0.0 volts, then at 1 volt the 8 bit AD will output 255 and the 16 bit AD will output 65535; cut the voltage in half and you get half the numeric value. Do that more than 8 times and the 8 bit AD just keeps outputting a zero, while the 16 bit AD keeps outputting a smaller numeric value until it also bottoms out.

Now you can add dithering in the form of analog noise in the source being sampled that moves some of that signal up so that it registers numeric values (kind of like a crude analog tone mapping), but that will only get you so far before the noise overwhelms the signal and all you're capturing is the noise, thus reducing your total captured DR. A little bit gives you a little extra DR, but it's self defeating above a certain point.
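
A small numeric sketch of the bottoming-out behaviour described above, using ideal noiseless converters (itself a simplification):

```python
# Quantize a signal that halves each "stop" with an 8-bit and a 16-bit
# converter, both scaled to 1.0 V full scale, and see where each bottoms out.
def adc(volts, bits):
    full_scale = 2 ** bits - 1
    return round(volts * full_scale)          # ideal converter, no noise

v = 1.0
for stop in range(17):
    print(f"stop {stop:2d}: 8-bit -> {adc(v, 8):5d}   16-bit -> {adc(v, 16):5d}")
    v /= 2
```

With these assumptions the 8-bit converter reads zero from about the ninth halving on, while the 16-bit converter still resolves the signal down to the sixteenth.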
 

Adrian Bacon

Subscriber
Joined
Oct 18, 2016
Messages
2,086
Location
Petaluma, CA.
Format
Multi Format
without having to worry about a potential shortage of pixels.

Once it’s matted and framed and hanging on the wall, I very much doubt you could actually see the difference if your source was 16MP. Once you’re more than a couple feet away, the whole 240-300ppi thing goes right out the window.
 
Joined
Aug 29, 2017
Messages
9,703
Location
New Jersey formerly NYC
Format
Multi Format
Absolutely, the closer you sit, the easier it is to see the difference. My viewing distance is just past the "screen door" effect. If I sit closer, I can see the "screen door", but farther back just makes for a smaller picture. The thing that gets me (and what probably trips a lot of people up) is that on paper the difference between 1080p and 4K is literally 4x the number of pixels. Logically, you'd think the difference would be night and day when looking at it, but again, in practice, at typical viewing distances, it's just not as big of a difference as you'd think, and can be totally fubarred by stuff like what quality you saved the jpeg at, which you noted. There is benefit to having stuff at 4K, or in the 8+MP range, but going really high resolution (30+ MP) just gives you cropping ability; it's not going to significantly increase visible fidelity unless your output is super large and super high resolution and you're expecting people to look at it up close and personal.
My digital camera shoots stills at 19MP. So when I reduce the image to 4K for slide shows, it allows me to crop around 11MP from the original image file, a very handy feature. Of course, when I throw in video clips, those are at 4K. Interestingly, you can crop 4K video clips as well and not see the difference sitting far back at normal viewing distance.

I tried comparing video clips shot at 1080 (2K) vs 2160 (4K). And like the stills, you really can't notice the difference at 14' but start to see some artifacts at about half that distance or less. Again, bit rate affects artifacts, so it's better to "film" at 100 Mbps rather than, let's say, 60 Mbps. Also, using higher bit depths when producing the video file reduces artifacts as well.

All these things are hardly noticeable unless you are comparing one against the other at the same time on a split-screen. Of course, no one watches shows like that. So whatever the resolution, 2k vs 4k, most viewers quickly ignore seeming differences and just watch the show. It's only us who are pixel peeping.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,189
Format
Multi Format
Ahhh… NO. You can capture a little more DR than the bit depth of your AD converter through what is effectively dithering in the source, which you've been exploring here in this thread, but you won't be capturing 16 stops of linear DR with an 8 bit AD. A 16 bit AD has trouble actually capturing a full 16 stops because of noise; an 8 bit AD with the same 16 stop signal just bottoms out. The way AD converters are designed is from maximum signal down, not the other way around, so if both 8 and 16 bit AD converters are designed to handle 1.0-0.0 volts, then at 1 volt the 8 bit AD will output 255 and the 16 bit AD will output 65535; cut the voltage in half and you get half the numeric value. Do that more than 8 times and the 8 bit AD just keeps outputting a zero, while the 16 bit AD keeps outputting a smaller numeric value until it also bottoms out. Now you can add dithering in the form of analog noise in the source being sampled that moves some of that signal up so that it registers numeric values (kind of like a crude analog tone mapping), but that will only get you so far before the noise overwhelms the signal and all you're capturing is the noise, thus reducing your total captured DR. A little bit gives you a little extra DR, but it's self defeating above a certain point.

Implicit in what I wrote is that the number of bits sufficient to adequately capture a signal depends on the footprint of the capture window. The footprint may be measured in time increments or area increments, depending on the application. For example, the sampling footprint could be area increments if the problem is film scanning. The sampling footprint could be time increments if one is doing audio recording. There is always an increase in noise if the sampling footprint becomes smaller. Therefore, the smaller the sampling footprint is the fewer the bits needed to capture the signal.

I spent a fair amount of my career dealing with instrumentation that acquires its signal through detectors that generate pulses, sometimes electron multipliers (to detect ions in a mass spectrometer) and sometimes photomultipliers (to detect photons in an optical experiment). From the point of view of signal acquisition there's no difference between electron multipliers and photomultipliers, so let's frame most of the discussion in terms of photomultipliers unless otherwise noted.

When a photon hits a photomultiplier it generates a pulse of electrons at the output end of the photomultiplier. The pulse is typically a few billionths of a second, or even less than a billionth of a second in some devices, and the pulse typically contains something like a million electrons. (It can be more in some devices or less in others.) Those signal levels are low enough that photons can be counted individually, as long as the light flux hitting the detector is less than, let us say, about a hundred million photons per second. If the acquisition footprint is few billionths of a second then all that is needed to acquire all of the data in the signal is one bit. If the intent is to aggregate the data into larger time increments, let us say time increments of ten billionths of a second, then one can sum the individual counts, and in that case a larger word size is needed to hold the summed result. But the word size can still be pretty small if the aggregated time windows are small. This is not a contrived example. The numbers given are roughly to the scale of what one would see in a time of flight mass spectrometer, and some time of flight mass spectrometers acquire signal through a counting mode of data acquisition.

In the case of film scanning, there are actually scanners that use photomultipliers to detect light, such as drum scanners. I don't know if any of them use photon counting to acquire the signal. However, if they are doing pulse counting then at the lowest level they are effectively doing one-bit conversion of the analog signal to a digital form. This would be occurring on a nanosecond time scale. Then they would be aggregating those counts into larger time windows (which would map onto larger spatial windows on the film plane), and for those larger time windows more bits would be needed, and the bigger the aggregated time windows are the more bits are needed to hold the counts. However, at the lowest level, a single bit is all that is needed, or in other words, that's all the dynamic range that is needed.
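
A sketch of the aggregation idea described above, with illustrative numbers only: at the lowest level each ~1 ns sample is a single bit ("pulse" or "no pulse"), and it is the summing of those samples into larger windows that demands a wider word, at roughly one extra bit per doubling of the window.

```python
# Bits needed to hold the counts from aggregating 1-bit, 1 ns samples.
import math

for window_ns in (1, 10, 100, 1_000, 10_000, 100_000):
    max_count = window_ns                  # at most one counted pulse per 1 ns sample
    bits_needed = math.ceil(math.log2(max_count + 1))
    print(f"{window_ns:>7} ns window: counts up to {max_count} -> {bits_needed} bits")
```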

What about higher bit depths when scanning film? Suppose, for example, that I wanted to build a scanner with a dynamic range of sixteen million on a linear scale, or 7.2 on a log scale. I could do that. It would require a 24 bit A/D converter. Those exist. (Well, maybe I couldn't design it, but a good engineer could do it.) But it would be meaningless, because most of the resolution of the A/D converter would be wasted on getting a highly accurate characterization of the noise. In fact, for film scanning a 16 bit A/D converter is wasting bits (i.e. wasting dynamic range) by acquiring really accurate numbers for the noise (i.e. grain and other forms of noise) when that doesn't add anything at all to the pictorial information in the film. In fact, in most cases (I will say almost all cases, and probably all cases of practical interest) 8 bits is good enough to scan a black and white negative because you are already well into the noise level (i.e. grain and other forms of noise) at that point.

I demonstrated this principle already using acquired data, at least in a small way, and I explained the theoretical foundation for this. I did not cover all possible cases. I just showed a couple of them. However, I have yet to see anyone demonstrate that 8 bits are insufficient to acquire an image from a conventional black and white negative if they use the work flow I described. Would someone please do some experiments to try to demonstrate otherwise? Use Tmax 100 film or Acros because those are the films that would be most likely to show that my assertion fails. I am perfectly willing to be proven wrong.

Now, if there is a film that is the next thing to grainless and which has a very high density range (Velvia? or maybe microfilm processed in a pictorial mode?) then all bets are off. I'm not saying that 8 bits would definitely be insufficient, but only that it might not be sufficient. And even in that case it's only going to matter if one plans to do extreme image manipulation after the scan is acquired. Otherwise 8 bits is plenty because that already exceeds the tonal gradation that the eye can detect.

Lest anyone misunderstand my position, I am not saying that there is anything technically wrong with 16 bit scans, that is unless you like to waste hard disk storage space by getting ever finer characterization of the noise in your photos, but in almost all (and possibly all) cases it's not really necessary to use more than 8 bits for storing the raw scans. In fact, 8 bits is enough to store a final post-manipulation image as well because that already exceeds the tonal gradation that the eye can detect. It is only in the intermediate stage of image manipulation that more than 8 bits serves a useful purpose.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,189
Format
Multi Format
One thing I have not considered in my posts in this thread is the effect of gamma encoding. I understand that scanners gamma encode the data, which is a non-linear process, rather than storing the pixels as a linear function of the raw data as digitized from the sensor. A non-linear function could collapse the signal into fewer bits in certain parts of the intensity range. I don't know if this actually happens in practice, given the noise profiles. However, if noise (including grain) shows up in the gamma encoded data then I think the analysis I have been giving still applies. In order for gamma encoding to invalidate this analysis it would require the noise that was originally present (which may be several ADC steps wide) to collapse into a distribution that is zero steps wide... in other words a single number rather than a distribution over several numbers. As I mentioned above, I don't know if this happens. This would best be answered by experiments, and so far no one has shown that 8 bits is insufficient, given the workflow I outlined in previous posts.
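
One way to test the question raised above, with assumed parameters (a pure 2.2 gamma curve, a high-resolution linear capture, grain noise equal to one 8-bit linear step), is to count how many distinct 8-bit gamma-encoded codes the noisy samples of a single true value land on, in shadows, midtones and highlights. If that count stayed at 1 anywhere, the dithering argument would break down there.

```python
# Spread of a fixed noise level across 8-bit gamma-encoded codes at several
# points of the tonal range.
import numpy as np

rng = np.random.default_rng(5)
sigma_linear = 1.0 / 255.0                      # noise of one 8-bit step, linear scale

def encoded_codes(linear_value, n=100_000):
    samples = np.clip(linear_value + rng.normal(0, sigma_linear, n), 0.0, 1.0)
    codes = np.round(255.0 * samples ** (1 / 2.2))      # gamma-encode to 8 bits
    return np.unique(codes).size

for v in (0.02, 0.1, 0.5, 0.9, 0.98):
    print(f"linear value {v:.2f}: noise spreads over {encoded_codes(v)} encoded codes")
```

With these assumptions every region of the tonal scale still spreads over several encoded codes, though a different tone curve or much finer grain would need its own check.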
 