Converting a spreadsheet matrix to an image


alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,189
Format
Multi Format
I'm not sure if this is the right place to pose this question, but here goes.

I would like to take a 2D matrix of data (for example, a spreadsheet file) and convert it or import it into a digital image file, such as TIFF or some other bitmap image file format.

Is there a simple program that can do this? I am not interested in writing a program in Java or Python or whatever computer language one might name. I am looking for a canned solution, so I don't have to learn a programming language.

I may need to explain this in a little more detail. I want to treat each element of the matrix (for example, each cell of a spreadsheet file) as if it were a pixel. The horizontal and vertical positions in the matrix (e.g. row and column specifiers in a spreadsheet) would correspond to the pixel position in the image file. The value contained in that matrix position (e.g. that cell in a spreadsheet) would be the pixel value in the image file. Initially I am interested in treating it as a monochrome image.

What I am not interested in is taking a screenshot of what the spreadsheet looks like on the screen.

Thanks.
 

Chan Tran

Subscriber
Joined
May 10, 2006
Messages
6,917
Location
Sachse, TX
Format
35mm
So first, do you want each pixel represented by three values, like a normal image file, or just one value, like a raw file? Either way, even a simple 1 MP image makes for a very large spreadsheet: Excel allows at most 16,384 columns and 1,048,576 rows, so the column limit also caps how wide an image you could lay out this way.
But again, I don't know of available software for this. Your matrix isn't really 2D either.
 
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,189
Format
Multi Format
So first, do you want each pixel represented by three values, like a normal image file, or just one value, like a raw file? Either way, even a simple 1 MP image makes for a very large spreadsheet: Excel allows at most 16,384 columns and 1,048,576 rows, so the column limit also caps how wide an image you could lay out this way.
But again, I don't know of available software for this. Your matrix isn't really 2D either.

I am mostly interested in very small arrays, initially a 100x100 matrix.

Each element of the matrix would be associated with three numbers, two coordinates and a value. For example, a 3x3 matrix might look something like this (if scaled so the values are between 0 and 1).

0.48 0.51 0.6
0.35 0.47 0.57
0.25 0.30 0.41

The element at position 2,3 would have a value of 0.57, where 2 is the row number and 3 is the column number.

If scaled so the values are between 0 and 100, a similar matrix might look like this:

48 51 60
35 47 57
25 30 41

If scaled so the values are between 0 and 255, a similar matrix might look like this (rounded to integer values):

122 130 153
89 120 145
64 77 105
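
For anyone who does end up scripting this kind of rescaling, a minimal NumPy sketch of the 0-1 to 0-255 mapping above might look like the following (the half-up rounding convention is an assumption):

```python
import numpy as np

# The 3x3 example matrix from above, with values already scaled to the 0-1 range.
m = np.array([[0.48, 0.51, 0.60],
              [0.35, 0.47, 0.57],
              [0.25, 0.30, 0.41]])

# Map onto the 0-255 range of an 8-bit grayscale image, rounding half up.
pixels = np.floor(m * 255 + 0.5).astype(np.uint8)
print(pixels)
# [[122 130 153]
#  [ 89 120 145]
#  [ 64  77 105]]
```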
 

xkaes

Subscriber
Joined
Mar 25, 2006
Messages
4,878
Location
Colorado
Format
Multi Format
If you just want to save a spreadsheet as an image file, much depends on your software. Select SAVE AS or EXPORT. Depending on what you can export it as, you might get what you want or you might need to make an additional conversion with imaging software.

If all else fails, display your spreadsheet on the screen as you want it to appear and perform a "PRINT SCREEN" -- which copies to the clipboard -- and then paste into whatever imaging software you are using.
 
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,189
Format
Multi Format
I solved the problem. I exported the data as a comma-delimited text file, then imported that file into ImageJ as a text image. At that point I could view the image in ImageJ, and I saved it as an 8-bit TIFF file. I was then able to open that in PhotoLine, which has more viewing options than ImageJ, particularly the ability to zoom the image. I am attaching the image. It's nothing too exciting: basically a test file I generated, a 100x100 array of pixels with random values simulating a very noisy image.

tempdat (1).jpg
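
For anyone who would rather script the same workflow instead of going through ImageJ, a minimal Python sketch (assuming NumPy and Pillow are installed; the file names are hypothetical) could look like this:

```python
import numpy as np
from PIL import Image

# Read the comma-delimited export of the spreadsheet (hypothetical file name).
data = np.loadtxt("tempdat.csv", delimiter=",")

# Scale the cell values linearly onto 0-255 and convert to 8-bit integers.
lo, hi = data.min(), data.max()
scaled = np.floor((data - lo) / (hi - lo) * 255 + 0.5).astype(np.uint8)

# Write one pixel per spreadsheet cell to a grayscale TIFF.
Image.fromarray(scaled, mode="L").save("tempdat.tif")
```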
 

loccdor

Subscriber
Joined
Jan 12, 2024
Messages
1,749
Location
USA
Format
Multi Format
What's it for? You may have been able to achieve something similar by editing a BMP file in a hex editor. Uncompressed image data is usually just a matrix of values to begin with.
 
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,189
Format
Multi Format
What's it for? You may have been able to achieve something similar by editing a BMP file in a hex editor. Uncompressed image data is usually just a matrix of values to begin with.

I am doing some experiments in image manipulation that are most easily done in a spreadsheet or similar program.
 
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,189
Format
Multi Format
I mentioned in my last post that I am doing some numerical experiments on images. There is more than one project, but one is to investigate the effect of noise on the digitization of photos with low bit depth ADCs.

I am doing this with artificially generated data, and I am comparing what happens if one digitizes a gradient with the equivalent of an 8 bit ADC, comparing a noiseless image to a noisy image.

I am going to break this into a series of posts in order to limit the number of images in a post. (I find that if there are too many images in a post it becomes overwhelming and sometimes confusing.)

A while back I posted some similar studies on one dimensional data in the form of graphs. However, I think that two dimensional data in the form of images is more intuitively appealing, and possibly more convincing because they are more intuitively informative.

First I will show a smooth gradient. The underlying image starts with a 40-step gradient running from 126 to 128 in equal steps (0.05 units per step), where a value of zero would be maximum black and 255 would be maximum white. The image I am posting here is what would happen if I digitized the gradient with a high bit depth digitizer and then increased the contrast 10-fold, keeping the center point at 127 units. The high-contrast version of the image runs from 117 to 137 units.

You might notice that the pixels in the upper left and lower left seem anomalous. I set those pixels at 0 and 255 units, and the image is scaled so that 0 is maximum black and 255 is maximum white.

You might have trouble seeing the actual gradient because it is smooth, and even at 10X contrast enhancement the gradient is pretty subtle. Trust me. The gradient is actually there.

This is basically a reference image to which other images can be compared.
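
As a rough sketch of how such a reference image could be generated (NumPy assumed; the 100x100 size and the horizontal orientation of the gradient are my assumptions, not something stated above):

```python
import numpy as np

# 40-step gradient from 126 to 128 in 0.05-unit steps, spread across 100 columns.
step_index = np.floor(np.linspace(0, 40, 100, endpoint=False))
image = np.tile(126.0 + 0.05 * step_index, (100, 1))

# Ten-fold contrast increase about the 127 midpoint: roughly 126..128 -> 117..137.
stretched = 127.0 + 10.0 * (image - 127.0)

# Pin the upper-left and lower-left pixels to full black and full white,
# so the 0..255 scale is anchored as in the posted image.
stretched[0, 0] = 0.0
stretched[-1, 0] = 255.0
```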
 

Attachments

  • smooth gradient 10X contrast in 16 bit.jpg
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,189
Format
Multi Format
In this post I am showing what would happen if you digitize with an 8 bit digitizer and then increase the contrast by 10X. There are two images, one was the digitization of a noiseless gradient running from 126 to 128 and the other is the digitization of the same gradient with a small amount of noise added before the digitization process.

In the physical world the noise would be a combination of film grain and other sources of noise, such as electronic noise in the scanner. One could characterize the grain as a kind of pseudo-noise because it is always the same in repeated scans, but in terms of how it affects the digitization of the gradient it acts the same way as non-repeatable noise would.

When I said that a small amount of noise was added, I really mean SMALL. In this case the (Gaussian) noise had a standard deviation of one half unit, which is smaller than the step size of the digitizer. In an image without enhanced contrast you probably would not even be able to see the noise by casual inspection.

Notice that the (contrast-enhanced) image of the noiseless gradient has become posterized, but the noisy image is not posterized. There is some noise ("grain") in the noisy image, but there is not even a hint of posterization. It might be a little easier to see these features if you download the images and zoom them to a larger size in your favorite image processing program.

This lack of posterization in the noisy image is not because the noise (grain) is visually distracting us from noticing the posterization but rather the noise (grain) has prevented posterization from occurring.

This may seem like magic, but it is not. It is a well known phenomenon in signal processing theory, which is that by adding a small amount of noise prior to digitization you can eliminate digitization artifacts like posterization.

Also, please note that a 10X contrast enhancement would be considered extreme by almost any standard. I highly doubt that one would add anywhere near this much contrast when processing an ordinary pictorial photo in a package like Photoshop.
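
A minimal sketch of the comparison as I understand it (NumPy assumed; the exact gradient layout is an assumption): quantize once without noise, and once with Gaussian noise of standard deviation 0.5 added before the 8-bit quantization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Smooth gradient from 126 to 128 across a 100x100 frame (layout assumed).
gradient = np.tile(np.linspace(126.0, 128.0, 100), (100, 1))

def digitize_8bit(img):
    # Round to integer code values and clip to the 0..255 range of an 8-bit ADC.
    return np.clip(np.floor(img + 0.5), 0, 255)

noiseless = digitize_8bit(gradient)                                        # posterizes
dithered = digitize_8bit(gradient + rng.normal(0.0, 0.5, gradient.shape))  # does not

print(np.unique(noiseless))   # only three code values survive: 126, 127, 128
# Column averages of the dithered version still follow the smooth ramp, which is
# why no banding appears even after a strong contrast stretch.
print(np.round(dithered.mean(axis=0), 2)[::25])
```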
 

Attachments

  • noiseless gradient in 8 bit 10X contrast posterized.jpg
  • noisy gradent in 8 bit 10X contrast not posterized.jpg
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,189
Format
Multi Format
The next pair of images are like the previous two except that no contrast enhancement has been applied after digitization.

You probably can't even visually see the gradient in the noiseless image. I can't, but trust me, it's there, and it is posterized because of the low bit-depth digitization.

The same goes for the noisy image. I can't see the noise. Can you? Trust me, it's there.

I can't see any posterization either. That's because there is none. The effective dithering from the noise has suppressed the posterization.

If you download these two images and load them into your favorite image processing program, you can do a contrast enhancement and see the posterization in the left-hand image and the noise (grain) in the right-hand image, but you will not see any posterization in the right-hand image.
 

Attachments

  • noisy gradent in 8 bit 1X contrast not posterized.jpg
  • noiseless gradient in 8 bit 1X contrast posterized.jpg
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,189
Format
Multi Format
So what does this all mean in the practical world?

The bottom line is that if you scan with a low bit depth ADC (such as 8 bit) and are seeing grain and/or other kinds of noise in the scans, then you have nothing to fear from posterization in your scans, and scanning at a higher bit depth provides no benefit in this regard.

This is not to say that there is anything wrong with scanning at a higher bit depth (except for file size), just that there is nothing to be gained by it.

But what about dynamic range? The answer is that dynamic range is determined by signal to noise ratio. If the ADC resolution is sufficient to capture the noise in the signal (including both the noise inherent in the signal itself plus any noise present in the electronics) then using a higher bit depth ADC provides no practical benefit.

But what about the signal at maximum and minimum signal level? In a scan this would be at the minimum and maximum density of the negative or slide. If those regions are so dark and so bright that there is no noise (e.g. grain) then there is no noise to provide a dithering function, and you need to use a higher bit depth ADC to capture all of the dynamic range in the signal. However, if noise (including both grain and other sources of noise) is detectable with your ADC at the extremes of the signal (highest and lowest densities of the negative or slide) then a low bit depth ADC does not limit your dynamic range.

Another take-away from this is that it does not take much noise (grain) to provide effective dithering. Even if the noise is as low as one half the step size (defining the noise level as the standard deviation of the noise) that is enough to suppress things like banding and also to preserve the dynamic range of the image.

Who cares about this anyway? Can't one simply scan at the maximum bit depth and not worry about it? Yes, that is true, but it is not always possible. For example, the Leafscan scanner, when running with certain interface/software combinations, can only do 8 bit scans. But that's not a practical limitation if the grain in the image is detectable in the scans. It would take an unusually fine-grained negative or slide to have grain so low as to be undetectable at 8 bits. I am not even sure any normal pictorial film stocks are that fine-grained; I would be interested to know if any are.

There is also a small disadvantage to scanning at 16 bits compared to 8 bits, and that is file size and memory storage requirements, although these days storage is pretty cheap, so this is not such an important consideration for most people.

It also means that there is no practical difference between higher bit-depth scanners, like 12 bit vs. 16 bit, because I will submit that even 12 bits is plenty of ADC resolution to detect the grain in even the finest-grained negatives. Therefore, a 16 bit ADC specification in a scanner is more marketing hype than anything when compared to 12 or 14 bit scanners.

As a final comment, I have never seen a case where scanning at more than 8 bits made any practical difference in a pictorial image obtained from film compared to the same pictorial image scanned at a higher bit depth, even if the image has subsequently undergone extreme image manipulation, particularly if the 8 bit scanned image was converted to 16 bits prior to image manipulation. (I am not even sure that conversion to 16 bits prior to image manipulation is strictly necessary, though it might provide some insurance against image degradation.) Can anyone supply an actual real-life example where 8 bit vs. 16 bit scanning made an observable difference in a picture captured on film?
 

Kino

Subscriber
Joined
Jan 20, 2006
Messages
7,830
Location
Orange, Virginia
Format
Multi Format
The earliest attempts at digitizing motion picture film at 8 bit produced "mach banding" in areas of continuous tone. Ironically, the cure was to sprinkle a bit of noise into the image, and the artifact disappeared. This was mainly a problem with CGI or vector-based images that were "noise free" and banded when combined with film scans or viewed alone at 8 bit depth.

Film has a natural "noise" in the grain, so you can get away with a lot more in a scan than from a raster or vector file.

That's why 10 bit data was arrived at as a minimum for guaranteed non-banding of continuous tone images; it doesn't have gaps in data that cause the banding.

While kind of apples to oranges, this article describes the issues that can arise in HD video projection and banding.

 

reddesert

Member
Joined
Jul 22, 2019
Messages
2,460
Location
SAZ
Format
Hybrid
This lack of posterization in the noisy image is not because the noise (grain) is visually distracting us from noticing the posterization but rather the noise (grain) has prevented posterization from occurring.

This may seem like magic, but it is not. It is a well known phenomenon in signal processing theory, which is that by adding a small amount of noise prior to digitization you can eliminate digitization artifacts like posterization.

This is basically anti-aliasing, but you are anti-aliasing in the pixel intensity rather than in the spatial pixels.

I am not sure if this is the intent of the experiment, but you are exporting to JPEG which is a lossy compressed format. So the output images represent your intended data convolved with the effects of JPEG compression. That represents what most people do, since most people's end product is a JPEG, but for illustrative purposes maybe you should export to a lossless format like PNG or TIFF.

My opinion, though I haven't done a lot of scanning and image manipulation, is that 8 bit depth is fine if you aren't doing a lot of post processing on the image. If you scan at 8 bit and then do contrast curve adjustment, you can rapidly run into the limits of the quantized data (posterization). Clearly, scanning at 8 bit, converting perfectly to 16 bit, and then contrast-adjusting would have the same problem. However, scanning at 8 bit, converting to 16 bit with a small amount of noise added (anti-aliasing the intensity values), and then adjusting would have less obtrusive quantization.
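
A minimal sketch (NumPy assumed; the noise amplitude is my assumption) of the conversion described in that last sentence, promoting 8-bit data to 16 bits with a small amount of noise added so that later curve adjustments do not amplify the original quantization gaps:

```python
import numpy as np

rng = np.random.default_rng(0)

def to_16bit_with_dither(img8):
    # Spread 0..255 code values over the 0..65535 range (255 * 257 = 65535).
    img16 = img8.astype(np.float64) * 257.0
    # Add noise smaller than one original 8-bit step (257 sixteen-bit counts),
    # so the gaps between the promoted code values get filled in.
    img16 += rng.normal(0.0, 257.0 / 3.0, img8.shape)
    return np.clip(np.floor(img16 + 0.5), 0, 65535).astype(np.uint16)

# Example: a hypothetical 8-bit ramp promoted to 16 bits before editing.
scan8 = np.tile(np.arange(100, 140, dtype=np.uint8), (50, 1))
scan16 = to_16bit_with_dither(scan8)
```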
 

RalphLambrecht

Subscriber
Joined
Sep 19, 2003
Messages
14,695
Location
K,Germany
Format
Medium Format
I'm not sure if this is the right place to pose this question, but here goes.

I would like to take a 2D matrix of data (for example, a spreadsheet file) and convert it or import it into a digital image file, such as TIFF or some other bitmap image file format.

Is there a simple program that can do this? I am not interested in writing a program in Java or Python or whatever computer language one might name. I am looking for a canned solution, so I don't have to learn a programming language.

I may need to explain this in a little more detail. I want to treat each element of the matrix (for example, each cell of a spreadsheet file) as if it were a pixel. The horizontal and vertical positions in the matrix (e.g. row and column specifiers in a spreadsheet) would correspond to the pixel position in the image file. The value contained in that matrix position (e.g. that cell in a spreadsheet) would be the pixel value in the image file. Initially I am interested in treating it as a monochrome image.

What I am not interested in is taking a screenshot of what the spreadsheet looks like on the screen.

Thanks.

will a screenshot do?
 
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,189
Format
Multi Format
This is basically anti-aliasing, but you are anti-aliasing in the pixel intensity rather than in the spatial pixels.

I am not sure if this is the intent of the experiment, but you are exporting to JPEG which is a lossy compressed format. So the output images represent your intended data convolved with the effects of JPEG compression. That represents what most people do, since most people's end product is a JPEG, but for illustrative purposes maybe you should export to a lossless format like PNG or TIFF.

My opinion, though I haven't done a lot of scanning and image manipulation, is that 8 bit depth is fine if you aren't doing a lot of post processing on the image. If you scan at 8 bit and then do contrast curve adjustment, you can rapidly run into the limits of the quantized data (posterization). Clearly, scanning at 8 bit, converting perfectly to 16 bit, and then contrast-adjusting would have the same problem. However, scanning at 8 bit, converting to 16 bit with a small amount of noise added (anti-aliasing the intensity values), and then adjusting would have less obtrusive quantization.

Thanks for the comments.

Regarding the file format, the files I generated were originally in TIFF format, but Photrio does not allow uploading photos in TIFF format, so starting with the TIFF files I created JPEG files (at maximum quality level) and uploaded those. Before I uploaded the JPEG files I checked to be sure that they looked the same as the corresponding TIFF files, and they were indistinguishable when viewed by eye.

Interestingly, in most cases the original TIFF files were smaller than the JPEG versions of the same images, a rare instance where JPEG did not compress the image but rather expanded the file.

I'm not sure you fully understood some of the points I was making. If there is enough grain in the film that it shows up in an 8 bit scan then the grain in the film will prevent banding from occurring in subsequent image processing. The film grain provides an effective dithering function to the process. If an image is a truly continuous tone image then 8 bit scanning can produce banding in subsequent image processing steps. In fact, the continuous tone file I uploaded showed that effect.

I am not presenting this as something new. Dithering is well understood by the signal processing community. However, I think dithering, its relationship to grain and other forms of noise in scans, and the implications for signal capture is not fully understood among amateur photographers, or even many professional photographers, which is one of the reasons I posted the information here.

Concerning aliasing, what I am describing here does not technically correspond to anti-aliasing. Aliasing is a different problem that has to do with frequency shifting if the sampling rate (be it time-sampling or spatial sampling) is not high enough to capture all of the frequency components of the signal. Aliasing is closely related to the process of heterodyning that was used in most AM radio receivers in the sense that both aliasing and heterodyning produce frequency shifts through frequency mixing processes.

One more note, added later: the dithering noise must appear in the signal before it is digitized, not after. If it is added after digitization it does nothing to eliminate banding. Film grain is a form of noise that is present in the image before digitization, so it can provide a dithering function.

Attached is a file that shows what the image would look like if noise is added after digitization and then increased in contrast by 10X. The banding is present in the signal. Noise is simply superimposed on the banded signal.
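
A minimal sketch (NumPy assumed, same hypothetical gradient as earlier) of that point: noise added before quantization dithers away the bands, while noise added after quantization just rides on top of them.

```python
import numpy as np

rng = np.random.default_rng(0)
gradient = np.tile(np.linspace(126.0, 128.0, 100), (100, 1))
quantize = lambda img: np.floor(img + 0.5)
noise = lambda: rng.normal(0.0, 0.5, gradient.shape)

before = quantize(gradient + noise())   # dithered: column means follow the ramp
after = quantize(gradient) + noise()    # banding remains, with noise on top

# Column averages: "before" rises smoothly, while "after" sits on flat plateaus
# at 126, 127 and 128 because the bands were already locked in.
print(np.round(before.mean(axis=0), 2)[::20])
print(np.round(after.mean(axis=0), 2)[::20])
```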
 

Attachments

  • dithering added after digitization.jpg
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,189
Format
Multi Format
The earliest attempts at digitizing motion picture film at 8 bit produced "mach banding" in areas of continuous tone. Ironically, the cure was to sprinkle a bit of noise into the image, and the artifact disappeared. This was mainly a problem with CGI or vector-based images that were "noise free" and banded when combined with film scans or viewed alone at 8 bit depth.

Film has a natural "noise" in the grain, so you can get away with a lot more in a scan than from a raster or vector file.

That's why 10 bit data was arrived at as a minimum for guaranteed non-banding of continuous tone images; it doesn't have gaps in data that cause the banding.

While kind of apples to oranges, this article describes the issues that can arise in HD video projection and banding.


Thanks for the link to the article. I will read it.
 
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,189
Format
Multi Format
will a screenshot do?

A screenshot will give an image of what a spreadsheet looks like on a computer display. I don't know of a way to convert a screenshot into a file where the contents of the file are the numerical contents of the cells of the spreadsheet, stored as pixels.

I did find a way to import a text file (which a spreadsheet can export) into an image file using the program ImageJ. I can describe the process in more detail if you are interested.
 

RalphLambrecht

Subscriber
Joined
Sep 19, 2003
Messages
14,695
Location
K,Germany
Format
Medium Format
A screenshot will give an image of what a spreadsheet looks like on a computer display. I don't know of a way to convert a screenshot into a file where the contents of the file are the numerical contents of the cells of the spreadsheet, stored as pixels.

I did find a way to import a text file (which a spreadsheet can export) into an image file using the program ImageJ. I can describe the process in more detail if you are interested.

thanks. I got it.
 

loccdor

Subscriber
Joined
Jan 12, 2024
Messages
1,749
Location
USA
Format
Multi Format
As a final comment, I have never seen a case where scanning at more than 8 bits made any practical difference in a pictorial image obtained from film compared to the same pictorial image scanned at a higher bit depth, even if the image has subsequently undergone extreme image manipulation, particularly if the 8 bit scanned image was converted to 16 bits prior to image manipulation. (I am not even sure that conversion to 16 bits prior to image manipulation is strictly necessary, though it might provide some insurance against image degradation.) Can anyone supply an actual real-life example where 8 bit vs. 16 bit scanning made an observable difference in a picture captured on film?

When scanning a low contrast (high dynamic range) film like HP5+, especially if using a lower agitation development and a scene with flat light, which is the most extreme practical example I can think of, the histogram might span only 25% of the available brightness range. Assuming the user wants the end picture to cover 100% of the brightness range, by simply linearly applying black points and white points they are already losing 75% of the bit depth. That's the equivalent of 8-bits going to 6-bits, 12-bits going to 10-bits, or 16-bits going to 14-bits.

And that's only with a linear shift in the brightness range. Many photographers, after doing this first linear change, add an S-shaped contrast curve. You could easily lose another bit or two in depth depending on how mild or extreme you make this curve.

If you start out with an 8-bit image and end up practically with a 5-bit one, you have 32 shades of gray. That is well within the range the human eye can perceive. If you're arguing that dithering can mitigate that, sure, but dithering also brings with it a reduction in detail. Resolution matters too, as the negative effect of dithering is much more noticeable at lower resolutions.

8-bits of gray is great when viewing images, but there is a reason people add an extra byte per channel when editing and expansion of a brightness range is going to take place.
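
A quick sketch of that bookkeeping (NumPy assumed, numbers hypothetical): an 8-bit scan whose histogram spans only a quarter of the code range keeps only 64 distinct levels after a linear stretch to full range, i.e. roughly 6 bits.

```python
import numpy as np

# Hypothetical 8-bit scan occupying only a quarter of the range: codes 96..159.
scan = np.arange(96, 160, dtype=np.uint8)            # 64 distinct code values

# Linear stretch of that quarter-range onto the full 0..255 range.
stretched = np.floor((scan.astype(float) - 96) / 63 * 255 + 0.5).astype(np.uint8)

print(len(np.unique(stretched)))          # still only 64 distinct levels (~6 bits)
print(np.diff(np.unique(stretched))[:5])  # gaps of about 4 codes between levels
```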
 

loccdor

Subscriber
Joined
Jan 12, 2024
Messages
1,749
Location
USA
Format
Multi Format
Also consider this example of dithering. Let's say you have a 1-bit image with just black and white. In order to increase the range of perceptible brightness values while staying 1-bit, you can multiply the length and width of the image in pixels by two. Now you have 4 pixels representing every 1 pixel in the original image. And with each 4-pixel group, you can express 5 values: 0/4 (black), 1/4 (25% gray), 2/4 (50% gray), 3/4 (75% gray), 4/4 (white).

So now you have multiplied the amount of disk space the image takes up by 4 in order to express 5 brightness values instead of 2.

But, if you changed the 1-bit image to 3-bit while keeping the resolution the same, you'd only be multiplying the disk space by 3, and you'd get 8 (2 to the power of 3) brightness values.

That's 5 levels for 4 bits of storage versus 8 levels for 3 bits.

So increasing the bit depth of an image is more efficient than increasing the resolution and dithering to compensate.
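
The arithmetic in that comparison, spelled out as a tiny sketch (just bookkeeping, no image data):

```python
# Option 1: keep 1-bit pixels but double the resolution in each direction and
# dither. Each original pixel now costs four 1-bit pixels, and a 2x2 cell can
# show 0, 1, 2, 3 or 4 lit pixels: five distinguishable gray levels.
dither_bits_per_pixel = 4 * 1
dither_levels = 4 + 1

# Option 2: keep the original resolution but store 3 bits per pixel.
deeper_bits_per_pixel = 3
deeper_levels = 2 ** 3

print(dither_levels, "levels for", dither_bits_per_pixel, "bits")   # 5 for 4
print(deeper_levels, "levels for", deeper_bits_per_pixel, "bits")   # 8 for 3
```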
 
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,189
Format
Multi Format
When scanning a low contrast (high dynamic range) film like HP5+, especially if using a lower agitation development and a scene with flat light, which is the most extreme practical example I can think of, the histogram might span only 25% of the available brightness range. Assuming the user wants the end picture to cover 100% of the brightness range, by simply linearly applying black points and white points they are already losing 75% of the bit depth. That's the equivalent of 8-bits going to 6-bits, 12-bits going to 10-bits, or 16-bits going to 14-bits.

And that's only with a linear shift in the brightness range. Many photographers, after doing this first linear change, add an S-shaped contrast curve. You could easily lose another bit or two in depth depending on how mild or extreme you make this curve.

If you start out with an 8-bit image and end up practically with a 5-bit one, you have 32 shades of gray. That is well within the range the human eye can perceive. If you're arguing that dithering can mitigate that, sure, but dithering also brings with it a reduction in detail. Resolution matters too, as the negative effect of dithering is much more noticeable at lower resolutions.

8-bits of gray is great when viewing images, but there is a reason people add an extra byte per channel when editing and expansion of a brightness range is going to take place.

You raise some good points about what can happen during image processing using different transformations of the image. However, consider also the following points.

To be clear, in my discussion, in all cases when I refer to noise I mean grain plus other sources of noise, such as electronic noise.

The first one is whether noise is present to a significant degree in the post-transformed image. If the answer is yes then the dithering remains effective and it will prevent banding. If the answer is no then the dithering has lost its effectiveness and banding can occur. Are there examples in pictorial photography with scanned images where this is actually observed in practice?

The second is that one can mitigate the effects you describe by converting the 8 bit scanned image to a 16 bit image before any image processing is performed on the image. This will preserve the dithering effect.

One can, of course, question whether it makes sense to scan in 8 bits if one is going to convert to 16 bits before doing image processing. There are several related answers to this question. One is that if there is noise in the scanned image (including grain) then it is a waste of storage space to scan in 16 bits, because there is nothing to be gained by doing so.

The second is that in some cases the scanner may be limited to 8 bits. I gave an example in an earlier post.

The third is that a more complete description of a workflow could look something like this: scan in 8 bit, convert to 16 bit for image processing, and finally (optionally) convert back to 8 bit for the final image to be printed. Only the initial 8 bit scan and the 8 bit final image would be stored permanently. Since the original 8 bit scan is stored permanently, one has the option of recreating the image processing chain later on, or creating a new one, without loss of quality or wasted storage space. And since the human eye can't discern intensity variations finer than 8 bits, storing the final image in 8 bit format is all that is needed.

In terms of dithering, I am not necessarily saying that dithering should be added. I am saying that dithering is (usually, or even always) automatically present in a scanned image because of film grain and other forms of noise, so additional dithering is not necessary. There might be exceptions to this. For example, if one is using super-duper-ultra-fine-grained film then there might not be enough film grain present in the scanned image to provide a built-in dithering function. Are there any such films in common use? I don't think that even T-Max 100, Delta 100, or Acros 100 would qualify because, although they are fine-grained films, there is still some grain evident in the scans.

You also talk about resolution of the scan, and that is a valid point, one which I did not bring up in this thread, though I believe I have discussed that issue in past posts in other threads. To preserve grain in the scan one should always scan at a resolution sufficient to maintain the grain. (I think it's always a good idea to scan at the highest resolution that the scanner allows anyway.) Also, there are two issues related to scan resolution: the dpi of the scanner and the optical resolution of the scanner. I won't go deeper into the interactions between those two concepts in this post except to say that in principle either one could be limiting, and whichever is the limiting factor will determine how much grain is suppressed because of low resolution.

It's also worth keeping in mind that the image in a film is essentially a one bit image. In other words, if you could scan the film at a resolution of a bazillion the scan could be stored in 1-bit words, equivalent to "silver present" and "silver absent". You began discussing this concept in another post, so I know you understand this concept, but I am emphasizing it in case some people might not have thought this issue through. When scanning at lower resolution one is essentially lumping what would be a 1 bit super-duper-ultra-high resolution image into a lower resolution image that takes more bits of digital resolution to record in each pixel. (For the super-scientific types who might read this, I am ignoring the wave nature of light in this discussion.)

Anyway, other than saving storage space I am not necessarily saying that there is something wrong with scanning at 16 bits, but I am saying that there is nothing to be gained by scanning at 16 bits if 8 bits is sufficient to capture the noise level (including film grain) of an image.
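
A minimal sketch (NumPy assumed; the contrast curve is a hypothetical stand-in for whatever editing one actually does) of the workflow described above: keep the 8-bit scan, do the editing in a 16-bit working space, and only convert back to 8 bits for the final output.

```python
import numpy as np

def edit_in_16_bit(scan8):
    # Promote the stored 8-bit scan to a 16-bit working space.
    work = scan8.astype(np.float64) * 257.0

    # Hypothetical edit: a 10x contrast stretch about mid-gray, done in 16 bits
    # so intermediate values are not re-quantized to 8-bit steps along the way.
    work = 32767.5 + 10.0 * (work - 32767.5)
    work = np.clip(work, 0, 65535)

    # Only the final output goes back to 8 bits for printing or display.
    return np.floor(work / 257.0 + 0.5).astype(np.uint8)
```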
 
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,189
Format
Multi Format
I've done another calculation. This time I added half as much noise as in the other calculations, i.e. the standard deviation of the noise was one quarter of the ADC step size.

There are two images attached. For these I increased the contrast tenfold. One of them shows the result with noise added before digitization. The other shows what it would look like if I were able to take the after-digitization picture and remove the noise, leaving just the smooth underlying curve. Note: this is what the smooth curve would look like after digitization of the noisy image, not the smooth curve before digitization.

Let's discuss the smooth curve first. Even though contrast was increased tenfold, there is no banding evident in an inspection by eye. The gradient is actually not perfectly linear. It is kind of a bumpy gradient, with some parts of the gradient being steeper than others, but it's not something one can see by visual inspection. At least I can't see it.

What is happening in the smooth curve is that the gradient is steeper in the region where banding could occur if one were to digitize a perfectly noiseless gradient, but we are not seeing distinct steps here, and the changes in gradient are smooth enough and wide enough that we don't see them. In between the steep gradient regions we have shallow gradients, but we don't see them by visual inspection.

Shifting attention to the other image, we don't see steps in the image. We do see a little bit of what I would call "noise banding". What I mean is that in the regions where steps would fall if we were to digitize a perfectly noiseless gradient, we see more noise in the image than we see in the in-between regions. Whether one would notice this in a digitized photograph is open to question. I suspect it would not be noticed unless attention were called to it.

If my interpretation is correct then as film grain is reduced we might gradually start to see some grain banding, but only in images undergoing extreme manipulation and only in images where the grain is so slight as to be not fully functioning for dithering. Initially we won't see banding in the brightness of the image, but as grain is reduced even further the noise banding regions will become narrower and narrower, and brightness banding will gradually become evident.

I want to emphasize however, that these effects only start showing up when the noise is becoming very small, with a standard deviation of the noise of well under one digitization step size.

Even with standard deviation at a half of an ADC step size nobody is going to notice either brightness banding or noise banding, even in an extremely manipulated image. With noise at one fourth of an ADC step size some people might start noticing a little bit of noise banding in a highly manipulated image, but not brightness banding.
 

Attachments

  • ten X gradient noisy gradient with noise with a signma of one half ADC step size added before ...jpg
  • ten X gradient with sigma of one fourth of ADC step size added before digitization.jpg

loccdor

Subscriber
Joined
Jan 12, 2024
Messages
1,749
Location
USA
Format
Multi Format
For example, if one is using super-duper-ultra-fine-grained film then there might not be enough film grain present in the scanned image to provide a built-in dithering function. Are there any such films in common use? I don't think that even T-Max 100, Delta 100, or Acros 100 would qualify because, although they are fine-grained films, there is still some grain evident in the scans.

Adox CMS II 20 was the only film I've used where I could not see any grain at all.

You also talk about resolution of the scan, and that is a valid point, one which I did not bring up in this thread, though I believe I have discussed that issue in past posts in other threads. To preserve grain in the scan one should always scan at a resolution sufficient to maintain the grain. (I think it's always a good idea to scan at the highest resolution that the scanner allows anyway.)

Well, I know I gain almost nothing from my Epson flatbed if I scan at more than 2400 dpi, and it takes a much longer time, so I would argue that the best resolution to scan at has that practical boundary as well.

Here is something that could be tried to test the limits of what you propose:

1) Print out an image with a very low brightness range. Like a photograph that has faded over time. A subject that would have originally covered 100% of the range but now only covers 10%.
2) Use low contrast grainy film like Delta 3200 or HP5, pull processing, stand development, a low contrast lens, whatever you can do to handicap contrast, to take a picture of that printed image.
3) Scan as 8-bit without any automatic correction of the brightness range.
4) Edit this image (either as 8-bit or 16-bit) and try to stretch it to 100% of the brightness range again. Adjust the contrast curve to look as good as possible to your eye.
5) Repeat steps 3 and 4 with a 16-bit scan.
6) Observe differences in the images, you could overlay them and do a difference/divide/subtraction filter.

I know that I have seen "jaggedness" to the histogram when I start editing from 8-bit images. Have not experimented in enough detail to see whether this is having a practical effect on my perception.
 
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,189
Format
Multi Format
Adox CMS II 20 was the only film I've used where I could not see any grain at all.



Well, I know I gain almost nothing from my Epson flatbed if I scan at more than 2400 dpi, and it takes a much longer time, so I would argue that the best resolution to scan at has that practical boundary as well.

Here is something that could be tried to test the limits of what you propose:

1) Print out an image with a very low brightness range. Like a photograph that has faded over time. A subject that would have originally covered 100% of the range but now only covers 10%.
2) Use low contrast grainy film like Delta 3200 or HP5, pull processing, stand development, a low contrast lens, whatever you can do to handicap contrast, to take a picture of that printed image.
3) Scan as 8-bit without any automatic correction of the brightness range.
4) Edit this image (either as 8-bit or 16-bit) and try to stretch it to 100% of the brightness range again. Adjust the contrast curve to look as good as possible to your eye.
5) Repeat steps 3 and 4 with a 16-bit scan.
6) Observe differences in the images, you could overlay them and do a difference/divide/subtraction filter.

I know that I have seen "jaggedness" to the histogram when I start editing from 8-bit images. Have not experimented in enough detail to see whether this is having a practical effect on my perception.

Your proposed study sounds interesting.

A parallel study that could test other aspects of this theory would be to use a very fine-grained film rather than a coarse-grained film.

One thing to keep in mind when looking at a noisy photo is that the eye is not very sensitive to the effects of a jagged histogram.

I am attaching a couple of simulated photos that illustrate a closely related thing, though not exactly focusing on the jaggedness of the histogram but rather the spacing of the peaks in the histogram.

There are two attached simulated photos at 8 bit resolution. Both are noisy by nearly the same amount as measured by the standard deviation of the noise. One has a histogram of a few widely spaced peaks. The other has a dense, nearly continuous histogram.

I think most viewers would be hard pressed to identify which has the sparsely spaced histogram and which has the nearly continuous histogram.

When I say that they have nearly the same amount of noise, to be more accurate I should say that the noise is within about 5% between the two as measured by the standard deviation of the noise. To the extent that one might be able to tell the difference in the noise between the two pictures by visual inspection at least part of that difference can probably be attributed to the slight difference in the standard deviation of the noise.
 

Attachments

  • noise at 1 then digitized and contrast at 10X.jpg
  • noise at 10 then digitized.jpg
OP

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,189
Format
Multi Format
Also I should mention that when I discuss grain in the scan, the grain does not have to be visible in the raw scan, as long as the noise level is around one ADC step size, or even half of an ADC step size. That is not a difference that one is going to be able to see by visual inspection of a raw scan. At that level of noise it would take significant image manipulation before an observer could see the grain by visual inspection.
 