Sensor noise in film scanner

alanrockwood (OP):

Why do you have to scan at twice the 1560?
It's an empirical finding. However, I will speculate on the reason. If there is more than one mechanism for degradation of resolution then the two mechanisms combine to give a worse result than either of them alone.

In a scanner one mechanism for loss of resolution is the spatial sampling rate (e.g. 2400 samples per inch, 3200 samples per inch, etc.). There are others as well, such as the quality of the lens in the scanner. It seems likely that the quality of the lens in this scanner is one limiting factor and the spatial sampling rate is another. If you sample at a higher spatial rate (e.g. 3200 samples per inch rather than 2400) then the lens alone becomes the limiting factor rather than the combination of the lens and the sampling rate.

Focus accuracy of the lens in the scanner can be another factor. You can eliminate that factor by finding what film height gives you the sharpest image.
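To make the combination of degradation mechanisms concrete, here is a minimal sketch, assuming each mechanism acts like a Gaussian blur whose widths add in quadrature. The lens and sampling figures below are illustrative assumptions, not measured values for this scanner.

```python
import math

# Illustrative assumption: treat each resolution-loss mechanism as a Gaussian
# blur and combine the blur widths in quadrature. A wider combined blur means
# a lower effective resolution.
def combined_blur(*blur_sigmas_um):
    """Combine independent Gaussian blur sigmas (micrometres) in quadrature."""
    return math.sqrt(sum(b * b for b in blur_sigmas_um))

lens_blur = 8.0                          # assumed lens blur, micrometres
sampling_blur_2400 = 25400 / 2400 / 2.0  # roughly half a pixel pitch at 2400 spi
sampling_blur_3200 = 25400 / 3200 / 2.0  # roughly half a pixel pitch at 3200 spi

print(combined_blur(lens_blur, sampling_blur_2400))  # lens + coarse sampling: worst
print(combined_blur(lens_blur, sampling_blur_3200))  # lens + finer sampling: better
print(lens_blur)                                     # the lens-only limit
```

The combined figure is always worse than either mechanism alone, and raising the sampling rate pushes the result toward the lens-only limit, which is the behaviour described above.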
 
alanrockwood (OP):

I have done some simulations of how noise (including grain) affects the possibility of banding. Here are a couple of plots of the results. They take a little explaining. One of the figures shows the result if the noise (as measured by the standard deviation of the signal) is 0.4 of the digitization step size; the other shows the result if the noise is 0.3 of the digitization step size.



This was a Monte Carlo simulation using a random number generator. There are eleven points on each curve. Each point is the average result for one hundred thousand random samples.



If the noise (including grain, sensor noise, etc.) is 0.4 of the digitization step size, the result (red curve) is almost indistinguishable from the ideal case (blue line) of no noise and an infinitely small analog-to-digital step size. This means that there will be no possibility of visibly evident banding. The black curve is the result of digitizing a noiseless image. It has a stairstep form, and because of this there is a possibility of visible banding.

Even for the 0.3 case the red curve is quite close to the blue curve. This means that visible banding is unlikely. Because the red and blue curves do not perfectly coincide, there is the possibility of a small amount of hue modulation in a gradient if the RGB channels are out of sync with each other. (I won't try to give an extended explanation of this unless someone asks.)

Anyway, as long as the noise (including grain, sensor noise, etc.) is about a third of the ADC step size or greater, then banding will not be evident in an image.

Why does this matter? Some scanners only produce an 8 bit result. Other scanners have the option of producing an 8 or 16 bit result. If the noise is about a third of the digital word step size (e.g. for an 8 bit word) banding will not be evident in a scan, even if extreme manipulations are performed on the image.

I should also note that one would need to take companding into account.

Based on my analysis (not all of which I am presenting here), even a fine-grained conventional black and white film, such as Tmax 100, has enough grain noise that banding will not occur, even if companding is taken into account, and even if the density of the negative is as high as 3.

I actually have results from more simulations I could post, as well as some plots related to noise performance, but these two plots tell most of the essential parts of the story.


[Attachment: noise 0 pont 4.jpg]

[Attachment: noise 0 pont 3.jpg]
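For anyone who wants to reproduce the shape of these curves, here is a minimal sketch of this kind of Monte Carlo, assuming eleven true values spanning one ADC step and 100,000 Gaussian samples per point. It is a reconstruction under those assumptions, not the original simulation code.

```python
import random

def dithered_mean(true_value, noise_sigma, n=100_000):
    """Average of n noisy samples after each is rounded to an integer ADC code."""
    total = 0
    for _ in range(n):
        total += round(true_value + random.gauss(0.0, noise_sigma))
    return total / n

# Eleven true values spanning one ADC step (100.0 to 101.0), as in the plots.
for i in range(11):
    v = 100.0 + i / 10.0
    stairstep = round(v)                     # black curve: noiseless quantization
    red = dithered_mean(v, noise_sigma=0.4)  # red curve: noise = 0.4 of the step size
    print(f"true={v:6.1f}  noiseless={stairstep:4d}  with-noise≈{red:7.3f}")
```

Setting noise_sigma to 0.3 corresponds to the second plot; pushing it toward zero makes the averages collapse back onto the stairstep.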
 

Ted Baker:

I have done some simulations of how noise (including grain) affects the possibility of banding.

I can't follow the results, because there is not enough information describing the inputs. You may want to consider both the effects of gamma encoding and also the contrast transformation that is inherent in the negative-to-positive conversion of neg/pos film/paper. For example, if you normalise all your input values to a range of 0 to 1 and then raise them to the power of 4, how does that affect the results? Typical grade 2 BW paper has a gamma of around 4.

I will try and find some examples where I simulated scanning using an 8 bit A/D in the next few days.
 
alanrockwood (OP):

I can't follow the results, because there is not enough information describing the inputs. You may want to consider both the effects of gamma encoding and also the contrast transformation that is inherent in the negative-to-positive conversion of neg/pos film/paper. For example, if you normalise all your input values to a range of 0 to 1 and then raise them to the power of 4, how does that affect the results? Typical grade 2 BW paper has a gamma of around 4.

I will try and find some examples where I simulated scanning using an 8 bit A/D in the next few days.

Ted, I am looking forward to your simulations.

Over the next few days I will try to explain my simulations better, if that is of interest.
 
alanrockwood (OP):

...Over the next few days I will try to explain my simulations better, if that is of interest.
In order to avoid one long post let me break the explanation into several posts.

I will start by saying that I break the problem into two parts. One is how the use of a limited number of bits to express a visual value affects the ability to represent a smooth gradient. The other is how this fits into the issue of non-linear encoding, roughly speaking gamma encoding.

The graphs I posted on October 9 only directly address the first of these issues, and they apply to the final representation of the visual values, regardless of whether the data were preceded by some kind of non-linear transformation.

Let me explain this a little more fully. It deals with the noise level after any non-linear transformation has taken place but before the final conversion to a limited-bit-length word, such as 8 bits. Let me give an example. Suppose I sampled a broad swath of an image at 8 different points, and suppose that the brightness of the image is uniform aside from noise. Suppose that the 8 points had brightness values as follows.

100.6906242532
99.396805972893
101.19426770206
101.15985550029
98.891767840316
98.533563693782
100.05704085416
100.07607418688

(For our purposes it doesn't matter if there has been a non-linear transformation applied to these data prior to obtaining the numbers listed above; i.e. by whatever means those 8 numbers were generated, those are the ones we are going to process by converting them to a limited-precision digital word.)

The set of 8 numbers listed above has a mean of 100 units and a standard deviation of one unit.

Conversion to a limited word length digital number is essentially the process of rounding the numbers. Let us assume that we will be converting to an 8 bit number and that the maximum value in the system is 255. In this case, after rounding of the data the list above becomes

101
99
101
101
99
99
100
100

This new list also has a mean of 100.

Now suppose I sample 8 different points in a different region of the picture. This other region may have a different brightness from the first region. Suppose the values (before converting to an 8 bit word) are as follows:

100.9406242532
99.646805972893
101.44426770206
101.40985550029
99.141767840316
98.783563693782
100.30704085416
100.32607418688

This set of points has a mean of 100.25 and a standard deviation of 1. If I convert these to a digital 8 bit word they become.

101
100
101
101
99
99
100
100

This list has a mean of 100.125, which is not quite the same as the ideal 100.25, but it is clearly an improvement over the case where the image is noiseless. To see what I mean, suppose the image were noiseless with a value of 10.25:

10.25
10.25
10.25
10.25
10.25
10.25
10.25
10.25

After conversion to an 8 bit word the list would look like this.

10
10
10
10
10
10
10
10

This list has a mean of 10, which misses the true value by 0.25 and is therefore clearly less accurate than the average obtained from the noisy data above.

By the way, the mean of 100.125 obtained above for the noisy data does not equal the "expected" value of 100.25, but that is mostly a statistical fluke. I did the same calculation with a million noisy numbers rather than 8 and got a mean value of 100.249702 for the rounded (i.e. data converted to an 8 bit representation) results, which is very close to the expected value of 100.25.

I hope this sheds a little more light on my simulations, though I doubt it answers all questions about them.
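The million-sample check is easy to reproduce; a minimal sketch, assuming the same mean of 100.25 and standard deviation of 1:

```python
import random

# Average a large number of noisy values after rounding each one to an
# integer (i.e. 8-bit-style) code, as described above.
random.seed(1)  # any seed; this just makes the run repeatable
n = 1_000_000
mean_of_rounded = sum(round(random.gauss(100.25, 1.0)) for _ in range(n)) / n
print(mean_of_rounded)  # comes out very close to the true mean of 100.25
```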
 
alanrockwood (OP):

OK, now let's look at a case where there may be a non-linear transformation applied to the data before it is converted to an 8 bit word.

Suppose we obtained eight data points of the following values:

10189.009624626
9929.4859405993
10290.939449607
10283.95879259
9829.0901305031
9758.1924560435
10061.502444917
10065.321161751

This list has a mean of 10050.9375001 and a standard deviation of 200.33852614, which we will just call 200.

If we digitize this with a long word length conversion we may get a list that looks like this.

10189
9929
10291
10284
9829
9758
10062
10065

This list has a mean of 10050.875 and a standard deviation of 200.45194472. The digitization has not had a significant effect on the quality of the results.

Now suppose we perform a non-linear operation on the first list in this post, taking the square root of it. The new list looks like this:

100.9406242532
99.646805972893
101.44426770206
101.40985550029
99.141767840316
98.783563693782
100.30704085416
100.32607418688

This list happens to have the same values as one of the lists presented in my last post. Its mean is 100.25 and its standard deviation is 1. If we convert this to an 8 bit digital word we get a list with a mean of 100.125, which isn't quite the expected value of 100.25 but is better than if we started with a noiseless list with a mean of 100.25 and then converted it to an 8 bit word, resulting in a mean of 100 rather than the expected 100.25.

The non-linear transformation I applied above is not gamma encoding, but it's not too far off, and it serves for illustrative purposes.
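To make the pipeline concrete, here is a minimal sketch that runs the eight high-precision values above through the square-root transformation and then the 8 bit rounding (not code from the post, just the same arithmetic):

```python
import math

# The eight high-precision "raw" values from the example above.
raw = [
    10189.009624626, 9929.4859405993, 10290.939449607, 10283.95879259,
    9829.0901305031, 9758.1924560435, 10061.502444917, 10065.321161751,
]

# Non-linear transformation applied before the 8 bit conversion
# (a square root here, standing in for gamma encoding).
transformed = [math.sqrt(x) for x in raw]

# Final conversion to an 8 bit word (these values already fit in 0..255).
eight_bit = [round(x) for x in transformed]

print(sum(transformed) / len(raw))  # about 100.25 (mean before rounding)
print(sum(eight_bit) / len(raw))    # 100.125 (mean after rounding to 8 bits)
```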
 

Ted Baker:

OK, now let's look at a case where there may be a non-linear transformation applied to the data before it is converted to an 8 bit word.

Alan, to simulate the detrimental difference, the gamma encoding, and also any additional contrast adjustment inherent in negative/positive systems, needs to be done AFTER the conversion to 8 bits.

This link has some clear examples of some of the defects of 8 bit without any gamma encoding.

http://www.brucelindbloom.com/index.html?ReferenceImages.html

These don't take into account the further contrast adjustments required by a negative/positive film system.
 
alanrockwood (OP):

Alan, to simulate the detrimental difference, the gamma encoding, and also any additional contrast adjustment inherent in negative/positive systems, needs to be done AFTER the conversion to 8 bits.

This link has some clear examples of some of the defects of 8 bit without any gamma encoding.

http://www.brucelindbloom.com/index.html?ReferenceImages.html

These don't take into account the further contrast adjustments required by a negative/positive film system.

Gamma encoding is a non-linear transformation. From the point of view of signal processing theory, a non-linear transformation after conversion to a low bit word (e.g. 8 bits before the transformation) can only result in a loss of information if you want to store the result in a low bit word (e.g. 8 bits). To illustrate, suppose one takes the square root of an 8 bit word. No matter how you slice it you can't save the result into an eight bit word without loss of information. For example, in this case the first few values are in the following table.

0 0 0
1 1 1
2 1.4142135623731 1
3 1.7320508075689 2
4 2 2
5 2.2360679774998 2
6 2.4494897427832 2
7 2.6457513110646 3
8 2.8284271247462 3
9 3 3
10 3.1622776601684 3
11 3.3166247903554 3
12 3.4641016151378 3

The first column is the 8 bit number in linear space. The second column is the exact result for the non-linear transformation. The last column is the result rounded to an integer number so it can be stored in a digital word. For 12 different input values there are only three unique digital output values, hence a loss of information.

Here is a table for the last few values

241 15.52417469626 16
242 15.556349186104 16
243 15.58845726812 16
244 15.620499351813 16
245 15.652475842499 16
246 15.684387141358 16
247 15.716233645502 16
248 15.748015748024 16
249 15.779733838059 16
250 15.811388300842 16
251 15.842979517755 16
252 15.874507866388 16
253 15.905973720587 16
254 15.937377450509 16
255 15.968719422671 16

In this case, 15 input values result in only one output value.

Out of 256 input values there are only 16 output values, hence a loss of information.

One of the problems here is that with only 16 output values one does not make effective use of an 8 bit word to store the output values, so let's try rescaling the output so the largest value is 255. This will provide a better fit to an 8 bit output word.

Here's the first few values:

0 0 0
1 1 16
2 1.4142135623731 23
3 1.7320508075689 28
4 2 32
5 2.2360679774998 36
6 2.4494897427832 39
7 2.6457513110646 42
8 2.8284271247462 45
9 3 48
10 3.1622776601684 50
11 3.3166247903554 53
12 3.4641016151378 55

For the first 12 input values there are 12 output values, which is good because there is no ambiguity between input and output, but there are a lot of gaps in the output. For example, there is no 52 in the output, so there is a lot of wasted space in the output word. Let's call this effect "inefficiency".

It gets worse if we look at the other end of the list:

241 15.52417469626 248
242 15.556349186104 248
243 15.58845726812 249
244 15.620499351813 249
245 15.652475842499 250
246 15.684387141358 250
247 15.716233645502 251
248 15.748015748024 251
249 15.779733838059 252
250 15.811388300842 252
251 15.842979517755 253
252 15.874507866388 253
253 15.905973720587 254
254 15.937377450509 254
255 15.968719422671 255

For 15 input values there are only 12 output values. This is bad because there is no way to look at the output value and infer what the exact input value was. Let's call this effect "ambiguity".

No matter how you slice it, there is no way to apply a significant non-linear transformation to an 8 bit digital word and store the result in an 8 bit output word without some combination of inefficient use of bits and ambiguity.

Therefore, from an information point of view there is no advantage in gamma encoding after the signal is converted to an 8 bit word if you want to store the result in an 8 bit word, because you haven't saved any bits and you have made it impossible to unambiguously restore all of the 8 bit input values. There could be a slight advantage in displaying the result, because you could take the companded result and send it directly to a non-linear display unit if one is displaying videos, but with today's computer speeds that's probably a minor advantage, and it's essentially irrelevant for still pictures.
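The tables above are easy to regenerate; a minimal sketch that applies the square root to every 8 bit code, rescales so the largest value maps to 255, and counts how many distinct output codes survive:

```python
import math

codes = range(256)

# Square root of each 8 bit code, no rescaling: only a handful of distinct outputs.
unscaled = [round(math.sqrt(c)) for c in codes]
print(len(set(unscaled)))  # 17 distinct codes (0 through 16) survive from 256 inputs

# Rescaled so 255 maps to 255: gaps at the low end ("inefficiency") and
# collisions at the high end ("ambiguity").
scale = 255 / math.sqrt(255)
rescaled = [round(math.sqrt(c) * scale) for c in codes]
print(len(set(rescaled)))                            # well under 256 distinct codes
print(sorted(set(range(256)) - set(rescaled))[:5])   # [1, 2, 3, 4, 5]: codes never produced
```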
 
alanrockwood (OP):

Anyway, what I am presenting isn't actually new. It's well known in signal processing theory that if there is a small amount of noise in a signal prior to digitization you can avoid some of the effects of analog-to-digital step size at the cost of having some noise in the output. In the case of an image you can avoid banding, but there will be some noise in the image. If there is already noise in the image (e.g. film grain) then you can avoid banding without much of a penalty.
 
alanrockwood (OP):

Things can get more complicated. A lot of this stuff goes under the topic of "noise shaping", but that's going beyond what we need to know here.
 

Ted Baker:

From the point of view of signal processing theory a non-linear transformation after conversion to a low bit word (e.g. 8 bits before the transformation) can only result in a loss of information if you want to store the result in a low bit word (e.g. 8 bits).

EXACTLY! This is the point that I have tried to make several times. In order to do the transformation BEFORE, one either requires a non-linear amplifier, which is found in some old drum scanners, or a higher bit depth A/D to do the digitisation first, after which the non-linear transformation can be applied.

Basically an 8 bit A/D on its own is inadequate; you really need at least 12 or 14 bits for film. This is very relevant to RAW scanning, where you are in effect doing your own signal processing. Conversely, if you are working with an image where most of the heavy lifting in the image processing has already been done, then 8 bit is often adequate.

Hope this makes sense.

In addition, negative/positive systems have an even greater non-linear transformation than what is required for gamma encoding.
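A minimal sketch of why the linear A/D needs the extra bits, assuming a display gamma of 2.2 (the parameters are illustrative assumptions): quantize linear scene values with an 8 bit versus a 14 bit A/D, gamma encode each to 8 bits, and count how many distinct codes survive in the darkest 1% of the range.

```python
def distinct_shadow_codes(adc_bits, out_bits=8, gamma=2.2, shadow_frac=0.01):
    """Count distinct gamma-encoded output codes produced by linear scene values
    in the darkest shadow_frac of the range, given a linear adc_bits A/D."""
    adc_max = 2 ** adc_bits - 1
    out_max = 2 ** out_bits - 1
    seen = set()
    for code in range(int(adc_max * shadow_frac) + 1):
        linear = code / adc_max                             # linear light, 0..1
        seen.add(round(out_max * linear ** (1.0 / gamma)))  # gamma encode to 8 bits
    return len(seen)

print(distinct_shadow_codes(8))   # 8 bit linear A/D: only a few distinct shadow codes
print(distinct_shadow_codes(14))  # 14 bit linear A/D: roughly ten times as many
```

With only an 8 bit linear A/D, most of the deep-shadow information is already gone before the gamma encoding ever happens, which is the point about needing the extra bits up front.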
 
alanrockwood (OP):

I found some images that illustrate what I was talking about. I found them in the thesis I referenced earlier.

https://uwspace.uwaterloo.ca/bitstr...d=1830F1F5528E43CDB940023C1BD664E2?sequence=1

The first image is the original image. The second image is the first image digitized with a simulated analog to digital converter. Only three bits are used in order to emphasize the effect. The third image is also digitized with a simulated three bit analog to digital converter, but first some noise is added.
[Attachments: picture 1.jpg, picture 2.jpg, picture 3.jpg]


As you can see, there is a terrible banding problem in the second image. The banding problem is gone in the third image. The cost is that there is noise. (Of course, if the noise came from grain in the original image then most of the noise would be there anyway.)

As I mentioned, this was done with a simulated three bit A/D converter, which exaggerates the effects.

Just for fun I am also uploading the error pictures. Those are picture 2 minus picture 1 and picture 3 minus picture 1.
[Attachments: picture 4.jpg, picture 5.jpg]


As you can see, the first error picture (the one related to the horrible banding) shows the distinct fingerprints of banding. The other one (the one with some noise added before the A/D conversion took place) shows no trace of banding, just some evenly distributed noise.

Let me emphasize that these pictures show exaggerated effects because only a three bit simulated A/D conversion was done. It would be more subtle at 8 bits.

Also, these images do not deal with the issue of companding (e.g. gamma encoding). However, if properly treated and considered, companding does not alter the basic idea presented here, though there can be some subtle effects.
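For anyone who wants to recreate pictures like these, a minimal sketch (using numpy, not the thesis code) that quantizes a smooth gradient with a simulated 3 bit A/D, with and without noise added first:

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth horizontal gradient, values 0..1, standing in for the original image.
gradient = np.tile(np.linspace(0.0, 1.0, 512), (128, 1))

levels = 2 ** 3 - 1  # simulated 3 bit A/D: 8 output levels

# Straight 3 bit quantization: a stairstep, i.e. visible banding.
banded = np.round(gradient * levels) / levels

# Add a little noise first (0.5 of one step here), then quantize: the banding
# breaks up into fine noise instead.
noisy = np.clip(gradient + rng.normal(0.0, 0.5 / levels, gradient.shape), 0.0, 1.0)
dithered = np.round(noisy * levels) / levels

print(np.unique(banded[0]).size)       # 8 distinct values per row -> 8 flat bands
print(dithered.mean(axis=0)[250:255])  # mid-gradient column means stay close to ~0.49
```

Saving banded and dithered as images would show the same banded versus noisy-but-band-free appearance described above.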
 
alanrockwood (OP):

EXACTLY! This is the point that I have tried to make several times. In order to do the transformation BEFORE, one either requires a non-linear amplifier, which is found in some old drum scanners, or a higher bit depth A/D to do the digitisation first, after which the non-linear transformation can be applied.

Basically an 8 bit A/D on its own is inadequate; you really need at least 12 or 14 bits for film. This is very relevant to RAW scanning, where you are in effect doing your own signal processing. Conversely, if you are working with an image where most of the heavy lifting in the image processing has already been done, then 8 bit is often adequate.

Hope this makes sense.

In addition, negative/positive systems have an even greater non-linear transformation than what is required for gamma encoding.
 

Ted Baker:

As you can see, there is a terrible banding problem in the second image. The banding problem is gone in the third image. The cost is that there is noise. (Of course, if the noise came from grain in the original image then most of the noise would be there anyway.)

That's a good example! Kodak found the same thing with the Cineon system, which was originally designed for Vision 2 film; later they released Vision 3, which has a greater Dmax. In order to fit it into the 10 bit word size there was thought to be a risk of these defects, but the grain from the film mitigates them.
 
alanrockwood (OP):

I believe we are in agreement then. I would point out one thing however. Some scanners have the internal hardware to scan at a high word length (e.g. 12, 14 or 16 bits) but have the option of saving at eight bits. I believe those scanners always scan at maximum word length internally and then convert to 8 bits (whether with or without gamma encoding), so they satisfy the requirements to minimize banding, provided there is sufficient noise present in the image. The alternative would be for the manufacturers to include two A/D converters (a high word length A/D converter and an 8 bit A/D converter), but that would add expense with no benefit.

One more point, somewhat repeating what I have posted before. The noise can come from several sources, including film grain, electronic noise in the detector and amplifiers, shot noise (from the quantized nature of light), and possibly even dithering added digitally before the conversion to 8 bits is done if the scanner is equipped to do dithering. The dithering could be done either by injecting some analog noise prior to A/D conversion or by injecting digital noise prior to conversion to 8 bits. If the dithering is done using a deterministic noise source (which sounds like an oxymoron, but isn't in this context) then it is possible to partly reverse the added noise after the conversion to 8 bits in a way that minimizes noise in the final result.

(Oops, I somehow missed quoting your post Ted, or rather I quoted it, but accidentally put my reply in a separate post, this one.)
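The "partly reverse the added noise" idea is usually called subtractive dithering. A minimal sketch of the difference, using a made-up ramp of values rather than real scanner data:

```python
import random

random.seed(3)

def rms(errors):
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

# A slow ramp of true values, expressed in units of one ADC step.
true_values = [100.0 + i / 100.0 for i in range(1000)]

plain, dithered, subtractive = [], [], []
for v in true_values:
    d = random.uniform(-0.5, 0.5)             # dither the system could regenerate later
    plain.append(round(v) - v)                # no dither: error follows the ramp (banding)
    dithered.append(round(v + d) - v)         # dither added but not removed: no banding, more noise
    subtractive.append(round(v + d) - d - v)  # dither subtracted after quantization:
                                              # no banding, and the extra noise is removed

# Roughly 0.29, 0.41 and 0.29 ADC steps respectively.
print(rms(plain), rms(dithered), rms(subtractive))
```

The plain and subtractive cases have about the same RMS error, but only the dithered and subtractive cases have an error that is uncorrelated with the signal, which is what removes the banding.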
 
jtk:

I found a Vuescan users guide at https://www.hamrick.com (specifically, https://www.hamrick.com/vuescan/vuescan.pdf). The copyright is 2017, so it seems to be a fairly new document. I am looking through the document to try to understand the nuances of how they use gamma.

My impression of Vuescan documentation in the past has been that it is rather terse and not always very completely explained, at least when I have tried to use the Help tabs in the program. I don't know if the current version will be any better, but I will see.

IMO Vuescan documentation is exceptionally clear. But then, I actually use it.
 

jtk:

I had not considered cleaning the scanner.

As you mentioned, I understand that the Canon is considered an excellent scanner, in the same league as the Nikon image quality-wise, though I have read that it may have just a tiny bit more noise in the shadows of a transparency scan.

Also, the photo you posted is a beautiful one.

Canon is by no means a rival to Nikon (certainly with 35mm). Where specifically did that idea come from?
 
alanrockwood (OP):

IMO Vuescan documentation is exceptionally clear. But then, I actually use it.
I use Vuescan a lot. I have four fs4000us scanners connected to four different computers and one Epson VS750 connected to another computer. I use Vuescan for all of them, and have scanned hundreds of images, maybe thousands, but I don't find the Vuescan documentation particularly clear.
 
alanrockwood (OP):

Canon is by no means a rival to Nikon (certainly with 35mm). Where specifically did that idea come from?
Multiple reviews of the Canon FS4000us or user comments, e.g.

https://www.rangefinderforum.com/forums/showthread.php?t=86805 ("The scans from the FS4000 were excellent. I don't consider the scans from the 5000ED to be better." It does criticize the slowness of the scanner.)

https://www.filmscanner.info/CanonFS4000US.html (which doesn't actually do a direct comparison to Nikon but does give it high praise for picture quality and criticism for slowness.)

https://www.extremetech.com/electronics/71700-high-quality-film-scanners/7 ("...the quality of the scans was excellent. If you don’t need the speed and FireWire capabilities of the Nikon scanner, the FS4000US is tough to beat.")

and a few others that I can't lay my hands on at the moment.
 

Les Sarile:

The Canon FS4000US was a competitor to the previous generation Nikon 4000 and has similar resolution, but that is it. The Nikon 4000 is much faster, Nikon ICE is far better than Canon FARE, and it has adapters for 50 mounted slides and whole roll feeders not available for the Canon. The Coolscan 5000 is even faster than the 4000 and ICE was further enhanced. Canon requires film holders, while you simply feed strips of film into the Nikon if you're not using the whole roll feeder.

Here is an example of the quality of Nikonscan ICE with a very dirty, scratched up frame of Kodak 160VC. The top left was my attempt at DSLR scanning of a color negative with color correction in post. I didn't even want to bother with addressing the dust and scratches on that, since Nikonscan does it perfectly.

[Attachment: orig.jpg]


Here is an example on Kodachrome. The top left was DSLR scanned with a Nikon D800, which is easier with slides, but again there is no dust and scratch removal. Nikonscan ICE works perfectly with all the versions of Kodachrome I have tried.

[Attachment: orig.jpg]
 
alanrockwood (OP):

More on the comparison between Canon fs4000us and Nikon scanner(s).

https://www.photo.net/discuss/threa...yssey-nikon-super-coolscan-ls-9000-ed.495265/ (Based on a side by side comparison of scanned slides: "Recommendations? Well, if you have lots of time, can tackle the interface problems (it has USB1 and SCSI), and are doing only small batches of slides or film at a time, the Canoscan FS4000US is a bargain with high-quality results. Otherwise, while the scans are not noticeably better, the speed and ruggedness of the Super Coolscan LS-9000 ED make it a clear choice for those with thousands of images on slides and film, most particularly since it handles 120 film as well as 135.")
 

jtk:

"not noticably better" means lack of concern with dye cloud or silver grain detail.

With a good inkjet printer and good paper those details do show...and they can be important visually (if they're important to the look of silver prints from the same neg).

Using a camera to copy film is great if the finest detail isn't important, but it's certainly not equal to a real scan from a dedicated (not flatbed) film scanner.

Minilabs rarely have good scanners...expecting top work from them is probably unfair.
 
alanrockwood (OP):

"not noticably better" means lack of concern with dye cloud or silver grain detail.

I am not actually interested in getting into an argument, but if you go to the link I gave in my last post you will see side by side comparisons between scans from the Canon fs4000us and the Nikon Super Coolscan LS-9000 ED.

In the 100% crop of the scans of the top of the circus tent the Canon was noticeably sharper but also noticeably grainier, possibly as a result of being sharper (i.e. being able to resolve the grain). In the blow up of the tiger's face the two images are about equal in apparent sharpness. It's hard to know if the degree of sharpness in the tiger's face is limited by the image itself or the scanner. The Canon shows a little more grain or scanner noise in the tiger image; it's hard to know which. If it's grain then the Canon is showing a slight edge in sharpness. If it's scanner noise then the Nikon is showing a slight edge with regard to noise, especially in the dark regions. The greater degree of grain/noise shows up especially in the nearly-black regions, but also in the tan and orange regions, so it seems more likely that the greater grain/noise in the Canon scan is largely due to better resolving of the grain, and this is consistent with the first image, where the Canon scan was both sharper and grainier. On the other hand, some websites have commented that the Nikon reaches into the shadows better, i.e. with slightly less noise.

On balance it's pretty clear that, in that test at least, the quality of the Canon scans is at least equal to the quality of the Nikon scans, depending on what weight one gives to different performance parameters, such as sharpness, noise, infrared cleaning of Kodachrome slides, and color palette. However, the Nikon scanner does have advantages in other areas, some of which have already been noted, such as scan speed.

There is one place where the Canon has an undeniable advantage, and that is the cost. You can pick up a Canon on ebay for a fraction of the cost of a Nikon scanner. It therefore comes down to a question of what one values more, cost or features (especially scan speed). The answer to that question is going to vary from one person to another.
 
Les Sarile:

There is one place where the Canon has an undeniable advantage, and that is the cost. You can pick up a Canon on ebay for a fraction of the cost of a Nikon scanner. It therefore comes down to a question of what one values more, cost or features (especially scan speed). The answer to that question is going to vary from one person to another.

The FS4000US's 4000 dpi is the equivalent of the Coolscan's 4000 dpi in terms of capturing the detail recorded on film.

As I pointed out, the FS4000US is not a proper comparison to the Coolscan 9000, as the 9000 is a medium format scanner, which is one major reason for its cost. Also, one bad example provided is not indicative of the lot.

It is better to compare the FS4000US to the Coolscan V, as they are a little closer in price and application. But even in this comparison, the Coolscan V is still much faster, with better ICE and better handling (no film holders required).

I only have the Canon FS2720, but using the same Canoscan software does not provide the consistent results you get from the Coolscan + Nikonscan combination across all films in quality of color and contrast. The post work required can take far more time than the already long scan times. Of course I only managed to scan a few thousand frames with the FS2720 before I finally got the Coolscan 5000, and the difference in workflow is a huge advantage.

[Attachment: orig.jpg]
 