Not to keep posting digi vs. traditional discussions, but the implications of and differences between the two always interest me. I was reading an article on how the CCD translates an image, and came across something I hadn't noticed before. The article described the 'lattice' structure of the CCD, then said the gaps between the lattice sites, which contain no sensor activity, are dead zones; for those areas, the computer guesses the information that would most likely have been there and fills in the blanks.

If that's the case, then a percentage of the CCD image would be somewhat artificial, wouldn't it? It's already converting light to electricity, which changes things, and then it fills in blanks for the information it couldn't gather. So does this mean film will always be a representation of a scene, while digital will only be a numerical representation of what the scene looked like?
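For anyone curious what that "fill in the blanks" step might look like, here's a minimal sketch of the idea. This is purely illustrative, not any camera's actual firmware; real cameras use far more sophisticated interpolation, but the principle is the same: a pixel the sensor never sampled gets an estimated value computed from its measured neighbours.

```python
def fill_dead_zones(grid):
    """Return a copy of `grid` with None entries (dead zones) replaced
    by the mean of their measured up/down/left/right neighbours."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] is None:
                neighbours = [
                    grid[nr][nc]
                    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] is not None
                ]
                # The "guessed" value is an estimate, never a measurement.
                out[r][c] = sum(neighbours) / len(neighbours) if neighbours else 0
    return out

# A 3x3 patch of brightness values where the centre pixel fell in a dead zone:
patch = [[10, 20, 30],
         [40, None, 60],
         [10, 20, 30]]
filled = fill_dead_zones(patch)
print(filled[1][1])  # 35.0 -- the average of 20, 20, 40 and 60
```

So the filled-in pixel really is "artificial" in the sense the article means: it's a plausible number derived from its surroundings, not light that was actually recorded there.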