So first, do you want each pixel represented with three values, like a normal image file, or just one value, like a raw file? Either way, even a simple 1 MP image makes for a very large spreadsheet. Excel allows at most 16,384 columns and 1,048,576 rows, so the column limit quickly becomes a constraint for wide, high-resolution images.
But again, I don't know of any off-the-shelf software that does this. And if each pixel carries three values, your data isn't really a 2D matrix either.
What's it for? You might be able to achieve something similar by editing a BMP file in a hex editor, since uncompressed image data is usually just a matrix of values to begin with.
This lack of posterization in the noisy image is not because the noise (grain) visually distracts us from noticing the posterization, but rather because the noise (grain) has prevented posterization from occurring in the first place.
This may seem like magic, but it is not. It is a well-known phenomenon in signal processing, usually called dithering: by adding a small amount of noise prior to digitization, you can eliminate quantization artifacts like posterization.
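A minimal sketch of the effect (numpy only, with arbitrary numbers, not from any of the posted tests): quantize a smooth ramp to a handful of levels, once as-is and once after adding noise of about one quantization step.

```python
import numpy as np

# A smooth ramp standing in for a noise-free continuous tone.
ramp = np.linspace(0.0, 1.0, 1024)

levels = 8                      # deliberately coarse so the banding is obvious
step = 1.0 / (levels - 1)

# Quantize directly: the ramp collapses into flat bands (posterization).
posterized = np.round(ramp / step) * step

# Add noise of roughly one quantization step first, then quantize (dithering).
rng = np.random.default_rng(0)
dithered = np.round((ramp + rng.uniform(-0.5, 0.5, ramp.size) * step) / step) * step

print(np.unique(posterized).size)        # 8 distinct values, in hard bands
# Averaged over small neighborhoods, the dithered version follows the original
# ramp instead of sitting on one of those 8 bands.
print(np.abs(dithered.reshape(-1, 32).mean(axis=1)
             - ramp.reshape(-1, 32).mean(axis=1)).max())
```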
I'm not sure if this is the right place to pose this question, but here goes.
I would like to take a 2D matrix of data (for example, a spreadsheet file) and convert it or import it into a digital image file, such as TIFF or some other bitmap image file format.
Is there a simple program that can do this? I am not interested in writing a program in Java or Python or whatever computer language one might name. I am looking for a canned solution, so I don't have to learn a programming language.
I may need to explain this in a little more detail. I want to treat each element of the matrix (for example, each cell of a spreadsheet file) as if it were a pixel. The horizontal and vertical positions in the matrix (e.g. row and column specifiers in a spreadsheet) would correspond to the pixel position in the image file. The value contained in that matrix position (e.g. that cell in a spreadsheet) would be the pixel value in the image file. Initially I am interested in treating it as a monochrome image.
What I am not interested in is taking a screenshot of what the spreadsheet file looks like on the screen.
Thanks.
This is basically anti-aliasing, but applied to the pixel intensities rather than to the spatial pixel positions.
I am not sure if this is the intent of the experiment, but you are exporting to JPEG which is a lossy compressed format. So the output images represent your intended data convolved with the effects of JPEG compression. That represents what most people do, since most people's end product is a JPEG, but for illustrative purposes maybe you should export to a lossless format like PNG or TIFF.
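As a quick illustration of the difference (a sketch with numpy and Pillow, not part of the original experiment): a lossless format round-trips the pixel data exactly, while a JPEG does not.

```python
import numpy as np
from PIL import Image

rng = np.random.default_rng(1)

# A small synthetic 8-bit grayscale gradient with a little noise added.
data = np.tile(np.linspace(0, 255, 256), (256, 1))
data = np.clip(data + rng.integers(-3, 4, data.shape), 0, 255).astype(np.uint8)

Image.fromarray(data, mode="L").save("test.png")                # lossless
Image.fromarray(data, mode="L").save("test.jpg", quality=90)    # lossy

png_back = np.asarray(Image.open("test.png"))
jpg_back = np.asarray(Image.open("test.jpg"))

print(np.array_equal(png_back, data))                            # True
print(np.abs(jpg_back.astype(int) - data.astype(int)).max())     # > 0
```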
My opinion, though I haven't done a lot of scanning and image manipulation, is that 8 bit depth is fine if you aren't doing a lot of post processing on the image. If you scan at 8 bit and then do contrast curve adjustment, you can rapidly run into the limits of the quantized data (posterization). Clearly, scanning at 8 bit, converting perfectly to 16 bit, and then contrast-adjusting would have the same problem. However, scanning at 8 bit, converting to 16 bit with a small amount of noise added (anti-aliasing the intensity values), and then adjusting would have less obtrusive quantization.
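If anyone wants to see what that "convert to 16 bit with a small amount of noise" step looks like, here is a rough sketch (numpy only, arbitrary values, not from any particular scanning software):

```python
import numpy as np

rng = np.random.default_rng(0)

def to_16bit_with_dither(img8):
    # Promote 8-bit to 16-bit, adding noise of about half an original
    # quantization step so later curve adjustments don't just magnify
    # the original 8-bit staircase.
    img16 = img8.astype(np.int32) * 257                 # 0..255 -> 0..65535
    noise = rng.integers(-128, 129, img8.shape)
    return np.clip(img16 + noise, 0, 65535).astype(np.uint16)

# Stand-in for a low-contrast 8-bit scan occupying only part of the range.
img8 = rng.integers(96, 160, (512, 512), dtype=np.uint8)
img16 = to_16bit_with_dither(img8)

# An aggressive linear stretch applied after the conversion.
stretched = np.clip((img16.astype(float) - 96 * 257) / (64 * 257), 0, 1)
print(np.unique((stretched * 255).round()).size)  # far more than the 64 input levels
```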
The earliest attempts at digitizing motion picture film at 8 bit produced "mach banding" in areas of continuous tone. Ironically, the cure found was to sprinkle a bit of noise into the image, and the artifact disappeared. This was mainly a problem with CGI or vector-based images that were "noise free" and banded when combined with film scans or viewed alone at 8 bit depth.
Film has a natural "noise" in the grain, so you can get away with a lot more in a scan than from a raster or vector file.
That's why 10 bit data was arrived at as the minimum for guaranteed banding-free reproduction of continuous-tone images; the quantization steps are fine enough that they don't leave visible gaps in the data.
While it's kind of apples to oranges, this article describes the banding issues that can arise in HD video projection.
The Deep Dive on Bit Depth (www.projectorcentral.com): "With the arrival of 4K content featuring high dynamic range (HDR) and wide color gamut (WCG), color bit-depth has become a critically important projector specification. Here's everything you need to know."
Will a screenshot do?
A screenshot will give an image of what a spreadsheet looks like on a computer display. I don't know of a way to convert a screenshot into a file whose pixel values are the numerical contents of the spreadsheet cells.
I did find a way to import a text file (which a spreadsheet can export) into an image file using the program ImageJ. I can describe the process in more detail if you are interested.
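For anyone who doesn't mind a few lines of script rather than a canned solution, the same idea can be sketched with numpy and Pillow (file names are hypothetical; the matrix is a spreadsheet exported as comma-separated text):

```python
import numpy as np
from PIL import Image

# Hypothetical file name: a spreadsheet exported as CSV, one number per cell,
# rows and columns corresponding to the desired pixel positions.
matrix = np.loadtxt("values.csv", delimiter=",")

# Scale the values to 8-bit grayscale and save as a monochrome TIFF.
lo, hi = matrix.min(), matrix.max()
pixels = ((matrix - lo) / (hi - lo) * 255).astype(np.uint8)
Image.fromarray(pixels, mode="L").save("values.tif")
```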
As a final comment, I have never seen a case where scanning at more than 8 bits made any practical difference in a pictorial image obtained from film, even when the image subsequently underwent extreme manipulation, particularly if the 8-bit scan was converted to 16 bits prior to editing. (I am not even sure that conversion to 16 bits prior to manipulation is strictly necessary, though it might provide some insurance against image degradation.) Can anyone supply an actual real-life example where 8-bit vs. 16-bit scanning made an observable difference in a picture captured on film?
When scanning a low-contrast (high dynamic range) film like HP5+, especially with reduced-agitation development and a scene in flat light, which is the most extreme practical example I can think of, the histogram might span only 25% of the available brightness range. Assuming the user wants the final picture to cover 100% of the brightness range, simply applying linear black and white points already discards 75% of the available tonal values, i.e. two bits of depth. That's the equivalent of 8 bits going to 6, 12 bits going to 10, or 16 bits going to 14.
And that's only with a linear shift in the brightness range. Many photographers, after doing this first linear change, add an S-shaped contrast curve. You could easily lose another bit or two in depth depending on how mild or extreme you make this curve.
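A rough back-of-the-envelope sketch of that linear stretch (numpy, arbitrary numbers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a flat-light scan: 8-bit values spanning only ~25% of the range.
scan8 = rng.integers(96, 160, (1000, 1000), dtype=np.uint8)   # 64 distinct levels

# Linear black/white point stretch to the full 0..255 range.
stretched = np.clip((scan8.astype(float) - 96) / (160 - 96) * 255, 0, 255)
stretched = stretched.round().astype(np.uint8)

# Still only 64 distinct output levels: effectively 6 bits, not 8.
print(np.unique(scan8).size, np.unique(stretched).size)
```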
If you start out with an 8-bit image and end up practically with a 5-bit one, you have 32 shades of gray. That is few enough that the human eye can readily distinguish the steps. If you're arguing that dithering can mitigate that, sure, but dithering also brings with it a reduction in detail. Resolution matters too, as the negative effect of dithering is much more noticeable at lower resolutions.
8 bits of gray is great when viewing images, but there is a reason people add an extra byte per channel when editing in which an expansion of the brightness range is going to take place.
For example, if one is using super-duper-ultra-fine-grained film, there might not be enough film grain present in the scanned image to provide a built-in dithering function. Are there any such films in common use? I don't think that even T-Max 100, Delta 100, or Acros 100 would qualify because, although they are fine-grained films, there is still some grain evident in the scans.
You also talk about the resolution of the scan, and that is a valid point, one which I did not bring up in this thread, though I believe I have discussed it in past posts in other threads. To preserve grain in the scan, one should always scan at a resolution sufficient to resolve it. (I think it's always a good idea to scan at the highest resolution the scanner allows anyway.)
Adox CMS 20 II was the only film I've used where I could not see any grain at all.
Well, I know I gain almost nothing from my Epson flatbed if I scan at more than 2400 dpi, and it takes much longer, so I would argue that the "best resolution to scan at" has that practical boundary as well.
Here is something that could be tried to test the limits of what you propose:
1) Print out an image with a very low brightness range, like a photograph that has faded over time: a subject that would originally have covered 100% of the range but now covers only 10%.
2) Use low contrast grainy film like Delta 3200 or HP5, pull processing, stand development, a low contrast lens, whatever you can do to handicap contrast, to take a picture of that printed image.
3) Scan as 8-bit without any automatic correction of the brightness range.
4) Edit this image (either as 8-bit or 16-bit) and try to stretch it to 100% of the brightness range again. Adjust the contrast curve to look as good as possible to your eye.
5) Repeat steps 3 and 4 with a 16-bit scan.
6) Observe the differences in the images; you could overlay them and apply a difference/divide/subtraction operation.
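For step 6, one way to do the comparison is a quick script along these lines (numpy and Pillow, hypothetical file names for the two edited results):

```python
import numpy as np
from PIL import Image

# Hypothetical file names for the edited results of steps 4 and 5.
a = np.asarray(Image.open("edited_from_8bit.tif").convert("I")).astype(np.int64)
b = np.asarray(Image.open("edited_from_16bit.tif").convert("I")).astype(np.int64)

diff = np.abs(a - b)
print("max difference:", diff.max(), "mean difference:", diff.mean())

# Save an exaggerated difference image so any banding pattern stands out.
scale = 255.0 / max(int(diff.max()), 1)
Image.fromarray((diff * scale).astype(np.uint8), mode="L").save("difference.png")
```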
I know that I have seen "jaggedness" in the histogram when I start editing from 8-bit images, but I have not experimented in enough detail to see whether this has a practical effect on my perception of the final image.