How many will see it as you intended? ...
I assume that in the analog world, similar problems existed regarding the reproduction of color pictures.
I don't use a good self-calibrating graphics monitor, so my pictures will look good on random people's monitors.
To an extent, though the issues were of course very different in technical nature. Anyway, this is one reason why color reversal film reigned supreme in the publishing world: it gave an objective baseline reference.
But perhaps we should stick to the topic of digital presentation here, since that's the focus of the original question. The ins & outs of analog color reproduction have been discussed extensively on this forum already; there's always room for more, of course, but it'd be best to start a new thread about it in an appropriate section.
In percent, how big is the color deviation?
Insofar as anyone could answer this quantitative question, I'm afraid the answer would still not mean much for how people perceive these differences. The differences between browsers and platforms can be huge or minimal, depending on your frame of reference, personal sensitivity to color, contextual variables (lighting conditions, what you were doing just before looking at any examples) etc. The final conclusion is that some people will care at least at some points, and some won't. The question revolves naturally around the people who do care and the extent to which they can do things to combat the issue. I think that extent is also naturally limited given the fact that you have basically no control over someone else's display rendering.
You could in theory measure people's screens and determine the delta E of their screens versus a calibrated screen. But that is a very difficult thing to do.

Not necessarily very difficult, provided you have the right equipment. You can arrive at some kind of XYZ or CIELAB measurement for a range of patches, for instance. The problem is that if you then calculate a percentage difference, virtually nobody will be able to interpret that number intuitively. Assume you end up with a 2% difference on a* in Lab between two different screens - what does that mean, visually? Only people who work with these numbers all the time will be able to make sense of such a figure, and even they are subject to contextual and subjective factors like anyone else.
The problem is not so much the measurement itself - it's the fact that it doesn't carry any real-world meaning as an indicator of perceptual differences.
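To make the patch comparison above concrete, here is a minimal sketch of the simplest delta E calculation (the CIE76 formula, which is just the Euclidean distance in Lab space). The measurement values are made up for illustration; real screen measurements would come from a colorimeter or spectrophotometer.

```python
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two CIELAB colors (CIE76 delta E)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical measurements of the same near-white patch on two screens
screen_a = (96.0, 1.2, -0.8)   # L*, a*, b*
screen_b = (94.5, 3.0, 0.4)

print(round(delta_e_cie76(screen_a, screen_b), 2))  # prints 2.63
```

A common rule of thumb is that a CIE76 delta E of roughly 2.3 corresponds to a just-noticeable difference, which illustrates the point: the raw number only means something to people who already know that reference scale.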
Screen brightness settings are always a pain for us; we never know what our clients are set up with. We are always at 60% brightness, but it seems many are at 100%, which will cause darker prints if we do not do a test print.
This is like timing/grading motion picture release prints for theatrical showings. Some theaters had xenon arc lamphouses, some ran tungsten bulbs, and a few art houses even scrounged up vintage carbon arc rods for their presentations. Not only did the color temperature vary, but the screen brightness varied dramatically.
Were/are there standards? Yes. Good luck enforcing them.
I don't think it works that way. Not having a calibrated monitor just means that whatever color and brightness you're seeing is as random as on any other uncalibrated screen. The randomness on your end does not somehow compensate for the randomness on your customers' ends. If that were the case, then everybody would just use an uncalibrated screen, because everything would look the same - how simple life would be!