If I boost the saturation in GIMP, the photo looks well printed.
Another question: are there also DIR couplers in printing papers?

I don't think so, no. The simple reason is that they're not necessary, since the problem of crosstalk between the color layers is avoided in other, simpler ways that are not possible in a recording material.
What's tricky is understanding how the mask gets automatically (and properly) corrected out in a darkroom situation.
Could you post the negative scan of the village scene?
the SPSE Handbook of Photographic Science and Engineering
I searched a little and I could get a scan of the SPSE Handbook of Photographic Science and Engineering, 1973. Is this the same book as the IS&T one you are talking about, but an earlier version?

As koraks pointed out, the main significance of the IS&T has been in the organization itself, as well as the many technical conferences it has organized. It was originally known as SPSE, the Society of Photographic Scientists and Engineers. At some point, circa 1980 I'm thinking, they changed the name to encompass other forms of imaging: thus IS&T, the Society for Imaging Science and Technology.
Regarding the usefulness of past IS&T papers, I'd say that very few people on this site would find much direct use.

I agree, but this specific case is an exception. As I indicated before, most of the research underlying the kind of "print through analysis" @laser mentions was published in the IS&T journals. Moreover, a lot of that work dealt specifically with scanning in the motion picture domain, a major effort that Kodak invested significant time and resources in. It's somewhat ironic and disappointing that within the stills domain, we now seem to be in the process of reinventing the wheel. It's not just happening here, either. People, especially in the cultural heritage sector, are presently working on this very topic, and their approaches range from simplistic (in some cases doomed to fail, as they are based on erroneous assumptions) to comparable to what @AbsurdePhoton is trying to do, but more with an eye on digitization of collections and topics such as degradation of materials. What they seem to have in common is that none of them appears to be making use of the research output that's already there.
People esp. in the cultural heritage sector are presently working on this very topic ...

Yes, an area where I don't have much background (nor interest, which I guess tends to go hand in hand). In such cases I think that a typical 3-color scanner is perhaps not good enough, because the color makeup of "artifacts," or whatever one may call them, is "richer," for lack of a better word. So yes, the IS&T library of published papers contains studies using what they call "multi-spectral imaging." (Roy Berns of RIT is one name I recall being very into this sort of thing.) My superficial understanding of multi-spectral imaging is that it essentially captures more than the typical 3 color images; perhaps 6 or 8, for example. (This could be done, for instance, with a monochrome digital camera equipped with a filter turret and a set of carefully selected filters.) So the result would sit somewhere between a 3-color scan and a more complete spectrophotometric scan.
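That filter-turret idea can be sketched as follows. Everything here is illustrative: the filter bands are made up, and `capture_through_filter` is a placeholder that synthesizes a frame where a real rig would trigger a monochrome camera once per filter.

```python
import numpy as np

# Hypothetical set of narrow-band filters on the turret (nm, illustrative).
FILTER_BANDS_NM = [420, 470, 520, 570, 620, 670]

def capture_through_filter(band_nm, height=4, width=4,
                           rng=np.random.default_rng(0)):
    """Stand-in for one monochrome exposure through one filter.

    A real implementation would trigger the camera here; we just
    synthesize a plausible 2-D frame so the pipeline is runnable.
    """
    return rng.random((height, width))

def multispectral_scan():
    """Stack one monochrome frame per filter into an (H, W, bands) cube.

    With 6 or 8 bands the result sits between a 3-color scan and a
    full spectrophotometric measurement, as described above.
    """
    frames = [capture_through_filter(b) for b in FILTER_BANDS_NM]
    return np.stack(frames, axis=-1)

cube = multispectral_scan()  # one plane per filter band
```

The point is only the data shape: six registered monochrome planes instead of three, which downstream code can then project onto whatever spectral basis it needs.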
But for the dyes and couplers part, for the first time in this project I think I'll have to approximate.

I wonder to what extent you must really delve into this. I agree with @Lachlan Young that to a large extent you could consider the chemical package of the color film as a black box. All you seem to want to achieve is to mathematically model the behavior of the film (again, pretty much as a black box) as it represents an original, real-world scene. This means that you'll naturally bunch a massive number of parameters together into a single algorithm to begin with, or at least that's how it appears to me.
densityTotal = densityYellow + densityMagenta + densityCyan + densityMin (the last one is what gives the developed film its orange hue; is this the "mask" you're all talking about?)

I skipped over this earlier, but the way you describe it here gives me the impression that you may be considering the orange mask as a fixed entity. Evidently it's not: it's an image-wise, variable mask. After all, that's what it's intended for, to compensate for imperfections in two of the three dye images, and that compensation must naturally follow the dye density. However, in this case too I really doubt whether you need to take it into account at all, since you could very well assume that the whole film package acts as a simple input/output model, with a certain distribution of spectral intensities going in and a certain dye density output coming out. (What we have not touched upon, and what you seem not yet to have picked up from the scanning discussion I hinted you at, is the question of which transmission spectra you would be keeping in mind in the first place; perhaps you're thinking of a theoretical/perfect-world/archetypal Red, Green and Blue, which of course does not exist in reality.)
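To make the fixed-versus-variable distinction concrete, here is a minimal numpy sketch. All numbers (densities, D_MAX, the coupler fraction) are made-up illustrations, not measured film data: a fixed densityMin just adds a constant offset, while a colored-coupler mask varies inversely with how much image dye has formed.

```python
import numpy as np

# Illustrative per-pixel dye densities, thin -> dense areas of the negative.
d_cyan    = np.array([0.10, 0.80, 2.00])
d_magenta = np.array([0.12, 0.75, 1.90])
d_yellow  = np.array([0.15, 0.70, 1.80])

D_MAX = 2.2           # assumed maximum dye density a layer can reach
MASK_STRENGTH = 0.25  # assumed fraction of couplers that are colored

# Naive model: the orange mask as a fixed minimum density everywhere.
d_min_fixed = 0.25
total_fixed = d_cyan + d_magenta + d_yellow + d_min_fixed

# Coupler model: unreacted colored couplers remain where little dye formed,
# so the mask density *decreases* as the image dye density increases.
mask_cyan    = MASK_STRENGTH * (D_MAX - d_cyan)     # couplers in the cyan layer
mask_magenta = MASK_STRENGTH * (D_MAX - d_magenta)  # couplers in the magenta layer
total_coupler = d_cyan + d_magenta + d_yellow + mask_cyan + mask_magenta

print(total_fixed)    # constant 0.25 offset at every density level
print(total_coupler)  # image-wise mask: large in thin areas, small in dense ones
```

In the second model the mask density is strongest where the dye image is weakest, which is exactly the "image-wise, variable" behavior described above.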
That's how I am including the min dye density.
So for me, yes, the orange mask is sort of a fixed density that I combine (min dye density curve); am I wrong about that?

Yes, that's incorrect.
The peaks are clearly indicated by the dyes' middle gray curve.

I have a feeling you're now talking about either dye density curves or spectral sensitivity curves - probably the former? The more expansive datasheets (e.g. Kodak's) contain a number of different curves, and it makes sense to make explicit which one you're referring to.
But this would have to be done for each film, and even for each paper's dye curves as well. A lot of work.

Given the fact that each film product is unique, I think it's unavoidable that you end up doing a lot of work one way or another. An empirical approach involving test patches may even turn out to be the quick alternative to a much more time-consuming theoretical exercise. On the paper side, the good news is that the variety is much smaller if you focus on what's available on the market right now, especially since Fuji's papers are basically all the same emulsion, just applied in different thicknesses, which results in fairly minor variations between products.
So for you, what would be the right way to include the orange mask? I am a bit lost now.
Having spent some time messing around with mask-removal and inversion approaches, you often end up back where Fuji's Frontier / Image Intelligence mask and colour-correction routines landed (done well enough that nobody can screw up the results, rather than done perfectly), with their habit of chopping off the ends of the scale to make it easier to correct consistently well without requiring more operator skill or intervention.
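A minimal sketch of that "chop off the ends of the scale" style of inversion. The percentile choices and the per-channel normalization here are my own assumptions for illustration, not Fuji's actual Image Intelligence algorithm:

```python
import numpy as np

def invert_negative(neg, clip_lo=0.5, clip_hi=99.5):
    """Invert a linear RGB scan of a colour negative.

    Per channel: clip the extreme tails of the histogram (discarding
    the ends of the scale, as minilab software tends to), then rescale
    to 0..1 and invert. Percentiles are illustrative, not calibrated.
    """
    neg = neg.astype(np.float64)
    out = np.empty_like(neg)
    for c in range(neg.shape[-1]):
        ch = neg[..., c]
        lo, hi = np.percentile(ch, [clip_lo, clip_hi])
        ch = np.clip(ch, lo, hi)        # chop off the ends of the scale
        ch = (ch - lo) / (hi - lo)      # normalize; this also removes the
                                        # channel-wise offset of the orange mask
        out[..., c] = 1.0 - ch          # negative -> positive
    return out

# Toy 2x2 "scan": orange-mask-ish base with one thinner and one denser patch.
scan = np.array([[[0.85, 0.55, 0.25], [0.85, 0.55, 0.25]],
                 [[0.30, 0.20, 0.10], [0.60, 0.40, 0.18]]])
pos = invert_negative(scan)
```

Because each channel is stretched independently, the constant part of the mask disappears for free; the price is exactly what's described above, namely that anything beyond the chosen percentiles is thrown away.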
On the other hand, Frontiers have the reputation of blocking up shadows very easily, more than any other scans I have used (Epsons included).
I find this all very discouraging... but I want to find out as much as I can. Thanks for all this, guys.

Well, nobody said this was going to be easy!
What do you think of it?

As always, the question is what the purpose/intent is. The intent here seems to be to make an 'analog-like' version of a natively digital file. I'd have to see some examples to say anything useful. From a theoretical viewpoint, an inherent problem is that the workflow starts with a digital file that already represents some kind of transfer function from real-world spectral intensity to RGB values (even a RAW camera file is 'biased', so to speak). That's not really a problem if you just want to cook something up that looks a bit like an analog print, but it may be an issue if you're trying to simulate the whole process from a more technical viewpoint.
I am trying to simulate analog films in software.

The question is at what point it's 'good enough' for your purpose.
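For that kind of simulation, the usual starting point is the characteristic (H&D) curve: per channel, density as a function of log exposure. Here is a minimal sketch using a made-up S-shaped curve, not any real film's data; every parameter value is an assumption for illustration:

```python
import numpy as np

def characteristic_curve(log_e, d_min=0.2, d_max=2.2, gamma=0.6, pivot=0.0):
    """Toy H&D curve: density as a function of log exposure.

    A logistic gives the classic toe / straight-line / shoulder shape.
    d_min stands in for base + fog, d_max for the maximum density,
    gamma for the slope of the straight-line portion. All numbers are
    illustrative, not measured from a real emulsion.
    """
    return d_min + (d_max - d_min) / (1.0 + np.exp(-4.0 * gamma * (log_e - pivot)))

# Density rises monotonically with exposure and flattens at toe and shoulder.
log_exposures = np.linspace(-3, 3, 7)
densities = characteristic_curve(log_exposures)
```

A full simulation would fit one such curve per dye layer from datasheet or test-patch data; the 'good enough' question then becomes how closely the fitted curves need to track the real film.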