DIY 31 Megapixel Enlarger


travelight

Member
Joined
Dec 1, 2024
Messages
43
Location
Australia
Format
Med. Format RF
Yeah; QTR actually doesn't really care what kind of units it's being fed. However, since you're apparently applying the curve in PS or GIMP anyway, you might as well do the whole thing in Excel. The QTR tool seems nice, but it's kind of a black box and I've never had much luck with it; it tends to complain about the data a lot even when there's no clear reason for it. I try to steer clear of it most of the time.

It’s still not clear to me how to go about it in Excel. I’m fine with Excel itself, but having dried and measured the test wedge with my densitometer, I’m at a loss as to how to translate density/reflectance measurements into curve adjustments. I tried running the densities through QTR to get Lab values, but that’s just a translation of the actual measurements, not anything that helps with adjusting the curve in PS.
I guess I’ll just have to scan the wedge and use one of the tools that work off the scanned image (or just measure the densities in PS?)
 

travelight

Member
Joined
Dec 1, 2024
Messages
43
Location
Australia
Format
Med. Format RF
I did graph the densities and corresponding reflectance values in Excel for the current curve. I'm going to leave this for now and come back to it once I have my flatbed scanner set up and I can scan and measure the step wedge in PS. It's really not too hard to eyeball it, look for sections where the tone separation is not good, and adjust the shape of the curve accordingly. The densities are adjusted for paper-base white and my midtone is currently at 20.4% reflectance, which is just a little lighter than a Kodak grey card iirc, so that looks alright. Black/Dmax is at 0.83% and white at 97.72%; maybe I could pull the zero point down a tiny bit to get paper-base white on the zero-density step.
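For reference, the density-to-reflectance conversion behind these figures is R = 10^(-D). A minimal Python sketch of that conversion; the paper-base adjustment shown is an assumption about how the readings were normalized:

```python
def density_to_reflectance(d, paper_base=0.0):
    """Convert a reflection density reading to percent reflectance.

    Density and reflectance are related by R = 10**(-D); subtracting the
    paper-base density first expresses everything relative to paper white
    (an assumption about how the readings above were adjusted).
    """
    return 100.0 * 10 ** (-(d - paper_base))

# A reading of about D = 0.69 above paper base is the ~20.4% midtone
# mentioned above; an 18% grey card sits near D = 0.74.
print(round(density_to_reflectance(0.69), 1))  # 20.4
```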
 

Attachments

  • Screenshot 2025-01-01 at 15.44.10.png (53.7 KB)

koraks

Moderator
Joined
Nov 29, 2018
Messages
20,962
Location
Europe
Format
Multi Format
I guess I’ll just have to scan the wedge and use one of the tools that work off the scanned image (or just measure the densities in PS?)

It doesn't really matter what kind of data you start out with; it can be the L component from Lab* samples, or logD reflected density, whichever you have at hand. You can even start with an RGB reading (or any of its components). It helps if you're comparing apples to apples, at least in order not to get too confused in the process. When doing this manually/with Excel, I tend to work with L from Lab* because I can easily get that on both input and output (through a spectrophotometer reading, but a scanned or photographed photo and the sample tool in Photoshop/GIMP would work as well).

When taking the readings in Excel, I normalize them by defining the lightest reading as L=100 and the darkest reading as L=0, then adjust all the values in-between to that scale.
I find it also really helps to make a scatter plot of the data to get a feeling for what the adjustment curve needs to be.
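A minimal sketch of that rescaling step in Python (the readings are placeholders; it simply mirrors the description above rather than any particular tool):

```python
import numpy as np

# L readings off the printed wedge, one per step
# (placeholder values, not actual measurements).
measured = np.array([34.8, 36.0, 39.7, 44.8, 52.1, 65.3, 84.5, 93.9])

# Define the darkest reading as 0 and the lightest as 100, and rescale
# everything in between to that range.
normalized = 100.0 * (measured - measured.min()) / (measured.max() - measured.min())
print(normalized.round(1))
```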

For instance, I did this the other day for a Van Dyke digital negative linearization because a friend needed some inkjet negatives for this process. Here's the data I acquired, which I then inverted and normalized:

L-value of original digital step wedge | L-value measured on printed sample | Normalized, inverted L-value
100 34.77 0.0
95.8 35.72 1.6
91.3 36.18 2.4
86.7 36.99 3.8
82 36.41 2.8
77.7 35.91 1.9
72.9 37.46 4.5
68.1 35.78 1.7
63.2 38.25 5.9
58.6 39.07 7.3
53.6 39.68 8.3
48.4 42.43 12.9
43.2 44.84 17.0
37.8 48.25 22.8
32.7 52.09 29.3
27.1 57.22 37.9
21.2 65.26 51.5
15.2 75.3 68.5
9.3 84.46 84.0
3.6 91.97 96.7
0 93.94 100.0
These are the plotted data (scatter plot):
1735723783614.png

The blue line is the measured, normalized L-value (Y axis) against the input, original L-value (X axis).
The orange line is a smoothed out, idealized version of this curve that ignores the noise that resulted from real-world imperfections. There's a bit of a fudge-factor going on here, as you can see.
The green line is basically the orange line, but with the X and Y axes swapped. This is the correction curve that I used in GIMP (or Photoshop, etc.). Note that this correction curve does the linearization as well as the inversion to a negative in one go.
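To make the axis swap concrete, here is a minimal Python sketch; the point pairs are placeholders standing in for the smoothed (orange) curve, and, as noted above, reading the relationship the other way round handles both linearization and the inversion to a negative:

```python
import numpy as np

# Smoothed response: original input L (x) versus normalized L measured on
# the print (y). Placeholder points roughly shaped like the plot above.
input_L   = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100])
printed_L = np.array([100, 82, 62, 45, 30, 19, 12, 7, 4, 2, 0])

# The correction curve is the same relationship looked up in reverse:
# for each value we want on the print, find the input that produces it.
# np.interp needs ascending x, so both arrays are flipped.
targets = np.linspace(0, 100, 11)
correction = np.interp(targets, printed_L[::-1], input_L[::-1])

for t, c in zip(targets, correction):
    print(f"want {t:5.1f} on the print -> send {c:5.1f} to the negative file")
```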

I then made a test print with the compensation curve, measured and plotted it:
1735723980948.png

As you can see, it's not perfect. At this stage, I could have either modified the adjustment curve, or added a second adjustment curve on top of it to fine-tune the result (this is what Calvin Grier describes in his linearization manual). I admit I did neither and just printed the darn negatives already; these were intended for kallitypes and I know the photographer would be toning his prints, and playing with them in the darkroom anyway, so I figured that trying to linearize the negatives to perfection for my materials (paper, sensitizer, light source) and workflow would not be of much added value. YMMV.
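The "second adjustment curve on top" amounts to function composition: apply the first curve, then the fine-tuning one. A minimal sketch, assuming both curves are stored as (x, y) point lists on a 0-100 scale (an illustration of the idea, not Grier's actual procedure):

```python
import numpy as np

def compose(curve_a, curve_b, n=256):
    """Fold a fine-tuning curve into an existing one: apply A, then B.

    curve_a and curve_b are (x_points, y_points) pairs on a 0-100 scale,
    with x ascending. Returns sampled points for the combined curve.
    """
    x = np.linspace(0.0, 100.0, n)
    after_a = np.interp(x, *curve_a)
    after_b = np.interp(after_a, *curve_b)
    return x, after_b

# Example: a gentle midtone lift folded into an identity base curve.
base = ([0, 50, 100], [0, 50, 100])
tweak = ([0, 50, 100], [0, 55, 100])
x, y = compose(base, tweak)
```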

Hope this helps - and note that while the above looks simple enough, in practice, it always confuses the heck out of me whenever I go about re-inventing this particular wheel one more time (which I virtually always end up doing for some reason). There's merit to the collection of linearization tools/processes out there (Precision Digital Negatives, Easy Digital Negatives, Grier's Calibration series, etc.), especially if you want to have a proven, well-documented workflow that you can follow step by step.
 

Glauber

Member
Joined
Dec 3, 2024
Messages
26
Location
Norway
Format
4x5 Format
Yes, I configured the RPi 4 for that resolution.

The display board is actually basically "just" an FPGA; I discussed this with the SUMOPAI guy. I was of course thinking myself that this could be a fun FPGA project, but as the RPi works so well I don't see any point; maybe better to spend my time on printing and calibrating the last bits.

The "problem" with this solution is that the prints look absolutely perfect. There is no sign of an analog process; if you don't pick up the paper you cannot tell whether it is a quality inkjet print or a silver gelatin wet print. The only benefit I can see of doing traditional B&W prints is that they last longer than inkjets.

Hi Radiant,
Are you using the RPi just to display the image, or did you have to program some extra software to be able to expose the image with it? I was considering going the Raspberry Pi route, but I wonder how easy it will be to use it with the available code from avandesande.
 
(Post by Glauber deleted. Reason: I found out the answer)

Glauber

Member
Joined
Dec 3, 2024
Messages
26
Location
Norway
Format
4x5 Format
A quick update, I have been working on the software end of things and have many improvements. I changed the conversion algorithm and it is 100x faster. It can now create images that when displayed in sequence will allow for 10 and 12 bit (~4000) levels of grayscale. The exposure module steps through them automatically. The software works with a USB power strip to control the enlarger light source. The monitor is turned off so you initiate exposure with the space bar.
I have now found a PC for this and will order the screen next week, but I am a bit confused by some things...
The program will use a tif to generate the exposure frames, but will it convert the tif to the correct size automatically, or do I need to use the separate converter program?
The USB power strip you mention, could you point to the one you have used? I am unsure if I am able to switch the AC line on/off with the ones I see here for sale.
Thanks for helping!
 
OP

avandesande

Subscriber
Joined
Sep 7, 2002
Messages
1,343
Location
Albuquerque, NM
Format
Med Format Digital
I have now found a PC for this and will order the screen next week, but I am a bit confused by some things...
The program will use a tif to generate the exposure frames, but will it convert the tif to the correct size automatically, or do I need to use the separate converter program?
The USB power strip you mention, could you point to the one you have used? I am unsure if I am able to switch the AC line on/off with the ones I see here for sale.
Thanks for helping!

The PowerUSB strip is described here https://www.cnx-software.com/2013/01/06/powerusb-computer-controlled-power-strips-review/ . Unfortunately it looks like the site is down for the product. You can use your regular timer for the light on your enlarger; you just put the exposure in loop mode in the software with the approximate exposure time. I can't guarantee it will work with any other power strip.

The converter software is standalone; you don't use it with the exposure software. The exposure software takes care of everything; you just need to make sure the file is a 16-bit monochrome TIFF and that it fits on the LCD panel (7680x4320).
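As a side note (this is not part of the converter or exposure software), a quick way to sanity-check a file before feeding it in; a hedged Python/Pillow sketch with a hypothetical filename:

```python
from PIL import Image  # pip install pillow

PANEL = (7680, 4320)  # LCD panel resolution mentioned above

def check_tiff(path):
    """Report whether a TIFF looks like 16-bit monochrome and fits the panel."""
    img = Image.open(path)
    is_16bit_mono = img.mode in ("I;16", "I;16B", "I")  # Pillow's 16-bit gray modes
    fits_panel = img.width <= PANEL[0] and img.height <= PANEL[1]
    print(f"{path}: mode={img.mode}, size={img.size}, "
          f"16-bit mono: {is_16bit_mono}, fits panel: {fits_panel}")

check_tiff("negative.tif")  # hypothetical filename
```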
 

Glauber

Member
Joined
Dec 3, 2024
Messages
26
Location
Norway
Format
4x5 Format
The power usb strip is described here https://www.cnx-software.com/2013/01/06/powerusb-computer-controlled-power-strips-review/ . Unfortunately it looks like the site is down for the product. You can use your regular timer for the light on your enlarger, you just put the exposure in loop mode in the software with the approximate exposure time. I can't guarantee it will work with any other power strip.

The converter software is a standalone, you don't use it with the exposure software. The exposure software takes care of everything you just need to make sure it is 16 bit monochrome tiff and that it fits on the LCD panel (7680x4320).

Ah, that makes it much clearer. I will try to have a look at the code; maybe I can integrate an Arduino or something to use a simple relay instead... I am not used to C# but can try. I have tried the exposure program on my main PC, but since I don't have the external monitor yet, I guess it isn't working properly as it should. The image appears behind the main screen and goes away after the iteration through the frames is completed, but the main screen is always on top of the image. I also can't see a huge difference in how pixels are exposed at different tones between frames; I thought those would differ quite a lot between them to get the most correct contrast curve, but again that may be because of the lack of the monochromatic LCD.
Anyway, I will see what I can do when I get all the parts. Thanks for all the info!
 

Glauber

Member
Joined
Dec 3, 2024
Messages
26
Location
Norway
Format
4x5 Format

I guess one of these would be an even easier way to integrate the lamp control than a microcontroller....


And here is a library in C#.
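For what it's worth, those cheap USB relay boards are usually just a USB-serial chip driving a relay, so the lamp could also be switched from a few lines of script. A hedged Python/pyserial sketch; the port name and command bytes are assumptions for the common CH340-based single-channel modules (LCUS-1 style), so check the documentation for whichever board you buy (the C# library mentioned above would wrap the same idea):

```python
import time
import serial  # pip install pyserial

# Command bytes assumed for the common CH340/LCUS-1 style USB relay;
# other boards use different protocols.
RELAY_ON  = bytes([0xA0, 0x01, 0x01, 0xA2])
RELAY_OFF = bytes([0xA0, 0x01, 0x00, 0xA1])

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:  # port name is an assumption
    port.write(RELAY_ON)   # enlarger lamp on
    time.sleep(120)        # exposure time in seconds
    port.write(RELAY_OFF)  # enlarger lamp off
```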
 
Last edited:

travelight

Member
Joined
Dec 1, 2024
Messages
43
Location
Australia
Format
Med. Format RF

I guess one of these would be an even easier way to integrate the lamp control than a microcontroller....


And here a library in C#

I have just set mine to keep the image displayed for 6000 seconds, and I use my regular enlarger timer to control exposure times. The enlarger timer does take an input from a foot switch, so I guess I could rig something to control the timer via that input - but I find it easier at the moment to just hit expose on my touchscreen, cover the screen, and from there work as though I have a normal negative in the holder.
 
OP

avandesande

Subscriber
Joined
Sep 7, 2002
Messages
1,343
Location
Albuquerque, NM
Format
Med Format Digital
I have just set mine to keep the image displayed for 6000 seconds, and I use my regular enlarger timer to control exposure times. The enlarger timer does take an input from a foot switch, so I guess I could rig something to control the timer via that input - but I find it easier at the moment to just hit expose on my touchscreen, cover the screen, and from there work as though I have a normal negative in the holder.

That is what the loop checkbox is for; it will cycle through the images continuously. You should set the time for the sequence to approximately what your print exposure is, unless you are setting it to 8 bit; then it doesn't matter.
 

travelight

Member
Joined
Dec 1, 2024
Messages
43
Location
Australia
Format
Med. Format RF
That is what the loop checkbox is for, it will cycle through the images continuously. You should set the time for the sequence to approximately what your print exposure is, unless you are setting it to 8 bit. Then it doesn't matter.

Yeah, I have it on loop - but I wasn’t paying too much attention to the loop time - my exposures are up around 120s, and I think the time was set to maybe 10? I assumed less than the exposure time would work ok?
 

Graham06

Member
Joined
Oct 31, 2006
Messages
115
Format
Medium Format
Another tinkerer has joined the party. I bought an 8K 10.1” display, the Dell 7050 SFF PC linked earlier in this thread, a power supply upgrade and an NVIDIA 3050 graphics card. I have a custom screen resolution set from the screenshot linked earlier.

I am ready to plug it in and try it, but I am unsure how to set the resolution for the first time. I would like to avoid destroying the screen by driving it incorrectly like @avandesande did the first time. I am guessing the HDMI adapter board was designed to support a limited set of configurations and does not refuse to drive unsupported configurations in a safe way like common consumer electronics do. The monitor that came with the PC is too low-spec to use the high custom resolution, so I can’t set it and then change to the LCD display (perhaps I can if I can learn how to work blind).

I am leaning towards this:
-attach screen to adapter
-plug adapter into USB-C power
-plug adapter HDMI into second HDMI port
-use the NVIDIA display settings to configure the second display from the first monitor.

Can anyone on this thread provide tested steps? Do the above steps look reasonable?
@avandesande what did you do to destroy your first screen? Did you just plug it into the Intel integrated graphics port?
 

Graham06

Member
Joined
Oct 31, 2006
Messages
115
Format
Medium Format

Maybe I attach the adapter first without the lcd screen plugged in.
 
OP

avandesande

Subscriber
Joined
Sep 7, 2002
Messages
1,343
Location
Albuquerque, NM
Format
Med Format Digital
I've never had any problems with NVIDIA drivers. Just plug everything in and turn your computer on.
 

travelight

Member
Joined
Dec 1, 2024
Messages
43
Location
Australia
Format
Med. Format RF
I think I finally have my curve dialled in for mgfb warmtone :smile:
I ended up using the manual entry option on this tool https://thewetprint.com/curve/ to make a second curve to apply over the top of my eyeballed one - which seems to have worked perfectly :smile:
Will see how the prints look once dry.
 

Graham06

Member
Joined
Oct 31, 2006
Messages
115
Format
Medium Format
I've been making slow progress on this project. Everything has gone smoothly so far.

I managed to get the video driver configured, which took a few days 'cos the Sumaopai-suggested one was not quite right (will post mine for reference).

I made a frame for the lcd screen out of layers of mattboard.

I have made a few experimental test prints with and without filters, and have a few using @avandesande's lut.
It looks like I will have problems getting enough light: printing a 1/4 5x7 at f/8 requires 16 s.
I don't have prints looking right yet. First tests are using Ilford MG WT FB semi-matt.

My next steps:
  • Scans of a no filter print of @avandesande's step wedge with good blacks.
  • Set up a github project for open source contributions.
  • Make a 3d model for the lcd for a 3d printed version
  • Post my own equipment list, costs, scans and curves
  • Make a uv photopolymer film contact test.
  • Decide on the workflows and features I would like
Workflows:
  • Print purely digital files, e.g. from a digicam: not that interesting for me, but it seems like a good exercise for getting things precise.
  • Print scanned negatives: Will most likely do this most often. Have a few almost unprintable negatives that I have fixed well enough digitally.
    • Scan on Epson V850
    • Invert and correct in Photoshop
    • Make a paper print that looks like what I see on the screen
  • Digital dodge and burn: place a negative that needs dodging and burning on the LCD. Use GIMP or Photoshop to make grade 1 and 5 dodge and burn masks. Use the computer to keep track of the exposure plan. The problem for me is that the LCD cuts out a lot of light. I might have to investigate buying/making a high-power LED head for my Beseler 45MX. My computer also doesn't control the enlarger on/off switch.
  • Simulate analog workflow on computer. For purely digital purposes or as an analog aid
    • Scan negatives using a constant linear profile. I wonder how one does this in a way that can be communicated to/replicated by others? Scan with a Stouffer step wedge, perhaps.
    • Print a step wedge onto paper using each standard workflow (each filter grade) and record the exposure time. You can print other processes like cyanotype, salt prints or photogravure too.
    • Print toning variations e.g. selenium or farmer's reducer etc.
    • If you are precise, you only have to do each thing once. If we can communicate the process precisely, only one person ever has to do a given step. (This would be a fun exercise in itself, for the right people!)
    • Now you build an app that applies these curves to the scanned negative. The app will let you adjust the exposure time and you can see the effect ( I imagine fancier versions that simulate the mottled look of a cyanotype or the look of Fomatone MG glossy vs Ilford MG Satin )
    • The first result is a digital image that is closer to what your photo would look like printed on your favourite paper, which is a desirable result in itself. (Inverting scanned negatives is terribly imprecise, and it has taken me years of practice to get at least decent results. The charm of all the wonderful papers is nowhere to be found in this process.)
    • The second result is a plan for how to print your negative to get the look you liked in the app. Every bit of imprecision will add up, so it might not be exactly what you expected.
I ended up using the manual entry option on this tool https://thewetprint.com/curve/ to make a second curve to apply over the top of my eyeballed one - which seems to have worked perfectly :smile:
Could you post your lut file once you have it? That looks like a handy tool. I will probably use it next.

@avandesande are you going to release the source code to your exposure app under an open source license? If I wanted to experiment I would have to start my own app from scratch, and you have already put a lot of work into making a workflow that works well for you. I bought the Dell 7050 SFF PC you linked, which came with Windows 10, so I have been able to use the exes you posted and the app mostly works fine.
 
Joined
Nov 20, 2020
Messages
93
Location
Western Massachusetts
Format
8x10 Format
Wish I’d found this thread a couple months ago! It would have definitely saved me some time.

I’ve just finished building a digital negative carrier using the 10.1” 16k LCD (I like big prints… ). Similar to what it sounds like you’ve done, I wrote a program which turns a 16-bit grayscale TIFF into a series of 512 3-bit frames to achieve 12-bit tonal depth. The enlarger I’ll be using it with is in storage, so I need to get that out and set up before I can start printing, but the digital stuff is done.

Have any of you successfully run your displays off of a Raspberry Pi? I’m told it should be possible, but I haven’t been able to get Linux to talk with the display properly.
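For readers wondering how 512 three-bit frames add up to roughly 12 bits: each pixel's 16-bit value can be spread over the frame sequence so that the per-frame values sum to the target level, and the time-averaged exposure reproduces the finer tonality. A minimal Python sketch of that general idea (an assumption about the approach, not Ethan's or avandesande's actual conversion code):

```python
import numpy as np

FRAMES = 512
MAX_PER_FRAME = 7                    # 3-bit frames hold values 0..7
LEVELS = FRAMES * MAX_PER_FRAME + 1  # 3585 possible totals, close to 12 bits

def frames_for(image16):
    """Yield FRAMES frames of 3-bit values whose per-pixel sums approximate
    the 16-bit input rescaled to 0..FRAMES*MAX_PER_FRAME."""
    target = np.rint(image16.astype(np.float64) / 65535.0 * (LEVELS - 1)).astype(np.int32)
    base, extra = np.divmod(target, FRAMES)   # every frame shows `base`;
    for i in range(FRAMES):                   # the first `extra` frames show one more
        yield (base + (i < extra)).astype(np.uint8)

# Example: a mid-grey pixel's frame values sum back to about half of the range.
mid = np.full((1, 1), 32768, dtype=np.uint16)
print(sum(int(f[0, 0]) for f in frames_for(mid)))  # 1792 of 3584
```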
 
OP

avandesande

Subscriber
Joined
Sep 7, 2002
Messages
1,343
Location
Albuquerque, NM
Format
Med Format Digital
Wish I’d found this thread a couple months ago! It would have definitely saved me some time.

I’ve just finished building a digital negative carrier using a the 10.1” 16k lcd (I like big prints… ). Similar to what it sounds like you’ve done, I wrote a program which turns a 16-bit grayscale tiff into a series of 512 3-bit frames to achieve 12-bit tonal depth. The enlarger I’ll be using it with is in storage, so I need to get that out and set up before I can start printing, but the digital stuff is done.

Have any of you successfully run your displays off of a raspberry pi? I’m told it should be possible, but I haven’t been able to get Linux to talk with the display properly
Hi Ethan, here is a link to a Raspberry Pi image sent to me by the manufacturer that works with the 16k screen.



Congrats on working out the 3 bit conversion. Would you care to share the bit mapping? I spent about a day on it and gave up.

I thought it would be something obvious like

111 111 11 1 111 111 1 11 111 111

but I could never get it to work right. It always had problems on high resolution targets and I couldn't get anything out of it with the pattern generators I use for troubleshooting.

Thanks,
 

travelight

Member
Joined
Dec 1, 2024
Messages
43
Location
Australia
Format
Med. Format RF
Could you post your lut file once you have it? That looks like a handy tool. I will probably use it next.

Here’s the LUT. It’s for mgfb classic warmtone glossy, in a Durst L184 condenser enlarger with bare opal bulb. No filtration at all. My 8x10 exposure time is 128s at f11, which is right on the upper end of tolerable. I will likely have to find a brighter light source if I continue, just to have reasonable exposure times.

On Dropbox, because I can’t attach a tif: https://www.dropbox.com/scl/fi/9pf2...ey=557jvdvkioj1yyrepvce0zl8m&st=l2qqxnil&dl=0
 
Last edited:

travelight

Member
Joined
Dec 1, 2024
Messages
43
Location
Australia
Format
Med. Format RF
Sample from the latest LUT, unedited scanned print on Epson 4870. I think I need to review the image file as I may have over-brightened the highlights to compensate for the previous LUT. In any case, looks like I get good tones and separation throughout.
 

Attachments

  • stephanie_museerodin001.jpeg (632.9 KB)
Joined
Nov 20, 2020
Messages
93
Location
Western Massachusetts
Format
8x10 Format
Hi Ethan, here is a link to a raspberry pi image sent for me by the manufacturer that works with the 16k screen.



Congrats on working out the 3 bit conversion. Would you care to share the bit mapping? I spent about a day on it and gave up.

I thought it would be something obvious like

111 111 11 1 111 111 1 11 111 111

but I could never get it to work right. It always had problems on high resolution targets and I couldn't get anything out of it with the pattern generators I use for troubleshooting.

Thanks,


The bits are mapped so that the first pixel's value is (blue bit 1, green bit 1, red bit 1), the second pixel is (blue bit 2, green bit 2, red bit 2), and so on.
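To make that mapping concrete, here is a hedged Python sketch of packing eight 3-bit monochrome pixels into one 24-bit RGB input pixel, following the description literally; which colour channel carries the most significant of the three value bits is an assumption, so swap the shifts if the panel disagrees:

```python
import numpy as np

def pack_row(mono3):
    """Pack 3-bit monochrome pixel values into RGB bytes, eight mono pixels
    per RGB input pixel: mono pixel k within each group supplies bit k of
    the blue, green and red channels."""
    mono3 = np.asarray(mono3, dtype=np.uint8).reshape(-1, 8)
    # Assumption: blue carries the most significant of the three value bits.
    b = np.packbits((mono3 >> 2) & 1, axis=1, bitorder="little").ravel()
    g = np.packbits((mono3 >> 1) & 1, axis=1, bitorder="little").ravel()
    r = np.packbits(mono3 & 1, axis=1, bitorder="little").ravel()
    return np.stack([r, g, b], axis=-1)  # one RGB triple per eight mono pixels

print(pack_row([7, 0, 0, 0, 0, 0, 0, 0]))  # first mono pixel = 7 -> bit 0 set in R, G and B
```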
 
OP

avandesande

Subscriber
Joined
Sep 7, 2002
Messages
1,343
Location
Albuquerque, NM
Format
Med Format Digital
the bits are mapped so that the first pixel's value is (blue bit 1, green bit 1, red bit 1), the second pixel is (blue bit 2, green bit 2, red bit 2) and so on

Wow, I am writing to the byte stream directly, so this is going to require some ugly code. Did you figure it out or did you find a reference somewhere? I asked the manufacturer for documentation or an SDK and they didn't have anything.

Thanks!
 
Joined
Nov 20, 2020
Messages
93
Location
Western Massachusetts
Format
8x10 Format
Wow, I am writing to the byte stream directly so this is going to require some ugly code. Did you figure out or did you find a reference somewhere? I asked the manufacturer for documentation or SDK and they didn't have anything.

Thanks!
Well, I thought I had figured it out, but I just ran a test and that might not be correct… some pixels are showing up as the wrong values, so I’ve got to go back into the code and see what’s wrong. My suspicion is that one of the bits isn’t in the right order currently. Once I figure out where I went wrong I’ll let you know.

[edit: the way I programmed it, the frames are generated and saved as TIFFs by a first script before getting displayed; are you continually writing the frames?]
 