Increase Resolution!

RalphLambrecht

Subscriber
Joined
Sep 19, 2003
Messages
14,649
Location
K,Germany
Format
Medium Format
Do you think it would be possible to take a few images of the same scene with slightly different camera positions, assemble them in PS, and thereby increase sensor resolution? I'm thinking of moving the camera just a few microns sideways and up and down. I'm hoping to capture more image pixels that way and possibly improve resolution, à la the gigapixel camera. :wondering:
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
In principle something like that could work, but there are many things to consider.

In some ways it might be better to change the angle of the camera ever so slightly, with the axis of rotation coinciding with the front principal plane of the lens, rather than shifting the camera position.
 

Doyle Thomas

Member
Joined
Oct 28, 2006
Messages
276
Location
VANCOUVER, W
Format
8x10 Format
I don't understand how this would work to increase (apparent) resolution. Are you talking about merging the files as layers and setting the opacity? To make such a small move in camera position would require some precision device attached to the tripod and it seems to me would result in a loss of resolution (i.e. camera shake).
 

L Gebhardt

Member
Joined
Jun 27, 2003
Messages
2,363
Location
NH
Format
Large Format
Any movement of the lens will change the scene. If you can get a shift adapter and just move the camera body you could in theory increase the resolution. That's effectively how Olympus is doing it in the new EM5II, just with the sensor shift being automatic.

Why not use a longer lens and stitch? That works very well if the subject is static. It also effectively gives you a higher resolution lens.
 
Joined
Jul 21, 2003
Messages
583
Location
Philadelphia
Format
8x10 Format
Shifting the sensor a few microns is what Hasselblad has been doing for a few years.

Dead Link Removed
 

NedL

Subscriber
Joined
Aug 23, 2012
Messages
3,388
Location
Sonoma County, California
Format
Multi Format
Hi Ralph,

I sometimes use a simple webcam to make photographs of the sun with a solar telescope. The images are then stacked in a program like RegiStax or similar.

The main purpose of the stacking is noise reduction. Different parts of the image of the sun's surface are momentarily clear due to constantly changing atmospheric conditions. Stacking allows creation of an image that is clear and sharp all over by averaging a few hundred of the best video frames.

But the program also has a feature to increase resolution, based on the same idea you are thinking of. The sun is moving ever so slightly due to both atmospheric effects and imprecise tracking. I think the feature was called "drizzle"... I played with it but was unable to account for the rotation of the image (my tracking mount is az-el rather than equatorial).

Here's an interesting link I just found too. Apparently the Olympus OM-D E-M5 II has this feature.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
In principle something like that could work, but there are many things to consider.

In some ways it might be better to change the angle of the camera ever so slightly, with the axis of rotation coinciding with the front principal plane of the lens, rather than shifting the camera position.



The principle is sometimes called "Super Resolution". There is software out there that uses this principle, for example PhotoAcute. There are other software packages as well. I also read how to do it in Photoshop, but I don't recall the link.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
The principle is sometimes called "Super Resolution". There is software out there that uses this principle, for example PhotoAcute. There are other software packages as well. I also read how to do it in Photoshop, but I don't recall the link.

The principle is also sometimes called "Resolution enhancement".
 
OP

RalphLambrecht

Subscriber
Joined
Sep 19, 2003
Messages
14,649
Location
K,Germany
Format
Medium Format
The principle is also sometimes called "Resolution enhancement".
I tried it with my Nikon D800, which I typically use to capture 30 MPx, and indeed I got 120 MPx images, but I did not see any increase in detail or sharpness, just a larger image.
 

Billy Axeman

Member
Joined
Aug 18, 2017
Messages
523
Location
Netherlands
Format
Digital
Pentax has several cameras with this option built in; they call it 'Pixel Shift'.
Their most affordable camera with this feature is the K-70 (APS-C), but the K-3 II and KP (APS-C) and the K-1 (FF) have it too.

Pixel Shift actually doesn't increase your image resolution (the total number of pixels or dpi) but instead enhances the definition of the photo at the pixel level.

Pentax implements Pixel Shift by shifting the sensor one pixel up and down and combines 4 images in-camera. These are tiny movements indeed, so you need to avoid any movement of the camera by using a stable mount (tripod), a remote cable release, and I also see sharper images when the mirror is tipped up beforehand.

I bought a K-70 to see if I could copy MF-film (B&W) in one go without using stitching software.
I'm still in the process of building the copier hardware (Pentax K-70, Pentax-A 50mm Macro, Hasselblad bellows, custom made film holder), so I don't have final results at this moment, but my initial tests look very promising.

In a first test I saw a clear difference between photos with Pixel Shift on or off, with much better definition of the grain.
You also need a very sharp lens, and it must be tested at various apertures to see where it has an optimum. Getting the best results is a combination of many factors.

This is not an option for hand-held photography, but it can be used in a studio, for copy-work, and also for astronomy I guess.

Billy.
 

Diapositivo

Subscriber
Joined
Nov 1, 2009
Messages
3,257
Location
Rome, Italy
Format
35mm
Without shifting position, the same technique is applied in two ways.

a) Taking two pictures from the same position (tripod and building), with the same exposure, and combining them with specific software decreases noise (filters out the random noise) and enhances the quality of the image without resorting to noise-reduction software, which is always detrimental to sharpness;
b) Taking two pictures from the same position (tripod and building), with different exposure, and combining the two images with specific software, increases the quality of the image by using the best S/N ratio of the two images. It's a way to increase dynamic range. That's not to be confused with HDR images, which have a distinctive appearance. Images taken with technique b) appear as having normal dynamic range, but better quality in the shadows.

The technique under a) is used in some scanner software, with scanners supporting it (e.g. VueScan and Nikon CoolScan 5000), and is called "multiple pass" or something like that. The film can be scanned 2, 4, 8, or 16 times. The theoretical increase in quality in going from 1 to 2 scans is very high, while that in going from 8 to 16 scans is very low.

The technique under b) is used in some scanner software, with scanners supporting it (e.g. VueScan and Nikon CoolScan 5000), and is normally called "multi-exposure". The film is scanned twice with two different lamp intensities and the software combines the two scans, making it possible to capture the entire dynamic range of the slide film while having clean shadows.

In my experience, merging two images taken with a digital camera with specific software gives some problems: some visible edge artifacts (visible when pixel peeping, that is, probably not in print) that make the final product unusable for some purposes (such as submitting to certain stock agencies).
Multiexposure during scanning works a charm if the scanner supports it adequately (precise positioning). Multiple passes also make sense.
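The noise argument behind multiple passes can be illustrated with a quick simulation: averaging N independently noisy scans reduces the random noise by a factor of √N, which is why going from 1 to 2 passes gains much more than going from 8 to 16. A toy sketch in Python (the flat "scene" and the noise level are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

signal = np.full(10_000, 100.0)   # a flat "scene" at level 100
sigma = 10.0                      # random noise added by each scan

def scan():
    """One simulated scanner pass: the signal plus fresh random noise."""
    return signal + rng.normal(0.0, sigma, signal.shape)

for n in (1, 2, 4, 8, 16):
    stacked = np.mean([scan() for _ in range(n)], axis=0)
    print(f"{n:2d} pass(es): residual noise ~{stacked.std():.2f} "
          f"(theory {sigma / np.sqrt(n):.2f})")
```

The step from 1 to 2 passes removes a large chunk of noise, while 8 to 16 shaves the same ~29% off an already small residual, matching the diminishing returns described above.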
 
Last edited:

etn

Member
Joined
Jan 8, 2015
Messages
1,113
Location
Munich, Germany
Format
Medium Format
Another poor person's method to get a high-pixel image would be to use a longer lens, pan horizontally and vertically and stitch in Photoshop.
Many people do this. Given the time required, I wouldn't call it "poor person's method" though.
One example here:
https://petapixel.com/2015/05/24/36...-mont-blanc-becomes-the-worlds-largest-photo/

Quote: "Post-processing and stitching the 46 terabytes afterwards took 2 months, and the resulting 365-gigapixel photo would be as large as a soccer field if printed out at 300dpi."

Other than for sheer bragging rights, I still have a hard time seeing the purpose of such an exercise... it is a great technical achievement nonetheless! (and probably the only purpose!)
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
One example here:
https://petapixel.com/2015/05/24/36...-mont-blanc-becomes-the-worlds-largest-photo/

Quote: "Post-processing and stitching the 46 terabytes afterwards took 2 months, and the resulting 365-gigapixel photo would be as large as a soccer field if printed out at 300dpi."

Other than for sheer bragging rights, I still have a hard time finding out the purpose of such an exercise... it is a great technical achievement none the less! (and probably the only purpose!)

It's probably a great technique to use to enhance the resolution of images from spy satellites.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
I tried it with my Nikon D800, which I typically use to capture 30 MPx, and indeed I got 120 MPx images, but I did not see any increase in detail or sharpness, just a larger image.
I am not sure what method Nikon is using, but it is worth noting that just combining multiple images (with the detector shifted slightly between the shots) will not, in and of itself, result in increased resolution, though it will increase the number of pixels in the photo. To increase the resolution requires some kind of sharpening to be done on the image.

Related to this, I am not sure if simple sharpening algorithms, such as unsharp mask, actually increase resolution or just increase contrast at edges within the image to give the sensation of increased sharpness. In principle one could use deconvolution methods to restore part of the high-frequency content of the image, and this would give a true increase in resolution. However, I don't know if anyone is doing this. The computational burden might be too great, and you would also need some information on something called the "point spread function" of the system in order to do an accurate deconvolution, and that information might be hard to obtain.
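For the curious, the deconvolution idea can be sketched in a few lines. The following is a toy 1-D Richardson-Lucy loop (one classic deconvolution algorithm), not anyone's production implementation; the two-spike "scene" and the 3-tap point spread function are invented for illustration:

```python
import numpy as np

def richardson_lucy_1d(observed, psf, iterations=50):
    """Minimal 1-D Richardson-Lucy deconvolution loop (illustration only)."""
    est = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)   # guard against /0
        est = est * np.convolve(ratio, psf_mirror, mode="same")
    return est

# A "scene" of two sharp spikes, smeared by a known point spread function.
scene = np.zeros(64)
scene[20] = 1.0
scene[23] = 1.0
psf = np.array([0.25, 0.5, 0.25])                 # the system's blur kernel
observed = np.convolve(scene, psf, mode="same")

# With noiseless data and the exact PSF, the estimate sharpens toward the spikes.
restored = richardson_lucy_1d(observed, psf)
```

With real, noisy data the iterations amplify noise and must be stopped early, and the PSF must actually be known, which is exactly the practical difficulty raised above.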
 

Billy Axeman

Member
Joined
Aug 18, 2017
Messages
523
Location
Netherlands
Format
Digital
My impression is that some people are confusing two types of enhancements:

1. Taking images that are shifted a very small amount and then stacking them to enhance the definition and to reduce noise. This is sometimes called 'super resolution' but it actually doesn't increase the resolution (measured in pixels). The practical result is that your image looks sharper because it has more detail.

2. Taking several images on a pano head that are overlapping on the edges (horizontally and/or vertically) and stitching them together to get a larger image. This will indeed increase the resolution.

So, the OP is already mixing up these two techniques in the first post. You don't get a higher-resolution image when you are only slightly shifting the exposures, that is, when resolution is defined as the number of pixels horizontally and vertically, or dpi on print or screen.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
My impression is that some people are confusing two types of enhancements:

1. Taking images that are shifted a very small amount and then stacking them to enhance the definition and to reduce noise. This is sometimes called 'super resolution' but it actually doesn't increase the resolution (measured in pixels). The practical result is that your image looks sharper because it has more detail.

2. Taking several images on a pano head that are overlapping on the edges (horizontally and/or vertically) and stitching them together to get a larger image. This will indeed increase the resolution.

So, the OP is already mixing up these two techniques in the first post. You don't get an image with a high resolution when you are only slightly shifting them, that is, when resolution is defined as the number of pixels horizontally and vertically, or dpi's on print or screen.

It is true that one can use multiple combined images to enhance an image in several different ways. One is to actually improve the resolution of the image. There is (was) a software package called PhotoAcute that enabled this. Here is a link to a demo page. http://photoacute.com/studio/examples/mac_hdd/index.html. I believe that amateur astronomers also have software to implement this scheme, though I don't know if those packages are useful for pictorial applications.

Here is a tutorial on implementing super resolution using photoshop. https://petapixel.com/2015/02/21/a-...eating-superresolution-photos-with-photoshop/

I am intrigued with the possibility of using super resolution to improve the effective resolution of film scanners. If the film can be shifted slightly between scans it should be possible to form a composite scan with improved resolution. As a practical matter the film shifts would probably be random, so one might expect the possible improvement to be proportional to the square root of the number of scans, and this is probably a best case estimate. In other words, to improve the resolution by 2X would probably require at least four scans with randomized shifts, and my guess is that it would take more like at least 8 scans.

One thing to note is that if one wants to enhance resolution (as opposed to certain other types of image enhancement), these schemes require both interpolating the image onto a larger number of pixels and some kind of sharpening operation.
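The "at least 8 scans" guess lines up with a simple toy model: if a 2X improvement requires every one of the 2x2 sub-pixel phase offsets to be hit at least once, and the shifts are uniformly random (an assumption of mine, not established scanner behaviour), the expected number of scans is the coupon-collector figure 4 x (1 + 1/2 + 1/3 + 1/4) ≈ 8.3. A small Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def scans_until_covered(bins=2):
    """Scans needed until every sub-pixel phase cell (a bins x bins grid
    of fractional shift offsets) has been hit at least once."""
    seen = set()
    n = 0
    while len(seen) < bins * bins:
        dx, dy = rng.random(2)                 # random shift, in pixels
        seen.add((int(dx * bins), int(dy * bins)))
        n += 1
    return n

trials = 10_000
mean_scans = np.mean([scans_until_covered() for _ in range(trials)])
print(f"average scans for full 2x2 phase coverage: {mean_scans:.2f}")
# coupon-collector expectation: 4 * (1 + 1/2 + 1/3 + 1/4) = 8.33...
```

So roughly eight randomized scans for a 2X gain, consistent with the back-of-the-envelope estimate above.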
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
For the scientifically/mathematically inclined, shifting the sensor a small amount on successive images of the same object is equivalent to increasing the sampling density of the image. For example, in one ideal case, shifting the sensor by a half pixel in the horizontal direction increases the sampling density by 2X, thus raising the Nyquist limit by 2X, enabling a 2X improvement in spatial resolution.

There are other effects that come into play as well. For example, the image sensor is not an array of point sensors but each point is actually a small area sensor. There are other effects as well that would spread a point object onto a smear, and this is why just shifting a sensor is not enough to improve resolution. You also need to remove the smearing effect, and this is why some kind of sharpening must be applied.

Of note: whenever you apply sharpening, ideally by deconvolution, you are also going to enhance the noise in the image, or in other words, there ain't no such thing as a free lunch. In some cases this effect can be dealt with by increasing the number of images being stacked.
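The sampling-density point can be demonstrated numerically. In this sketch (frequencies chosen arbitrarily for illustration), a tone at 0.7 cycles/pixel aliases down to 0.3 in a single exposure, but interleaving a half-pixel-shifted second exposure doubles the Nyquist limit and recovers it. This toy ignores the pixel-aperture smearing just described, which is why real pixel-shift results still need sharpening:

```python
import numpy as np

f_true = 0.7            # cycles/pixel: above the single-exposure Nyquist of 0.5
n = 256                 # photosites per exposure

x = np.arange(n, dtype=float)
exp_a = np.sin(2 * np.pi * f_true * x)           # sensor at its normal position
exp_b = np.sin(2 * np.pi * f_true * (x + 0.5))   # sensor shifted half a pixel

# Single exposure: the 0.7 cy/px tone aliases down to |0.7 - 1.0| = 0.3 cy/px.
freqs_a = np.fft.rfftfreq(n, d=1.0)
peak_a = freqs_a[np.argmax(np.abs(np.fft.rfft(exp_a)))]

# Interleave the two exposures: the pitch halves, so Nyquist doubles to 1.0.
combined = np.empty(2 * n)
combined[0::2] = exp_a
combined[1::2] = exp_b
freqs_c = np.fft.rfftfreq(2 * n, d=0.5)
peak_c = freqs_c[np.argmax(np.abs(np.fft.rfft(combined)))]

print(f"single exposure peak: {peak_a:.2f} cy/px (aliased)")
print(f"interleaved peak:     {peak_c:.2f} cy/px (true frequency)")
```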
 

etn

Member
Joined
Jan 8, 2015
Messages
1,113
Location
Munich, Germany
Format
Medium Format
Alan, thanks for these explanations!
Correct me if I'm wrong, but in my understanding shifting the sensor by 1 pixel in the left-right and up-down directions also alleviates the effect of the Bayer filter array. As a result, each pixel has all R+G+G+B information. Avoiding the otherwise required interpolation results in increased resolution.

Interesting discussion, even if not film related! :smile:
(this, the very day the "Did you activate Digital" thread comes up, ha ha)
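The Bayer point above can be checked with a toy simulation: on an RGGB mosaic, the four one-pixel shifts (0,0), (0,1), (1,0), (1,1) expose every scene pixel to R once, G twice, and B once, so the full-colour image comes back with no demosaicing interpolation. A sketch under idealized assumptions (no noise, no subject motion, wrap-around edges; the scene is random data):

```python
import numpy as np

rng = np.random.default_rng(2)

H = W = 8
scene = rng.random((H, W, 3))          # a full-colour "scene" (R, G, B planes)

# RGGB Bayer pattern: which channel each photosite records.
bayer = np.empty((H, W), dtype=int)
bayer[0::2, 0::2] = 0                  # R
bayer[0::2, 1::2] = 1                  # G
bayer[1::2, 0::2] = 1                  # G
bayer[1::2, 1::2] = 2                  # B

rows, cols = np.indices((H, W))

def expose(dy, dx):
    """One exposure with the sensor shifted (dy, dx) whole pixels.
    Site (r, c) records scene pixel (r-dy, c-dx) in its own colour."""
    shifted = np.roll(scene, (dy, dx), axis=(0, 1))
    return shifted[rows, cols, bayer]

# The four one-pixel shifts used by in-camera pixel-shift modes.
recovered = np.zeros_like(scene)
for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    img = expose(dy, dx)
    # img[r, c] == scene[(r-dy) % H, (c-dx) % W, bayer[r, c]],
    # so write it straight back to that scene location and channel.
    recovered[(rows - dy) % H, (cols - dx) % W, bayer] = img

print(np.allclose(recovered, scene))   # every channel of every pixel measured directly
```

Across the four shifts each scene pixel lands on an R site once, a G site twice, and a B site once, which is exactly the "R+G+G+B per pixel" claim.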
 

Eric Rose

Member
Joined
Nov 21, 2002
Messages
6,842
Location
T3A5V4
Format
Multi Format
Cameras are already doing this internally, the Olympus DSLRs specifically. One thing I have done is put my Panasonic in burst mode and take four shots of the same scene while handholding. I merge them in PS and there is a very small but noticeable increase in resolution. IMHO it's a waste of time, as any digital camera over 20 MP will give you all the resolution you need unless you are producing images for highway billboards.
 

Prof_Pixel

Member
Joined
Feb 17, 2012
Messages
1,917
Location
Penfield, NY
Format
35mm
.... thus raising the Nyquist limit by 2X, enabling a 2X improvement in spatial resolution.
Speaking of the Nyquist limit: most/all digital cameras incorporate a low-pass filter that removes high-frequency information to avoid Nyquist aliasing effects, and thus throws away the 'very high sharpness' information.
 

alanrockwood

Member
Joined
Oct 11, 2006
Messages
2,185
Format
Multi Format
Alan, thanks for these explanations!
Correct me if i'm wrong, but in my understanding shifting the sensor by 1 pixel in the left-right and up-down directions also alleviates the effect of the Bayer filter array. As a result, each pixel has all R+G+G+B information. Avoiding the otherwise required interpolation results in an increased resolution.

Interesting discussion, even if not film related! :smile:
(this, the very day the "Did you activate Digital" thread comes up, ha ha)
I hadn't thought about the Bayer filter array issue, but I'll bet you are on the right track with that line of thought.
 