DPChallenge: A Digital Photography Contest
DPChallenge Forums >> Tips, Tricks, and Q&A >> i just thought of this!
Showing posts 26 - 33 of 33
07/15/2005 01:20:25 AM · #26
Originally posted by art-inept:

no no no i mean the picture isn't taken in the same place. you take a picture. move the camera to the left or right or down or up the distance of half a pixel, then take another picture. overlay the pictures and while the edges of the pictures might be soft since it looks like you had a half-pixel blur, the consistent portions of your image will remain unaltered. and then you enlarge the picture so that it's double the size. get it? the reason i say move the camera a half pixel is because when you overlay them in photoshop you'll essentially put twice the amount of data into each pixel. ehh it's okay, it's easier to just buy a 6 mp camera :(


No, you still have a 3MP picture. Let's use the Olympus C-300Z, since it's a 3MP camera. Megapixels are the result of the width x height of the photograph, not the size of the file. In this camera's case that is 1984 x 1488 = 2,952,192 pixels, or about 3MP. If you take two pictures and overlay them, even offset by a 1/2 pixel, and the picture is still 1984 x 1488, you have a 3MP photograph. The file size may increase, say from 1MB to 2MB. It's the same as when I shoot RAW vs. Large JPG with my camera: the Large JPG will be about 6MB and the RAW file around 14MB, but either way I still have a 6.3MP picture.
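The arithmetic above is easy to check in a couple of lines (a quick sketch; the function name is just illustrative):

```python
# Pixel count depends only on image dimensions, not on file size.
# Using the Olympus C-300Z's 1984 x 1488 output as in the post.
def megapixels(width: int, height: int) -> float:
    """Return resolution in megapixels (millions of pixels)."""
    return width * height / 1_000_000

mp = megapixels(1984, 1488)
print(round(mp, 2))  # 2.95 -- marketed as "3 MP"
```

Overlaying two such frames changes the data stored per pixel, not the pixel count, so `megapixels` is unchanged.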
07/15/2005 11:26:45 AM · #27
Originally posted by art-inept:

wait a second. let's say you had a 3 mp camera. you took a picture. and then magically somehow you could shift your camera to the right one half-pixel in distance. and then you hit the shutter again. then you put both images in photoshop and made them 50% transparent and then layered both of the pictures together. wouldn't you have a barely fuzzy 6 mp image?


First off, we're "bit-twiddling" here. There are a lot of other factors (lens, camera shake, light, skill of the photographer, etc.) that make much more of a difference. There's another thread at //www.dpchallenge.com/forum.php?action=read&FORUM_THREAD_ID=240987 that discusses this.

But since we're bit-twiddling :-)
Imagine you have a pattern of vertical black and white lines spaced so that a 6MP sensor would read them at exactly one black or one white line per pixel. The 3MP pictures would have about 1.4 lines per pixel and would give a pattern of alternating dark and light grey lines. One of the 3MP pictures would start "dark, light, dark..."; the other would be "light, dark, light...". When you overlay the images at 50% transparency, you average them rather than better defining them, so you'd probably end up with less contrast.
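The line-pattern example can be mocked up in a few lines (a toy sketch; the grey values 0.25/0.75 are illustrative stand-ins for the out-of-phase stripes):

```python
# Two offset frames read the line pattern as out-of-phase grey
# stripes; a 50%-opacity overlay just averages the two readings.
frame_a = [0.25, 0.75] * 4   # dark, light, dark, ...
frame_b = [0.75, 0.25] * 4   # light, dark, light, ...
blend = [(a + b) / 2 for a, b in zip(frame_a, frame_b)]
print(blend)  # every pixel is 0.5: the stripes vanish entirely
```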

Now imagine shooting a white square next to a black square. Depending on where the sensor is, one picture may have all-white pixels followed immediately by all-black pixels. The other 3MP photo would have a 50% grey pixel between the black and white ones, and the 50% overlay would smear that transition further. Applying unsharp mask may make the picture appear sharper.
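The square-edge example can be put into numbers with a toy one-dimensional model (illustrative only; `sample` models each unit-width pixel as averaging the part of the scene it covers):

```python
# A sharp white/black edge sampled once aligned to a pixel boundary
# and once shifted half a pixel, then blended at 50% opacity.

def sample(edge_pos: float, n_pixels: int, shift: float):
    """Sample a step edge (white=1 left of edge_pos, black=0 right)
    with unit-width pixels; each pixel averages what it covers."""
    out = []
    for i in range(n_pixels):
        left, right = i + shift, i + 1 + shift
        white = max(0.0, min(right, edge_pos) - left)  # white width covered
        out.append(white / (right - left))
    return out

aligned = sample(edge_pos=2.0, n_pixels=4, shift=0.0)   # [1, 1, 0, 0]
shifted = sample(edge_pos=2.0, n_pixels=4, shift=-0.5)  # [1, 1, 0.5, 0]
blend = [(a + b) / 2 for a, b in zip(aligned, shifted)]
print(blend)  # edge smeared over two pixels: [1, 1, 0.25, 0]
```

The blend has a softer edge than either frame alone, which is the point: straight averaging degrades the transition rather than refining it.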

So by using two 3MP pictures 1/2 pixel apart, you don't get the equivalent of 6MP.

Message edited by author 2005-07-15 11:27:32.
07/15/2005 12:35:26 PM · #28
Originally posted by hankk:

...But since we're bit-twiddling :-)...
Imagine you have a pattern of vertical black and white lines spaced so that a 6MP sensor would read them at exactly one black or one white line per pixel. The 3MP pictures would have about 1.4 lines per pixel and would give a pattern of alternating dark and light grey lines. One of the 3MP pictures would start "dark, light, dark..."; the other would be "light, dark, light...". When you overlay the images at 50% transparency, you average them rather than better defining them, so you'd probably end up with less contrast.

Now imagine shooting a white square next to a black square. Depending on where the sensor is, one picture may have all-white pixels followed immediately by all-black pixels. The other 3MP photo would have a 50% grey pixel between the black and white ones, and the 50% overlay would smear that transition further. Applying unsharp mask may make the picture appear sharper.

So by using two 3MP pictures 1/2 pixel apart, you don't get the equivalent of 6MP.


It's really not that simple. Remember that green, the color with the highest sampling frequency on a Bayer sensor (twice the number of blue or red sites), is still only sampled at half the pixel locations. The OP is certainly correct that you could improve resolution by combining multiple exposures. The really thorny problem is trying to move the camera a fraction of a pixel. The practical solution is to move it randomly; you don't get all of the theoretical benefit, but there is a visible difference. Astrophotographers know this, and I've seen it work with terrestrial shots as well. It requires a static subject, of course.
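For the curious, the multi-exposure idea can be sketched in one dimension as a naive shift-and-add (a toy model with known shifts; astrophotography tools like drizzle are far more involved, and the names here are illustrative):

```python
# 1-D shift-and-add: combine low-res frames taken at known
# sub-pixel offsets onto a finer grid.  A shift of 1 fine sample
# here corresponds to half a coarse pixel (factor = 2).

def sample(scene, shift, factor):
    """Low-res frame: each pixel averages `factor` fine samples,
    starting at fine-grid offset `shift`."""
    n = (len(scene) - shift) // factor
    return [sum(scene[shift + i*factor : shift + (i+1)*factor]) / factor
            for i in range(n)]

def shift_and_add(frames, shifts, factor, hr_len):
    """Spread each low-res pixel back onto the fine grid at its
    known shift, then average the overlapping contributions."""
    acc = [0.0] * hr_len
    hits = [0] * hr_len
    for frame, s in zip(frames, shifts):
        for i, v in enumerate(frame):
            for k in range(factor):
                j = s + i * factor + k
                if j < hr_len:
                    acc[j] += v
                    hits[j] += 1
    return [a / h if h else 0.0 for a, h in zip(acc, hits)]

scene = [0, 1, 1, 0, 0, 1, 1, 0]                # fine-grid detail
frames = [sample(scene, s, 2) for s in (0, 1)]  # two frames, 1/2 pixel apart
print(frames[0])                       # aligned frame: flat grey, no detail
print(shift_and_add(frames, (0, 1), 2, len(scene)))  # stripes reappear
```

With this scene the aligned frame alone is featureless grey, yet the combined reconstruction follows the original stripe pattern, which is why the technique is worth the trouble despite not reaching true double resolution.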
07/15/2005 01:12:55 PM · #29
Originally posted by kirbic:

Originally posted by hankk:

...But since we're bit-twiddling :-)...
Imagine you have a pattern of vertical black and white lines spaced so that a 6MP sensor would read them at exactly one black or one white line per pixel. The 3MP pictures would have about 1.4 lines per pixel and would give a pattern of alternating dark and light grey lines. One of the 3MP pictures would start "dark, light, dark..."; the other would be "light, dark, light...". When you overlay the images at 50% transparency, you average them rather than better defining them, so you'd probably end up with less contrast.

Now imagine shooting a white square next to a black square. Depending on where the sensor is, one picture may have all-white pixels followed immediately by all-black pixels. The other 3MP photo would have a 50% grey pixel between the black and white ones, and the 50% overlay would smear that transition further. Applying unsharp mask may make the picture appear sharper.

So by using two 3MP pictures 1/2 pixel apart, you don't get the equivalent of 6MP.


It's really not that simple. Remember that green, the color with the highest sampling frequency on a Bayer sensor (twice the number of blue or red sites), is still only sampled at half the pixel locations. The OP is certainly correct that you could improve resolution by combining multiple exposures. The really thorny problem is trying to move the camera a fraction of a pixel. The practical solution is to move it randomly; you don't get all of the theoretical benefit, but there is a visible difference. Astrophotographers know this, and I've seen it work with terrestrial shots as well. It requires a static subject, of course.


one way you could move it exactly half a pixel would be to place the camera on a very accurate machine like a CNC milling machine?

and what's OP stand for?

Message edited by author 2005-07-15 13:14:31.
07/15/2005 02:36:41 PM · #30
Originally posted by art-inept:


and what's OP stand for?


Opening poster or original poster, otherwise known as the person who started the thread.
07/15/2005 02:38:46 PM · #31
you are aware how hard that is right?
07/15/2005 02:41:14 PM · #32
Originally posted by faidoi:

Originally posted by art-inept:


and what's OP stand for?


Opening poster or original poster, otherwise known as the person who started the thread.


o, well, i prefer to be called master or wise one. but i think we can end this thread. unless someone is able to measure the distance of a pixel and then is able to move a camera half that distance we'll never know if you can change a 3 mp camera to a 6, or your 6 mp DSLRs to 12 megapixel monsters

edit: yes gi_joe, i know this is very difficult...but at least it sounds good on paper? or dpc forum space

Message edited by author 2005-07-15 14:43:00.
07/15/2005 06:13:06 PM · #33
When merging the original and the half-pixel-shifted one, the pixel columns would have to be alternated -- that is, one column from the original, one from the shifted, and so on. This would indeed double the number of pixels in the image, but I don't think you would gain much.

Actually, since they are only shifted a half pixel, the pixels should overlap each other. This could be tried easily enough, even with the same image duplicated for each layer.

Steps:
- Duplicate the image.
- Resample each to twice the width while keeping the original height.
- Trim one pixel from the left of one and one pixel from the right of the other.
- Layer one on top of the other at 50% opacity.
- Merge.
- Resample back to the original size.
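The steps above can be tried in miniature on a single row of pixels (a rough sketch assuming nearest-neighbour resampling; a real editor's interpolation would behave a little differently):

```python
# One row of pixels run through the duplicate / widen / trim /
# blend / shrink procedure from the steps above.

def resample(row, new_len):
    """Nearest-neighbour resample of a 1-D pixel row."""
    return [row[i * len(row) // new_len] for i in range(new_len)]

row = [10, 200, 10, 200]            # original pixel values
wide = resample(row, 2 * len(row))  # double the width
a = wide[1:]                        # trim one pixel from the left
b = wide[:-1]                       # trim one pixel from the right
blend = [(x + y) / 2 for x, y in zip(a, b)]   # 50%-opacity overlay
final = resample(blend, len(row))   # back to the original size
print(final)
```

Running this, the final row comes back softer than the original, which matches the conclusion below: the round trip loses detail rather than gaining any.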

Like this: [example images not shown]
But this does not gain any real resolution -- and would probably lose more detail in resampling than is gained by the addition of more pixel data.

David
DPChallenge, and website content and design, Copyright © 2001-2025 Challenging Technologies, LLC.
All digital photo copyrights belong to the photographers and may not be used without permission.