DPChallenge Forums >> General Discussion >> dual sensor to induce HDR?
Showing posts 1 - 13 of 13
03/14/2006 02:17:00 AM · #1
OK it's been a slow evening. Was thinking, wouldn't it be cool if someone created a camera with dual sensors that actually overlays 2 RAW files to create a HDR image?

kodak570 here does not do that, but it was what sparked my imagination.

What do you think?
03/14/2006 02:19:17 AM · #2
I think it would be very expensive, is what I think. And who wants the CAMERA to be making those merging decisions anyway? I'd rather do it myself and fiddle them to best advantage...

R.
03/14/2006 02:22:31 AM · #3
I'll have to agree with Bear.

Technology is advancing fast, the dynamic range of single sensors will get better anyway.
03/14/2006 02:27:08 AM · #4
Originally posted by Bear_Music:

And who wants the CAMERA to be making those merging decisions anyway? I'd rather do it myself and fiddle them to best advantage...


Why of course, Robt.
The automatic in-camera processing is just an option.
OK, what if the camera also allows RAW output of both images, one from each sensor? Doesn't that interest you a bit?

Now the question is, would we need double shutters too? Hmm... anyone got the blueprints of the Kodak dual-lens camera for reference's sake? Does it have dual shutters too?
03/14/2006 02:27:10 AM · #5
The Olympus E-330 dSLR has two sensors: one for taking the picture and one for the live view.
03/14/2006 02:28:35 AM · #6
Originally posted by faidoi:

The Olympus E-330 dSLR has two sensors: one for taking the picture and one for the live view.


Yes, but the second sensor doesn't output any RAW data ;)
03/14/2006 02:29:20 AM · #7
The Fujifilm S3 Pro is sort of like that. It uses two 6-megapixel sensors to capture 12 MP.
03/14/2006 02:30:41 AM · #8
You can get essentially the same effect by setting any camera to Auto Exposure bracketing, as long as the subject and camera are both stationary.
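
To make that concrete, here is a minimal sketch of how you might merge an AEB sequence on the computer afterwards, using OpenCV's exposure-fusion (Mertens) implementation; the file names are just placeholders for your own bracketed shots:

import cv2
import numpy as np

# Hypothetical file names for a 3-shot AEB sequence (under / normal / over).
files = ["aeb_under.jpg", "aeb_normal.jpg", "aeb_over.jpg"]
images = [cv2.imread(f) for f in files]            # 8-bit BGR frames, assumed aligned

# Exposure fusion blends the best-exposed parts of each frame; no exposure
# times or tone mapping are needed for a quick result.
fused = cv2.createMergeMertens().process(images)   # float image, roughly 0..1
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))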
03/14/2006 02:32:08 AM · #9
Semi off-topic, but I saw that dual-lens Kodak in the store today. It feels cheap and form-over-function-ish, which is a shame, because I wouldn't mind one for the wide-angle lens and video clips.
03/14/2006 02:33:14 AM · #10
Originally posted by visual28:

The Fujifilm S3 Pro is sort of like that. It uses two 6-megapixel sensors to capture 12 MP.


I was reading your link. It says "6.45 million total photosites (2 photodiodes per photosite)", so that explains the 12 MP (interpolated) RAW file. But since they are on the same sensor, maybe if they can engineer it so that each alternate photodiode has a different sensitivity (so the same shutter speed could be used), it might bump the DR a bit?
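
Back-of-the-envelope, and purely as an illustration (the 4x ratio below is made up, not Fuji's spec), the gain from pairing photodiodes of different sensitivity at one shutter speed would be:

import math

sensitivity_ratio = 4.0  # assumed: the less sensitive photodiode clips 4x later
extra_stops = math.log2(sensitivity_ratio)
print(f"~{extra_stops:.1f} extra stops of highlight headroom at the same shutter speed")
# -> ~2.0 extra stops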
03/14/2006 05:45:16 AM · #11
Originally posted by crayon:

Originally posted by visual28:

The Fujifilm S3 Pro is sort of like that. It uses two 6-megapixel sensors to capture 12 MP.


I was reading your link. It says "6.45 million total photosites (2 photodiodes per photosite)", so that explains the 12 MP (interpolated) RAW file. But since they are on the same sensor, maybe if they can engineer it so that each alternate photodiode has a different sensitivity (so the same shutter speed could be used), it might bump the DR a bit?


That's exactly what it does.
03/14/2006 07:25:43 AM · #12
Fuji 610 from some time ago.

Message edited by author 2006-03-14 07:26:08.
03/14/2006 08:33:49 AM · #13
I had the same feeling as you, Madman. It's a totally sweet idea, implemented in the most brain-dead fashion... It's a little bit like the Mini RolleiFlex, which SHOULD have been an ultra-simplified, fully manual, dual-lens reflex camera with a 6 MP CMOS sensor, but instead is a glorified and overpriced phone camera without the phone...

Must we always be the butt of these manufacturers' jokes?

RE the Fuji S3, Bobster said it best.

There aren't two sensors; it's one sensor with two different sets of photosites: large ones and small ones. That's more like having two sensors in one, interspersed. Large photosites collect lots of light, small ones collect less. The results are then interpolated. In a raw file, the information is intact and can be used to intelligently interpolate to increase the DR.

Interpolating here means, roughly, using partial information from adjacent photosites to build a full-color, full-range point of information (as in a 24-bit pixel with 8 bits per channel or better).

The S3 does a fantastic job of this.
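
For illustration only (a toy sketch, not Fuji's actual algorithm), the basic idea of folding a sensitive "large" reading and a less sensitive "small" reading into one extended-range value could look like this:

import numpy as np

def combine_photosites(large, small, sensitivity_ratio=4.0, clip=0.98):
    """large/small: linear readings in 0..1; the small photosite is assumed
    to clip about sensitivity_ratio times later than the large one."""
    large = np.asarray(large, dtype=np.float64)
    small = np.asarray(small, dtype=np.float64)
    # Keep the clean large-photosite value where it hasn't clipped; otherwise
    # fall back to the small photosite, rescaled onto the same brightness scale.
    return np.where(large < clip, large, small * sensitivity_ratio)

# A midtone, and a highlight that saturates the large photosite but not the small one:
print(combine_photosites(np.array([0.30, 1.00]), np.array([0.075, 0.60])))
# -> [0.3 2.4]   (the second value keeps detail beyond the normal clip point)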

The other thing to remember in using two sensors to make HDR is that you are dealing with something of a physical impossibility. You can't use two lenses to take a picture of the same spot because their respective perspectives are different (sorry if that sounds silly :). You will lose definition because of this. The only way around this might be trickery with mirrors. Unfortunately, this would lead to some very odd effects such as light fall-off graduating to one side of the picture as well as weird distortion effects. It could all be corrected for, but then you have less real information being represented in your photograph. This leads to noise and an artificial look to the picture.
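
A quick back-of-the-envelope check on that parallax problem, with made-up but plausible numbers (3 cm between the lenses, 35 mm focal length, 6 µm pixels):

def disparity_pixels(baseline_m, focal_mm, distance_m, pixel_pitch_um):
    """Approximate image shift between two side-by-side lenses, in pixels."""
    shift_m = (focal_mm / 1000.0) * baseline_m / distance_m  # classic stereo disparity
    return shift_m / (pixel_pitch_um * 1e-6)

for d in (0.5, 2.0, 10.0, 100.0):
    print(f"subject at {d:6.1f} m -> ~{disparity_pixels(0.03, 35, d, 6):6.1f} px offset")

Even at 10 m the two frames disagree by roughly 18 pixels with those assumed numbers, so a pixel-level merge would need re-registration and would still struggle near occlusion edges.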

The only other way of doing this would be to use a single lens with a partially reflective mirror (such as in the Olympus E-330, which allows partial transmission of light) to let a second in-plane sensor (a sensor the identical distance from the lens optics, but physically removed) capture the same image. Again, this is a poor solution because you can't take more information from a scene than there is light. This is the core of photography. With two sensors like this, each sensor receives less light. True, you will get more overall dynamic range, but everything will have to be exposed longer because you are sharing the light between sensors.
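
The exposure cost of that splitting is easy to put a number on (the split fractions below are assumed, just to show the scale of the penalty):

import math

for transmission in (0.5, 0.3):
    stops_lost = math.log2(1.0 / transmission)  # light lost relative to an unsplit path
    print(f"{transmission:.0%} of the light per sensor -> {stops_lost:.1f} stops less exposure")
# A 50/50 split costs each sensor a full stop; the dim side of a 70/30 split loses ~1.7 stops.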

The Fuji solution was to use two sizes of photosites, and it makes a lot of sense for a lot of reasons. The information isn't lost, because the whole sensor has photosite coverage (it's not 100%, but it's within normal levels). This means the light isn't shared; it's just seen differently by different-sized receptors.

It's a fantastic idea, but it comes with one major problem: it also increases in-camera processing, because you are either dealing with RAW (slower because the file sizes are much larger) or JPEG, which requires a fair bit more work to evaluate and combine the information.
DPChallenge, and website content and design, Copyright © 2001-2024 Challenging Technologies, LLC.
All digital photo copyrights belong to the photographers and may not be used without permission.