 

DPChallenge Forums >> Photography Discussion >> I don't understand how to shoot for HDR
Showing posts 26 - 32 of 32
02/12/2007 04:00:25 PM · #26
Originally posted by Bear_Music:



I can tell you what's an absolute DISadvantage: shooting/merging a series where the darkest exposure actually underexposes the bright areas, and the lightest exposure overexposes the dark areas. When you do an HDR merge that includes images on the extremes that are too bright or too dark for the darks/brights respectively, you end up with a muddy merged image that doesn't work at all.
R.

Originally posted by Bear_Music:


When you do an HDR merge, you are instructing the program where you want the limits to be. It then compresses those limits down to encapsulate everything else. The end result, if you use too much range, is muddy whites and unnaturally bright shadow areas, with the midtones forced into a highly compressed mode, which, in general, I don't see as being of any help at all. I am sure you can get some interesting effects this way, but we're sort of talking about the BASICS of the process here, aren't we?

R.


Ah. I must have misunderstood one of the two statements above, because they seem to say different things. I thought you were saying you see a problem when your underexposure pulls down the overexposure too far, and vice versa (in the initial captures being merged), and I was suggesting an intermediate approach to merging the files to avoid that issue. In the second instance you seem to be saying that there's a fundamental issue with images that have too much dynamic range and with mapping them into a visible range.

Message edited by author 2007-02-12 16:01:33.
02/12/2007 04:00:58 PM · #27
Originally posted by maggieddd:

Also, when you shoot in RAW, do you process them all with the same settings?


In my experience, if you are shooting for realistic HDRI, it's very important to process all your variants with the same settings; that is to say, same contrast, same WB, same color shifts (if any), same amount of sharpening. I assume that if you want some special effects you might vary any or all of these parameters (what would an HDRI image of a sunset look like with the bright exposure at one white balance and the neutral/dark exposures at different WBs?), but I have not tried this seriously.
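A minimal sketch of that idea, assuming you develop the RAWs in code rather than in a converter; the rawpy library and the filenames here are just stand-ins for whatever you actually use:

import rawpy
import imageio.v3 as iio

# One shared set of development settings, applied identically to every bracketed frame.
SHARED_SETTINGS = dict(
    use_camera_wb=True,    # same white balance for every frame
    no_auto_bright=True,   # no per-frame auto-brightening, so relative exposure is preserved
    gamma=(2.222, 4.5),    # same tone curve
    output_bps=16,         # 16-bit output for the merge step
)

for name in ("bracket_minus2.cr2", "bracket_0.cr2", "bracket_plus2.cr2"):  # hypothetical filenames
    with rawpy.imread(name) as raw:
        rgb = raw.postprocess(**SHARED_SETTINGS)   # identical development for each variant
    iio.imwrite(name + ".tiff", rgb)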

R.
02/12/2007 04:04:35 PM · #28
Where I'm confused is: if a sensor can capture 5 stops of range, then couldn't you theoretically underexpose by 5 stops, shoot a "normal" frame, and overexpose by 5 stops and have the full gamut? What advantage would you get from shooting more in-between frames that are already covered by your 3 exposures?
02/12/2007 04:06:56 PM · #29
Originally posted by Gordon:


Ah. I must have misunderstood one of the two statements above, because they seem to say different things. I thought you were saying you see a problem when your underexposure pulls down the overexposure too far, and vice versa (in the initial captures being merged), and I was suggesting an intermediate approach to merging the files to avoid that issue. In the second instance you seem to be saying that there's a fundamental issue with images that have too much dynamic range and with mapping them into a visible range.


I don't know if I'm not being clear; it seems clear to me. Let me try again:

When you do an HDRI merge, you are instructing the software as follows:

1. "The DARKEST of these 3 images is what I want the bright areas to look like."

2. "The LIGHTEST of these 3 images is what I want the dark areas to look like."

3. "Please take these three images and merge them so the midtones are properly distributed between these two extremes."

Or words to that effect, anyway. So if you cap your series with an underexposure that's so far under that the highlights on the clouds, say, are rendered as Zone VI grays, then you are creating problems for yourself further down the road when you try to bring that luminosity back. I know, because I have tried exactly this.
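Just to make the mechanics concrete, here is a rough sketch of what those three instructions amount to, assuming a generic weighted merge (not any particular program's algorithm) and linear image data scaled to [0, 1]. The weights fall toward zero near clipping, so the darkest frame ends up defining the highlights and the lightest frame the shadows; if the darkest frame has already pulled the highlights down to gray, there is nothing for the weighting to recover.

import numpy as np

def merge_hdr(images, exposure_times):
    """images: list of float arrays in [0, 1], linear; exposure_times: seconds."""
    radiance = np.zeros_like(images[0], dtype=np.float64)
    weight_sum = np.zeros_like(radiance)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # "hat" weight: highest at mid-gray, near zero at clipping
        w = np.maximum(w, 1e-4)             # avoid divide-by-zero where every frame is clipped
        radiance += w * (img / t)           # scale each frame back to relative scene radiance
        weight_sum += w
    return radiance / weight_sum            # HDR radiance map; still needs tone mapping for display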

R.
02/12/2007 04:12:33 PM · #30
Originally posted by Megatherian:

Where I'm confused is: if a sensor can capture 5 stops of range, then couldn't you theoretically underexpose by 5 stops, shoot a "normal" frame, and overexpose by 5 stops and have the full gamut? What advantage would you get from shooting more in-between frames that are already covered by your 3 exposures?


Well, that would be a 15-stop dynamic range... I think you misspoke here. But to answer the rest of the question, assume you were NOT using HDR at all and had a single exposure that was more or less right for the midtones. Now you look at it critically and you see that some areas in the midtones are better rendered than others. You can do a curves adjustment that totally nails area "A" or area "B", but not both at the same time. In theory, when creating your HDRI composite, you can get a better rendering with more interim steps between the same extremes.

To look at it from a different perspective, consider this: if the interim step(s) are not important, why not just use TWO exposures, the over and the under? Merge the extremes, forget the middle. But HDRI is basically all about local contrast enhancement in the midtones. So in theory (I haven't verified this), the more mid-tone information you provide, the better the results will be.
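To show what "local contrast enhancement" means in practice, here is a crude sketch of one common tone-mapping idea, not what any particular HDR program does: split the log-luminance into a blurred "base" layer and a "detail" layer, compress only the base, and keep the mid-tone detail intact.

import numpy as np
from scipy.ndimage import gaussian_filter

def simple_tonemap(radiance, sigma=30.0, base_strength=0.6):
    """radiance: 2-D array of merged HDR luminance values (> 0)."""
    log_l = np.log(radiance + 1e-6)
    base = gaussian_filter(log_l, sigma)         # large-scale brightness (the "global" range)
    detail = log_l - base                        # local contrast, mostly mid-tone texture
    compressed = base_strength * base + detail   # squeeze the global range, keep the local detail
    out = np.exp(compressed)
    return (out - out.min()) / (out.max() - out.min())  # scale to [0, 1] for display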

R.
02/12/2007 04:23:54 PM · #31
Looks like a good time for an HDR tutorial, Robert. ;) My ADD doesn't allow me to learn much while following these threads.
02/12/2007 04:30:41 PM · #32
Originally posted by Bear_Music:

Originally posted by Megatherian:

Where I'm confused is: if a sensor can capture 5 stops of range, then couldn't you theoretically underexpose by 5 stops, shoot a "normal" frame, and overexpose by 5 stops and have the full gamut? What advantage would you get from shooting more in-between frames that are already covered by your 3 exposures?


Well, that would be a 15-stop dynamic range... I think you misspoke here. But to answer the rest of the question, assume you were NOT using HDR at all and had a single exposure that was more or less right for the midtones. Now you look at it critically and you see that some areas in the midtones are better rendered than others. You can do a curves adjustment that totally nails area "A" or area "B", but not both at the same time. In theory, when creating your HDRI composite, you can get a better rendering with more interim steps between the same extremes.

To look at it from a different perspective, consider this: if the interim step(s) are not important, why not just use TWO exposures, the over and the under? Merge the extremes, forget the middle. But HDRI is basically all about local contrast enhancement in the midtones. So in theory (I haven't verified this), the more mid-tone information you provide, the better the results will be.

R.


I was using the 15-stops-of-range example as an extreme; what I was trying to say was that theoretically you could get 15 stops of exposure from 3 images. So what this really comes down to is where the camera starts losing detail. If it starts losing information at 1 stop over/under middle grey, then you would benefit from more than three exposures in a +2/-2 HDR situation, but you wouldn't get any added benefit in a +1/-1 situation.
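A quick way to see that arithmetic, assuming (purely hypothetically) that each frame only holds usable detail within some fixed latitude of its own metered exposure:

def coverage(brackets_ev, latitude):
    """EV intervals covered by a bracket set, given a per-frame latitude in stops."""
    return [(ev - latitude, ev + latitude) for ev in brackets_ev]

print(coverage([-2, 0, +2], latitude=1))  # [(-3, -1), (-1, 1), (1, 3)] -- intervals only just touch
print(coverage([-1, 0, +1], latitude=1))  # [(-2, 0), (-1, 1), (0, 2)] -- already overlapping

With -2/0/+2 brackets and only one stop of latitude per frame, the covered ranges barely meet, so extra in-between frames would fill the thin spots; with -1/0/+1 they already overlap, so more frames add nothing.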