So, why doesn't it make sense to use ETTR for the light image and expose to the left for the dark one, to keep the noise level as low as possible?
That is, your darkest point is exposed to a middle value in one frame, and your brightest point is exposed to a middle value in the other.
In fact, a good starting point for determining the ideal bracketing series is to achieve perfect ETTR, and from that exposure take additional higher-exposure shots (at 2 or 3 stop intervals; less is unnecessary, more is risky) until the deepest shadows of interest receive a sufficiently high exposure.
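For what it's worth, that recipe can be sketched in a few lines of Python. The function name and the dynamic-range numbers are my own illustrative assumptions, not anything from a real camera:

```python
# Sketch of the bracketing recipe above: anchor the highlights with the
# ETTR frame, then add higher exposures at a fixed interval until the
# deepest shadows of interest fall inside one capture's usable range.
def bracket_series(scene_dr_stops, camera_dr_stops, interval=3):
    """Exposure offsets (in EV, relative to the ETTR frame) for an HDR bracket."""
    offsets = [0]  # the ETTR frame protects the highlights
    # Each extra shot lifts the shadows by `interval` EV.
    while offsets[-1] + camera_dr_stops < scene_dr_stops:
        offsets.append(offsets[-1] + interval)
    return offsets

# e.g. a 12-stop scene on a camera with ~8 usable stops, 3 EV apart:
print(bracket_series(12, 8))  # [0, 3, 6]
```

A narrower interval simply adds more frames across the same span without changing the endpoints.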
I occasionally get "inversion rings" around the sun when shooting hand-held and using Photoshop to do the synthesis (using one of the "natural" presets and deghosting). Any advice on this? (See attachment: the first is the processed image, the second the middle exposure.)
while I realise that LuLa is the internet home of the photographic measurebators
I think that's an aggressive and insulting word. This is a community and you should show your fellow forum members a little more respect.
As someone who seems to be a stickler for semantics, you should mind your words more carefully.
I have a sufficient understanding of what the term connotes, Jeremy & Bill. I use it the way I use it with that knowledge in mind.
Then I guess you are just an asshole?
Do a search on 'measurbator' and in a second you will have a list of people with a strong inferiority complex regarding the technical aspects of digital photography.
You know, we may actually agree on something GL. Imagine that? ;D
I agree that in a perfect world what you're advocating would be correct. However, what I've done is build in a bit of an 'error factor'. Figuring out that precise ETTR point can be difficult for those who aren't as familiar with the concept or haven't done the necessary testing with their cameras to figure out where that point is. So my goal was to give folks enough room for error that they could still end up with a good result (i.e., no blown highlights, no blocked shadows). You can never shoot too wide a bracket set, but you can shoot too narrow a bracket set. And while I realise that LuLa is the internet home of the photographic measurebators, it's entirely likely that there are people reading this thread who don't fit into that group; hence the repeating of the same philosophy here. It's the difference between writing for a more general audience and writing for the hardcore measurebators.
So, in order to create that 3-shot, 3-stop-interval bracket, manual intervention is needed. Touching the camera while creating a bracket set for HDR introduces all kinds of possibilities for camera movement between shots. I'd much rather shoot the AEB at 1 or 2 stop increments without touching the camera than shoot the 3-shot bracket and have to fiddle with the exposure between shots. Technically, the 3 shots, 3 stops apart, may be all that are needed. Practically, it presents problems.
In our host's Camera to Print and Screen tutorial, he states that he often shoots HDR hand held with good results using auto-align
How are others handling this in the field?
Bill
Are the RAWs developed to TIFF before the HDR merge in Photoshop? If so, my guess is that around such a light source, which is spread out due to clouds/haze, there will be quite large areas where one or more channels, but not all, are clipped, which means that highlight reconstruction will take place in raw development. Highlight reconstruction aims to produce a good-looking result, but it may not be all that "true"; for example, you could get a darker highlight than it should be. When the HDR software then combines these processed files, it may not be able to match the results together. It is better to let the HDR software process the RAW files directly; then it can see exactly where clipping occurs and leave the clipped data out when merging the files.

I am using the "export to HDR" option in Lightroom, so the images magically appear as a merged one in CS5. I would assume that, given such control, Adobe would be able to make the technically best choices in the background.
If Photoshop works directly on the raw files, then I don't have a guess as to what the problem is, though...
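To illustrate why clipped channels matter at the merge stage, here is a toy clipping-aware merge in Python. It is only a sketch of the general idea (zero weight for clipped pixels, everything scaled back to a common exposure); it is not what Photoshop or Lightroom actually do internally, and the clip threshold is an arbitrary assumption:

```python
import numpy as np

def merge_hdr(frames, ev_offsets, clip=0.98):
    """Merge linear raw frames into one radiance map, ignoring clipped pixels.

    frames:     arrays of linear sensor values normalised to [0, 1]
    ev_offsets: exposure of each frame in EV, relative to the first
    """
    num = np.zeros_like(frames[0], dtype=float)
    den = np.zeros_like(frames[0], dtype=float)
    for img, ev in zip(frames, ev_offsets):
        w = (img < clip).astype(float)   # clipped pixels get zero weight
        num += w * img / (2.0 ** ev)     # scale back to the base exposure
        den += w
    return num / np.maximum(den, 1e-9)   # avoid division by zero

# A pixel that clips in the +2 EV frame is recovered from the base frame
# alone; an unclipped pixel is averaged across both frames:
base = np.array([0.5, 0.225])
plus2 = np.array([1.0, 0.9])  # first pixel clipped at 1.0
print(merge_hdr([base, plus2], [0, 2]))
```

Because the merge sees the clipping directly, no "reconstructed" highlight values ever enter the average, which is the point made above about feeding RAWs rather than developed TIFFs to the HDR software.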
I did learn something about my camera by participating in this thread, and began to think about how I would use its features to implement HDR. My main camera (Nikon D3) allows AEB with up to 9 shots, but only in steps of at most 1 EV. If only 3 shots are needed, one can set the camera to take shots at the nominal exposure, +1 EV, and +2 EV. If 5 or more exposures are set, it brackets up and down around the nominal exposure, but one can use exposure compensation to counteract that, so it effectively brackets only above the nominal exposure.
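The compensation trick is just arithmetic; here is a quick sketch (the function name and the symmetric-bracket assumption are mine, not Nikon's firmware):

```python
# A symmetric AEB series brackets around the metered exposure; dialling in
# positive exposure compensation shifts the whole series upward so that it
# effectively brackets only above the base exposure.
def aeb_exposures(shots, step_ev, compensation_ev=0.0):
    """Offsets (in EV) of a symmetric auto-bracket, shifted by compensation."""
    half = (shots - 1) // 2
    return [compensation_ev + step_ev * (i - half) for i in range(shots)]

print(aeb_exposures(5, 1.0))       # [-2.0, -1.0, 0.0, 1.0, 2.0]
print(aeb_exposures(5, 1.0, 2.0))  # [0.0, 1.0, 2.0, 3.0, 4.0]
```

With +2 EV of compensation, the 5-shot, 1 EV series runs from the nominal exposure up to +4 EV instead of straddling it.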
[...]
Alternatively, one could make manual increments of 2 or 3 EV above the base exposure, but this would require touching the camera. One could use autoalign to take care of camera movement. In our host's Camera to Print and Screen tutorial, he states that he often shoots HDR hand held with good results using auto-align, so touching the camera on a stable tripod might not be all that bad.
The issue is, and it's particularly prevalent here on LuLa, that there is a not insignificant number of people who feel that in order to have any success as a photographer in the digital realm one has to have an absolute and complete knowledge of the science.

Who are they, and can you point to any posts supporting your claim?
...I fully agree on this.
Further I'd suggest that an over-reliance on those things and getting too caught up in the pursuit of 'accuracy' or 'perfection' can be detrimental to one's development as a photographer.
I am slightly concerned about (shot) noise differences at the exposure transition zones. With e.g. 4 EV exposure differences, the shot-noise SNR varies by a factor of 4 (sqrt(2^4); the visible difference will be reduced somewhat by the gamma conversion), which could easily be picked up by the ensuing tonemapping operations, and is especially unwelcome on smooth gradients.
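Assuming a shot-noise-limited capture (SNR proportional to the square root of the photon count), the size of that jump is easy to put a number on:

```python
import math

def snr_ratio(ev_gap):
    """SNR ratio between two frames whose exposures differ by ev_gap EV,
    assuming pure shot noise (SNR proportional to sqrt(photon count))."""
    return math.sqrt(2.0 ** ev_gap)

print(snr_ratio(4))  # 4.0 -> a 4 EV gap means a 4x jump in shot-noise SNR
```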
This is a genuine RAW histogram in EV divisions from my old Canon 350D (a 12-bit camera with 8 stops of effective DR, which is below any modern camera) of a 12-stop scene. Any optimal HDR software, given a 3-stop bracketing {0, +3, +6} (the histogram shows the 0 capture, which corresponds to ETTR), should only use the 3-4 upper stops of the whole DR of the camera.

Regards

I think what you are saying could be made clearer with an analogy (if I may):
A circle may be described as a polygon in the limit N -> inf. From your experience, a finite N (e.g. a 3-stop interval) is sufficient, so that no further visible benefit can be had by increasing it.
Something like that. Fewer shots means a worse (lower) SNR threshold (i.e. there will be more visible noise in the noisiest parts of the composite). But when shooting at 3 EV intervals that threshold is still invisible, so there is no need to push for a higher SNR threshold with a narrower shooting interval.
In a modern camera (>9 stops of effective DR), using just the upper 3-4 stops means discarding as many as the lower 5-6 stops of captured information. That means we only make use of la crème de la crème of the captured information. Shooting at 1 EV intervals, we would only use the upper 1-2 stops and reject the lower 7-8 stops; doesn't that sound like a waste?
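The accounting behind those numbers can be sketched like this (the one-stop overlap is my own assumption about how much blending an idealised merge needs):

```python
# With shots spaced `interval_ev` apart, an ideal merge only needs roughly
# the top `interval_ev` stops of each frame (plus a little overlap for
# blending); everything below is superseded by the next, brighter frame.
def stops_used(camera_dr_stops, interval_ev, overlap_ev=1):
    """(stops used, stops discarded) per frame in an idealised merge."""
    used = min(camera_dr_stops, interval_ev + overlap_ev)
    return used, camera_dr_stops - used

for interval in (1, 2, 3):
    used, wasted = stops_used(9, interval)
    print(f"{interval} EV spacing: use the top {used} stops, discard {wasted}")
```

On a 9-stop camera this reproduces the figures above: a 3 EV spacing keeps the top 4 stops and discards 5, while a 1 EV spacing keeps only the top 2 and discards 7.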
In addition to that, depending on how progressive the fusion software is, using many shots can easily reduce overall sharpness if they are not millimetrically aligned, because large parts of the final image can end up being a weighted average of more than one input RAW file.