In this post you seem to directly link Zero Noise to HDR; I'm interested in learning more about its workflow and the specifics of its process. You mentioned the importance of an 'efficient information collection stage': how does Zero Noise differ or operate in this regard? How does its blending differ? In one of your examples you explained "scaled down using nearest neighbour to preserve the original per-pixel SNR". Is this a Zero Noise thing, or something you did just to use the photos as examples here?
I very much want to understand this process better. There are many different opinions and techniques around this kind of processing, and I simply want to know which creates the best results. It's hard to choose the appropriate questions because I don't know them yet. Could you shed some light based on your experience? And is there a Mac solution for Zero Noise?
Marshallarts, the first thing one has to understand is what DR means in digital imaging. We can talk about DR in two stages:
- Capture: record all of the scene's DR information
- Post-processing: tone map the captured DR so the output device (be it a print, a projection, or just a JPEG image displayed on a monitor) shows the captured information
If we don't properly capture all the information, or don't manage to map it so that it becomes visible on the output device, we won't have a high-DR image.
In the first stage, the limitation of the camera in capturing a given range of stops is defined by just one word: NOISE. Any camera can capture a certain number of stops, from RAW saturation down to the shadows where the information gets lost in noise. A camera that can capture more DR is simply a camera in which noise appears at deeper shadows; there is nothing more to it.
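In other words, the usable DR is just the base-2 logarithm of the ratio between the RAW saturation level and the level where the signal drowns in noise. A minimal sketch of that idea (the 14-bit saturation value and the noise floor below are illustrative assumptions, not measurements of any particular camera):

```python
import math

def usable_dr_stops(saturation_level, noise_floor):
    """Usable dynamic range in stops: log2 of the ratio between the RAW
    saturation level and the level where signal gets lost in noise."""
    return math.log2(saturation_level / noise_floor)

# Hypothetical 14-bit sensor whose shadows become unusable below ~30 DN:
print(round(usable_dr_stops(16383, 30), 1))  # about 9 stops
```

Push the noise floor deeper (a cleaner sensor, or more exposure) and the stop count grows; that is the whole game.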
Present cameras capture around 8-9 stops acceptably free of noise, so if the DR of the scene is higher we need to take several shots at different exposures and keep the best of each (non-clipped highlights from the least exposed shots, and noise-free shadows from the most exposed shots). And this is what ZN does: for every pixel, it simply picks the most exposed non-clipped value in the set of RAW files fed into it. If the winner comes from any RAW file other than the least exposed one, its value is scaled down to match the least exposed RAW's exposure.
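The per-pixel selection described above can be sketched like this. This is a simplified model on aligned, linear, single-channel data; the function name, the clipping threshold and the data layout are my assumptions for illustration, not ZN's actual code:

```python
import numpy as np

def zero_noise_blend(raws, evs, clip=0.99):
    """Per-pixel blend of linear RAW frames taken at different exposures.

    raws: list of float arrays in [0, 1], same scene, aligned.
    evs:  exposure of each frame in stops relative to the least
          exposed one (least exposed frame has ev = 0).
    For every pixel, pick the most exposed non-clipped value and scale
    it down by 2**ev so it matches the least exposed frame's exposure.
    """
    order = np.argsort(evs)             # least exposed first
    out = raws[order[0]].astype(float)  # fallback: least exposed frame
    for i in order[1:]:                 # increasingly exposed frames
        usable = raws[i] < clip         # not clipped in this frame
        out[usable] = raws[i][usable] / 2.0 ** evs[i]
    return out

# Two frames 2 stops apart: the +2 EV frame wins wherever it is not
# clipped; its values are divided by 4 to match the darker frame.
dark   = np.array([0.10, 0.20, 0.25])
bright = np.array([0.40, 0.80, 1.00])   # last pixel clipped
print(zero_noise_blend([dark, bright], [0, 2]))
```

The more exposed values carry a better SNR, which is why, after the exposure scaling, the shadows of the result are much cleaner than those of the least exposed frame alone.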
The second stage is up to you. ZN performs no tone mapping at all, so the image it outputs looks disgusting, underexposed and dull (in fact it looks exactly the same as if you developed the least exposed RAW in ACR setting _all_ controls to 0), but it has very high quality in terms of noise and tonal richness. Taking the example of the guy climbing up the stairs, you can mix the 2 RAW files in ZN and then make several copies of ZN's output at different exposures as input for Photomatix; the result will be perfect (according to Photomatix style). But if you feed the two RAW files straight into Photomatix you get completely wrong results, because Photomatix is sub-optimal and unpredictable at the information gathering stage. Its algorithms are probably not clearly split into an information collection stage and a tone mapping stage; instead they seem to do everything at once, so the result varies a lot depending on the number and separation of the input files.
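Generating those exposure-shifted copies from ZN's linear output is nothing more than multiplying by 2^EV and clipping to the displayable range. A sketch of the idea (the EV spacing and names are my choice, not a prescription):

```python
import numpy as np

def exposure_copies(linear, evs=(0, 2, 4)):
    """From one low-key linear image, build exposure-shifted copies
    (multiply by 2**ev, clip to [0, 1]) to feed a tone mapper such
    as Photomatix as if they were a bracketed set."""
    return [np.clip(linear * 2.0 ** ev, 0.0, 1.0) for ev in evs]

base = np.array([0.02, 0.10, 0.30])  # dark-looking but clean ZN output
for ev, img in zip((0, 2, 4), exposure_copies(base)):
    print(f"+{ev} EV:", img)
```

Because every copy comes from the same clean blend, the tone mapper gets consistent, noise-free data at all exposure levels, instead of having to guess across noisy originals.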
The question of the nearest neighbour rescaling has nothing to do with ZN; I just used it (it is called 'Aproximation' resizing in the Spanish version of PS) to preserve the SNR of a 100% crop while displaying a much smaller image. I didn't want the rescaling process to reduce noise, so that the crops show what you can expect when looking at 100%.
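The difference is easy to verify numerically: downscaling by averaging reduces per-pixel noise (averaging a 4x4 block cuts the noise standard deviation roughly by 4), while nearest neighbour simply keeps one original pixel per output pixel, so the noise level is untouched. A small sketch (patch size and noise level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
noise_sigma = 0.05
# Flat grey patch with additive Gaussian noise
img = 0.5 + rng.normal(0.0, noise_sigma, (400, 400))

# 4x downscale, nearest neighbour: keep one pixel per 4x4 block
nearest = img[::4, ::4]
# 4x downscale, box average: mean of each 4x4 block
boxavg = img.reshape(100, 4, 100, 4).mean(axis=(1, 3))

print("original noise std:", img.std())      # ~0.05
print("nearest neighbour: ", nearest.std())  # ~0.05, SNR preserved
print("box average:       ", boxavg.std())   # ~4x lower, noise averaged out
```

So a nearest-neighbour reduction is an honest preview of the 100% crop's noise, while any averaging filter would flatter it.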
BR