So ETTR is trying to put as much data as possible into the most significant bits, in effect lowering the noise floor.
My thought is to take the analog signal, high-pass and low-pass filter it, invert it, digitize all 4 segments (highlights, upper mid, lower mid and shadows), and store each segment as a 16-bit word. I'm thinking that if it were digitized at 2x the frequency, the information would be stored in the most significant byte? In this way all four segments of the signal would have higher resolution. Your thoughts? I haven't thought it out thoroughly, so don't be too critical; I'm just throwing the concept out to see if it has merit.
Oh, all rights reserved Marc McCalmont 2011 just in case it might have commercial merit.
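For what it's worth, the segmented-digitization idea can be sketched in a few lines of Python. This is only a toy model: the quartile segment boundaries, the 16-bit words per segment, and the uniform quantization are my assumptions, since the post doesn't pin any of them down.

```python
import numpy as np

# Toy model of the proposal: split the signal range into four amplitude
# segments (shadows, lower mid, upper mid, highlights), then quantize
# each segment's *offset* with a full 16-bit word.
# Segment boundaries are an assumption; the post doesn't specify them.
EDGES = np.array([0.0, 0.25, 0.5, 0.75, 1.0])

def encode(x):
    """Return (segment index 0-3, 16-bit offset within the segment)."""
    seg = np.clip(np.searchsorted(EDGES, x, side="right") - 1, 0, 3)
    lo, hi = EDGES[seg], EDGES[seg + 1]
    code = np.round((x - lo) / (hi - lo) * 65535).astype(np.uint16)
    return seg, code

def decode(seg, code):
    lo, hi = EDGES[seg], EDGES[seg + 1]
    return lo + code.astype(float) / 65535 * (hi - lo)

x = np.linspace(0, 1, 1001, endpoint=False)
seg, code = encode(x)
err = np.abs(decode(seg, code) - x)
# Worst-case error is a quarter of a single 16-bit step, i.e. roughly
# 18 effective bits -- at the cost of 4x the storage.
print(err.max())
```

The gain over a single 16-bit ADC is just the 2 extra segment-index bits; whether that would survive the analog filtering and inversion stages is the open question.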
As I understand it, the point of ETTR is to collect the maximum number of photons.
These sound like details that would be relevant to the analogue->digital converter on the sensor or in the camera, regardless of exposure. The final step in the above needs to be converting those four 16-bit values into one 16-bit value, or the size of a raw data file is going to quadruple, which is not something that most people would enjoy.
One might argue that the shadows don't have enough levels, and that more in the lower greys might help too.
Dissing having more levels may be fine from a theoretical perspective, and yes, noise is the main reason to use ETTR. But...
In the real world (rather than the theoretical one) it is frequently the case that one wants to manipulate the image (open up shadows, for example, to reveal nuanced detail). If you've used ETTR and then "normalized" the image, you now have many more tonal levels in the shadows than you would otherwise have had.
It doesn't take much time in front of the screen to tell that the benefit of doing this is quite real. And, as with most such things, maybe not in the day to day, but most usefully in the extremes.
Michael
But, in practice, I am confident that I see smoother tonal gradations on ETTR "normalized" shadow areas rather than native ones when I need to strongly open up such tonalities.

Maybe the software is just not doing a great job, that's it... 2x2 = 4, but if your software displays 5, are you going to believe it?
If you reduce the bit depth precision too much (and at the wrong point in the processing) you can indeed induce banding / posterization. I could certainly see that happening if you put an ADC on the sensor that lacks sufficient precision for the task at hand, say a 10-bit or 8-bit ADC on a modern sensor.
Graeme
So do I conclude that I'm right to be trying to use ETTR, even though I may be doing it for the wrong reason???
Yes, ETTR is a bright idea, but avoid clipping. I did some ETTR experiments lately and got bad clipping.
Supporting SNR as the main reason for ETTR:
1.- Suppose (as a reductio ad absurdum) a ridiculously large bit depth, approaching infinity. Then any stop will have more than enough values, so the argument about number of values loses validity. The only argument left is SNR.
2.- Take for example a camera like a Nikon DSLR where you can select either 12 or 14 bit RAW. The highest stop in 12 bits will have 2048 levels, the same as 2 stops under in 14-bit mode. Do you think you will get the same quality doing ETTR in 12 bits as 2 stops below ETTR in 14-bit mode, just because you get the same number of levels? I don't think so.
I don’t think that works as well as you think.
Supporting number of levels as the main reason for ETTR:
1.- Suppose (as a reductio ad absurdum) a ridiculously small noise, approaching zero. Then no stop will have noise, so the argument about noise loses validity. The only argument left is number of values (levels).
2.- The highest stop would have no noise, but so would the lowest stop (and all others in between). Do you think you would get the same quality doing ETTR as shooting 2 stops below ETTR, just because you get the same absence of noise? I don't think so (you would get a combed histogram, possibly leading to visible banding).
What we need is sound theory that applies to actual current cameras, and these both produce noise and work with a fairly small number of levels. The current balance suggests that the main reason ETTR works is by increasing SNR, but that could change in the future.
Anyway, all of this may become moot if cameras ever expose different sensels for different amounts of time (recording the exposure times of course), for then all sensels would be exposed right (ER™) automagically with the lowest noise possible, and the number of levels would only matter in the final, converted, image.
On a side note, Guillermo and others seem to think there is no merit in recording noise more accurately. I think there may be. The more accurately noise is recorded, the better job noise removers can do.
Cheers!
Hi!
Most of the noise is coming from the variation of the number of incident photons. So the only way to make a sensor noise free is to make it very large. Whatever you do, there will be noise, however. Even if you detect all photons, it is well possible that there is no photon to detect. For that reason a noise free sensor is not possible, because light by itself has noise. It comes in quantum size packages called photons. If thousands of photons hit each sensor cell we have good statistics, that is low noise, if only a few photons hit the sensel we have poor statistics and noise.
Best regards
Erik
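Erik's photon-statistics point can be demonstrated with a short Poisson simulation; the photon counts below are arbitrary, chosen only to show the sqrt(N) behaviour.

```python
import numpy as np

# Photon arrival is a Poisson process: a sensel collecting N photons
# on average fluctuates with standard deviation sqrt(N), so the
# signal-to-noise ratio is N / sqrt(N) = sqrt(N).
rng = np.random.default_rng(0)
for n in (100, 10_000):
    photons = rng.poisson(n, size=200_000)
    print(n, photons.mean() / photons.std())  # SNR ~ sqrt(n)
```

A hundred-fold increase in collected photons buys a ten-fold improvement in SNR, which is the statistical core of the ETTR argument.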
Cooled sensors linearize SNR across a very broad dynamic range so deep shadows have similar noise characteristics to midrange data. Cryogenic pumps or liquid nitrogen combined with slow readout make for some pretty spectacular images. Even low levels of cooling help, which is why small scientific grade cameras often use some kind of Peltier cooler.
Correct me if I'm wrong, but as far as I know, cooling the sensor reduces the read noise, but there is nothing you can do about shot noise, which is an inherent property of light. Since photon noise is proportional to the square root of the photon count, the higher the photon count, the higher the signal-to-noise ratio.
Francisco, shot noise is a property of the detector package. You reduce shot noise with cooling. You reduce read noise in various ways, primarily by slowing the readout. True 16-bit cameras tend to read out at about 1 kHz.
Really, none of this has much to do with whether or not to use ETTR. That depends on the exposure latitude that you need. I was just suggesting that it is reasonable for shadows to open up better when ETTR is used.
Ok, does anyone disagree with the following:
When one exposes a zone II shadow as zone V (to use real photography terms ;)) and then subsequently places that detail back down into zone II in post-processing, that shadow is much 'cleaner' than if it had simply been exposed at zone II to begin with. No, there aren't any more tonal values in the shadow areas this way, but there is a qualitative difference.
The same results hold true in a slightly different way for higher tonal values where noise is less of an issue with a 'straight' capture to begin with. But there, too, my anecdotal experience is that the 'extra' data makes a difference.
If anyone has a different experience of this, I would like to know.
It makes sense that this is a result of obtaining data with a better s/nr at capture. But the 'why' doesn't really matter if the result is as described, does it?
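The zone II vs zone V experiment above can be mimicked with a shot-noise-only simulation. Read noise is deliberately ignored (an assumption that actually favours the native exposure; including it would widen the gap further), and the photon counts are hypothetical.

```python
import numpy as np

# A 'zone II' patch collecting ~200 photons on average, shot natively
# vs exposed 3 stops up (x8 photons) and scaled back down in post --
# the ETTR 'normalize' step described above.
rng = np.random.default_rng(1)
n, boost, pixels = 200, 8, 100_000

native = rng.poisson(n, pixels).astype(float)
ettr = rng.poisson(n * boost, pixels) / boost  # pull back to zone II

print(native.std() / ettr.std())  # ~sqrt(8), i.e. ~2.8x less noise
```

Both patches end up at the same mean brightness, but the normalized ETTR patch carries roughly sqrt(8) times less noise, matching the 'cleaner shadow' observation.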
- N.
"But the 'why' doesn't really matter if the result is as described, does it?" - not really. It's results that count :-) However, I always like to get to the bottom of things and I want to know "why?" <BIG SNIP> Either way you ETTR - but the "why" is different.
Graeme
Graeme, no need to get into a discussion of DR. LOTS of that elsewhere. I will just make two statements and you can decide if one limits DR.
1. I have 12 bit precision across a 10 stop range.
2. I have 12 bit precision across 8 stops of a 10 stop range (ETTR).

Over simplistic and misleading.
On a side note, Guillermo and others seem to think there is no merit in recording noise more accurately. I think there may be. The more accurately noise is recorded, the better job noise removers can do.

Can you show us an example where a noise remover did a better job thanks to having more levels?
over simplistic and misleading.
The dynamic range compression is peripheral to all this but I will try to make up for the simplistic bit. Without ETTR a camera might give us a 1000:1 range between white and black clipping, at our desired level of precision. With ETTR it might be 800:1 because we concede that shadow areas are not as good as we would like. Of course, the camera response does not change. It still covers the same range it always did, so I can understand the position that the device DR is the same. However, the image luminance range is not. It is narrower. You can't use ETTR with full range data.

It is no secret that at black clipping we have weak data. If our dynamic range is less than the sensor's, moving all the data to the right records that data with more precision and less noise. We do not lose any data in the highlights. The data is not full range; that's why you can use EttR. No one ever stated that you should use EttR with every image you take, and I believe anyone who uses it will say it has little use if the DR of the scene equals or exceeds the DR of the sensor. However, in a great many shooting situations with the sensor dynamic ranges of today's cameras, you will have plenty of headroom to move your data to the right. Yes, there may be no data at the black clipping point, but so what? You still have the same relationship of the white point of the scene to the black point of the scene. The fact that they don't match the clip points of the sensor really doesn't matter.
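How much room there is to move the data to the right is just a base-2 logarithm of the gap between the brightest raw value in the scene and the clipping point. A sketch, using a hypothetical 14-bit white level (real cameras record theirs in the raw metadata):

```python
import numpy as np

def headroom_stops(raw_values, raw_clip):
    """Stops of ETTR headroom before the brightest raw value clips."""
    return float(np.log2(raw_clip / raw_values.max()))

raw = np.array([40, 900, 3600])    # hypothetical raw samples
print(headroom_stops(raw, 15871))  # ~2.1 stops of room to shift right
```

When the result approaches zero, the scene DR already fills the sensor, which is exactly the case where ETTR stops helping.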
Here we are again on a perennially popular topic with an acronym suggestive of aliens from outer space. ;D

I don't think anyone suggests that EttR has replaced aperture/shutter speed/ISO as the priority settings in determining an exposure. All that is suggested is that an exposure based on EttR often will leave you with better raw data than the setting chosen by your camera, if you have the headroom and don't have to compromise those other areas.
Getting the best and most appropriate exposure for a particular shot has always been a basic technical requirement for serious photographers.
However, it needs to be stressed that the best exposure is not necessarily the exposure which maximises the photon count and produces the lowest noise in the shadows, i.e. an ETTR.
The conditions for an ETTR are generally constrained by DoF requirements, subject movement and the intensity of available light (in the absence of flash).
In fact, I would say that achieving an ETTR, in the sense of maximising the photon count, may be the last consideration.
Choosing the appropriate aperture for the desired DoF, and a shutter speed sufficient to freeze both camera and subject movement, is surely of greater priority.
Only after having selected an appropriate shutter speed and aperture should one then address the implications of ETTR, which may mean increasing shutter speed at base ISO to avoid overexposure, or increasing ISO. With a camera like the D7000 or K5, there's really no need to increase ISO. If the desired aperture and minimum shutter speed for a sharp result also mean an underexposure at base ISO, then so be it. It can't be helped.
Of course, if one has the luxury of time on one's side, if the subject is static, and the camera is on tripod, then there is surely no problem regarding 'correct' exposure.
The problem of ETTR arises when one doesn't have sufficient time to manually get the settings right for a particular scene because one is trying to 'capture the moment'. In these circumstances, an adjustable feature in the camera that would guarantee an ETTR could be useful.
However, such a feature would also have its own problems. It would be another camera adjustment to get right, and when it wasn't right, the shot might be ruined.
Can you show us an example where a noise remover did a better job thanks to having more levels?.
In the article: DO RAW BITS MATTER? (http://www.guillermoluijk.com/article/rawbits/index.htm) I developed some RAW files with a decreasing number of bits (i.e. I rounded RAW numbers before demosaicing to emulate ADC with less bits).
For the Canon 40D, 12 bits proved to be enough. The extra 2 bits didn't improve the useful information recorded.
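The emulation described in the article is straightforward to reproduce: zero out the low-order bits of each raw value before demosaicing. The sketch below uses bit truncation rather than true rounding, a small departure from the article's description, and the sample values are made up.

```python
import numpy as np

def reduce_bits(raw14, target_bits):
    """Emulate a coarser ADC by dropping low-order bits of 14-bit data."""
    shift = 14 - target_bits
    # Shifting right then left zeroes the discarded bits (truncation,
    # not rounding -- close enough for the emulation).
    return (raw14 >> shift) << shift

raw = np.array([5, 129, 8190, 16383], dtype=np.uint16)
print(reduce_bits(raw, 12))  # [4 128 8188 16380] -- low 2 bits dropped
```

Running the reduced file through the normal raw pipeline then shows at what bit depth the output visibly degrades.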
Unfortunately there is absolutely no easy way to use EttR with current cameras. You have to shoot, examine the histo, and adjust (and you're guessing at that, because the histo isn't telling you enough about the raw data).
I don't think anyone suggests that EttR has replaced aperture/shutter speed/ISO as the priority settings in determining an exposure. All that is suggested is that an exposure based on EttR often will leave you with better raw data than the setting chosen by your camera, if you have the headroom and don't have to compromise those other areas.
It's as if suggesting ETTR implies that all the aspects of sound image capture we've been practicing for 100+ years are to be ignored or aren't valid any longer. ETTR is about idealized exposure for raw data when you have the time and desire for idealized data; all other photographic practices are still in effect.
Michael in his recent article on the topic of ETTR was lamenting the fact that manufacturers have not yet designed a camera that will guarantee an automatic ETTR.
My experience with EttR has been mixed. I have found that sometimes when I have a very narrow dynamic range (flat light), I can make a crummy lighting situation better by having more levels in the high range to be stretched in post processing.
On the other hand, I find more frequently that I don't like the results (particularly in skies). Sometimes this is because I have used the luminance histogram and missed the fact that I actually clipped one of the channels and not the others. This often creates unattractive cyan highlights in skies...yes, yes, I was too aggressive with EttR. I know that, but it can be easy to miss if you don't do some substantial experimenting with exposure and the RGB and luminance histogram.
When you have time, substantially underexpose the image to see if there are any spikes way off to the right of the graph. Of course, we should be able to see this with our eyes, but generally too much dallying around means you miss the shot.
As a woodland photographer, shadows are key, but perhaps more key is not blowing out the very small areas of light filtering between leaves. I have learned to accept that sometimes blacks are a part of the image. Going to extremes to rescue detail in those blacks doesn't always enhance the image, particularly if I find very small highlights blooming in the branches of the trees. Yes, again, I recognize that I was too aggressive with EttR, but this is my main point: when I have used EttR in woodland photography, I very frequently blow the shot.
I think someone said it above...If the dynamic range of the image exceeds the dynamic range capability of the camera (always in woodlands photography) then EttR doesn't help. When dynamic range is less than the range of the camera, EttR can vastly improve an image. The trouble is that in the moment of capture, it can be easy to misjudge these things.
As Ken mentioned in the other thread, what about color issues when you expose mid tones too far right?
In my view ETTR means is that we try to utilize the full histogram, thus essentially optimizing available DR and minimizing noise relative to signal.
"Regarding ETTR I already mentioned this, but let me rephrase - ETTR as a universal approach for all kinds of shooting is wrong, and unfortunately most people treat it this way no matter what and how they shoot. It's easy to understand and there is even some technical explanation behind it, but in fact it doesn't tell the whole story. ETTR as an approach when we are trying to open shadows without clipping highlights is a valid technique when needed, and as long as we understand what we gain and what we lose there is absolutely nothing wrong with that."
But, I don't really agree with the warnings regarding color. I've not experienced that with ACR/LR.
The most obvious example I can think of is a blue sky where the red channel is partially clipped. The result is a sky that can look more cyan than it should.
It's difficult that the red channel clips in the RAW file. If it does, the green channel will be clipped as well most of the times. So surely if the red channel got clipped and not the green channel, it happened at post processing.
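What Rawnalyze does for you can be approximated by checking each channel of the undemosaiced data against the raw clipping point, before white balance touches anything. The clip level and sample values below are hypothetical.

```python
import numpy as np

def clipped_fraction(channel, raw_clip):
    """Fraction of a raw channel's pixels at or above the clip point."""
    return float(np.mean(channel >= raw_clip))

raw_clip = 15871  # hypothetical 14-bit white level
green = np.array([15871, 15871, 12000, 15871])
red = np.array([9000, 11000, 8500, 10200])

print(clipped_fraction(green, raw_clip))  # 0.75 -- green mostly gone
print(clipped_fraction(red, raw_clip))    # 0.0  -- red intact
```

A pattern like this, green clipped while red survives, matches the point above: if red is clipped in the raw file, green almost certainly is too, so red-only clipping usually happened in post.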
Taking readings from the left-hand corner I get values of 134, 151, 197. (...)
Look, if you know what you are doing, you'll know when and how to use ETTR... if you don't, go right ahead and flail about like regular people and leave image quality on the table; your choice.

That sums it up perfectly for me. Thanks, Jeff.
Readings from the RAW developer are irrelevant, you don't know what happened to your RAW values before they were displayed (exposure correction, white balance, highlight (recovery) strategies,...). Just open your RAW file into Rawnalyze and inspect the genuine RAW histograms to find out.
A RAW developer is not a tool suited to analyze RAW files, it is a tool designed to develop them.
Regards
However, the fact is that ACR and Photoshop are the programs I use to develop my images. I don't see much advantage in using one program to analyse my RAW images and another program to develop them.
I have no problem with the above...I don't suggest ALWAYS doing ETTR...only when it's appropriate. But, I don't really agree with the warnings regarding color. I've not experienced that with ACR/LR. Course, I'm pretty good at adjusting both tone and color with both. Can't comment on Raw Photo Processor cause I've never used it...maybe it's more of a problem for Raw Photo Processor than ACR/LR?
Look, if you know what you are doing, you'll know when and how to use ETTR... if you don't, go right ahead and flail about like regular people and leave image quality on the table; your choice.
If anything, it may be because he has more discerning color expectations than you do, and I only mean that in the sense that he probably has higher color expectations than ANYONE, so it's tough to draw the line between theory and practice.
Hum...really? I actually got a chuckle out of that.
How do you know that you do not "leave image quality on the table" (c) Schewe if you did not try RPP?
I have no interest in testing/using other 3rd party raw processors... I'm kinda invested in ACR/LR ya know?

Nobody doubts the business side of that...
Honestly, it makes me a little nervous if the people working with Adobe haven't thoroughly investigated other converters like RPP, as I've been hoping they'd use it as an example on how to improve their converter in the future. My mistake.
One would think that trying other 3rd party converters would help in making the one you're involved with better. I'm a paying customer of LR3, but I still export raws to RPP all the time, and it certainly isn't because I want to add more steps to my workflow. Granted, I guess if I'd never tried RPP in the first place, my ignorance would be bliss.
Honestly, it makes me a little nervous if the people working with Adobe haven't thoroughly investigated other converters like RPP, as I've been hoping they'd use it as an example on how to improve their converter in the future. My mistake.
Actually, it's not at all unusual to specifically NOT use other products in the tech industry...makes it a lot easier to testify that another product had no influence in the development of a competing product. Same reason why I've personally never tested anything from NIK software while being involved in the development of PhotoKit Sharpener (still haven't).
It might be nice if the developers would port RPP to Win7 so the other half of the photo world could give it a try. Until that happens, I'm an Adobe captive (though there is nothing wrong with that mind you).
Yeah, unfortunately, it's only an OSX program, and it can't be easily converted. According to the makers, "I do not see Colorsync being implemented for Linux. RPP relies on several OS X mechanisms, and porting it to any OS that does not support those is a major effort, literally re-writing RPP from the scratch is needed."
Jeff, if you're not actively testing other converters, I would think that your opinion on the ETTR matter should be noted as being relevant only in the case of using ACR, no?
You may want to give Raw Therapee a shot. It is also very good.
Too bad. I was always taught to keep machine dependent code separate from the underlying program code so that porting it would not be so difficult. I guess I will just have to take a pass at this.
Hi Ray,
It might teach a valuable lesson (without having to repeat it for every file), e.g. that Highlight Recovery should only be used after an adjustment of the exposure (and perhaps the brightness) slider(s). When Rawnalyze tells you that there are no clipped highlights, then why use the HR tool?
Cheers,
Bart
I don't see much advantage in using one program to analyse my RAW images and another program to develop them.
Making more extreme adjustments to my previous example, it seems clear that the blue sky is definitely blown out to a degree which ACR cannot rectify. However, I doubt that any other converter could do a more convincing job of reconstructing that lost data.
... ETTR is about optimal exposure for the data. If you clip and didn’t wish to, that was an exposure error.
We could then leave Plato's cave (http://en.wikipedia.org/wiki/Allegory_of_the_Cave) where ACR prisoners live, jump the wall and look into the real world of RAW. Just for academic purposes of course.

;D Terrific, Guillermo!
(http://img7.imageshack.us/img7/6735/cavel.jpg)
;D Terrific, Guillermo!

I didn't know that JPEG, ACR, PS and other stuff was already used by the Greeks…
Correct...I never said anything to imply that I was referring to ALL raw processors. In fact, the only other raw processors I've looked into is the camera company's offerings and in the case of my P-65+, Capture One (which handles ETTR pretty much the same way as ACR/LR).
The ONLY raw processor I claim to be an expert on is ACR/LR...I kinda have to be an expert to write a book on the subject. (and I DON'T claim to be an expert on C1...just an average user :~)
Jeff, probably an unfair question, but as a user of a P65+ I would be interested to know if you prefer Capture One or LR for your Phase RAW files.
I didn't know that JPEG, ACR, PS and other stuff was already used by the Greeks…

I think Plato was the first to urge photographing everything in RAW. He also insisted that only Philosophers should be allowed to use Photoshop.
If you don't want to analyse that RAW file perhaps you could upload it to some fileserver and I'd do it for you. We could then leave Plato's cave (http://en.wikipedia.org/wiki/Allegory_of_the_Cave) where ACR prisoners live, jump the wall and look into the real world of RAW. Just for academic purposes of course.
I notice you have a Hotmail address. Is Hotmail still restricted to 10MB limits for attachments? The RAW file is 13.7MB.
Preliminary testing I have done with LR indicates that both cameras tested have a latitude for ETTR of 1.5 stops when exposure is adjusted so that we are just short of blinking highlights on step 10 of 41 total steps.
Guillermo,
I'd be interested in any quick method of reconstructing a blown sky, as in my example, which is fairly typical of the problem: a shift from a natural blue in the darkest part of the sky to an unnatural cyan, then sometimes to a complete blow-out in the brightest part of the sky.
I considered a gentle cyan hue in these backlit scenes as a sort of natural (deliberate) part of the sky transitions under such conditions. Leaving aside the big topic of what the human eye and the camera each see, and crucially how they see it, under these conditions, I just tried suppressing some cyan tones in a sky with strong highlights in what happened to be my last edit. I think I will use these adjustments more in the future, as I like the result of a more uniform sky in this sense.
I just looked at the referenced image, and while I see it's a different case, the exchange gave me an interesting impulse; thanks.
Best,
Hynek
I much prefer printing from Lightroom even over Photoshop and I think C1's printing isn't great yet...fine for contact sheets. I also think C1's asset management is primitive...it'll be interesting to see what Phase One does with asset management now that they have Expression Media. iView Media Pro was a really good early asset management app that MSFT let falter. I hope P1 can bring it back...
I've read Emil's treatise, and have great respect for his work. And, I also understand the theory.
But, in practice, I am confident that I see smoother tonal gradations on ETTR "normalized" shadow areas rather than native ones when I need to strongly open up such tonalities.
So, who do I believe? As the old joke has it, "The experts, or my lying eyes"?
Michael
Ps: Bumble bees can indeed fly, and prove it to themselves every day.
I notice you have a Hotmail address. Is Hotmail still restricted to 10MB limits for attachments? The RAW file is 13.7MB.
I don't agree that avoiding this clipping is just a matter of always ensuring you haven't overexposed a highlight in one channel.
Because I don't believe that on the back of the camera you have enough information to know that you haven't lost data off the right side of the histogram.
(...)
as I increased the exposure to move to the right the shape of the histogram changes--the slope and magnitude of each peak changed. This suggests that the histogram isn't representing a linear transformation of the exposure data as you move to the right.
On reflection, and after a bit of experimentation, I get the impression that the degree of transition from blue to cyan in a partially blown sky has been reduced in CS5. Or perhaps it's the case that the number of controls in ACR, to enable such reduction, has been increased.
Here the areas where a RAW channel was clipped (note that the R channel was the only one intact across the entire scene):
Interesting! An excellent demonstration of the situation, Guillermo. Thanks. It makes sense that the red channel would be the last to be clipped in a blue sky, unless it were a sunrise or sunset.
I guess someone at Adobe has decided that a reconstruction of a blown sky that leans towards cyan is more acceptable than one which leans towards magenta.
I've generally found ACR to be either better than other converters at recovering highlights, or at least as good, whenever I've taken the trouble to make a comparison.
Making more extreme adjustments to my previous example, it seems clear that the blue sky is definitely blown out to a degree which ACR cannot rectify. However, I doubt that any other converter could do a more convincing job of reconstructing that lost data.
Hi Ray,
It might teach a valuable lesson (without having to repeat it for every file), e.g. that Highlight Recovery should only be used after an adjustment of the exposure (and perhaps the brightness) slider(s). When Rawnalyze tells you that there are no clipped highlights, then why use the HR tool?
Cheers,
Bart
It depends how ACR does HR. The raw file may have no clipped highlights, but WB can send some channels past the white point.
A good HR tool should be able to distinguish channels that are clipped because of WB vs those that are clipped because the raw data is clipped.
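Emil's distinction can be made mechanical: a channel is truly lost only if the raw value hit the sensor's clip point; if it only exceeds the white point after the WB multiplier is applied, backing the conversion off recovers it. The multipliers and clip level below are hypothetical.

```python
def classify_pixel(raw_value, wb_gain, raw_clip, white_point=1.0):
    """Was a channel clipped at the sensor, or only pushed over by WB?"""
    if raw_value >= raw_clip:
        return "raw-clipped"              # data truly lost at capture
    if (raw_value / raw_clip) * wb_gain >= white_point:
        return "wb-clipped"               # recoverable with -EV / milder WB
    return "ok"

print(classify_pixel(15871, 2.0, 15871))  # raw-clipped
print(classify_pixel(9000, 2.0, 15871))   # wb-clipped (0.567 * 2 > 1)
print(classify_pixel(6000, 2.0, 15871))   # ok (0.378 * 2 < 1)
```

A good HR tool would treat only the first case as a reconstruction problem; the second just needs gentler scaling.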
Hi Emil,
A good point, although the potentially best method to avoid data clipping due to WB would be a bit more -EV correction at the Raw conversion stage.
May I have the raw file to play with?
Here is an example that works particularly well with RawTherapee's (in the dev version, 4.0) recently revamped color propagation method of highlight recovery:
Here's RawTherapee on the first one. The Color Propagation recovery tool is still experimental -- I had to adjust the pre-demosaic CA correction manually to keep the CA in the image from infecting the highlight recovery. I'll have to fix that in the next iteration. The blown area is still a little too pink for my taste.
The second image is too blown to recover anything in the sky -- there is no unblown region to inpaint from.
(http://theory.uchicago.edu/~ejm/pix/20d/posts/ojo/ray-blownsky.jpg)
Hi Emil,
That's amazing! Are you involved in the development of RAW Therapee? Version 4 looks as though it could be very useful. I look forward to using it.
Yes, I started revamping the image processing pipeline when it went open source a little over a year ago. We're slowly getting our act together...