Given the recent discussions about dynamic range and handling scenes of large subject brightness range, some folks might be interested in a new series of articles by Uwe Steinmuller of Digital Outback Photo (http://www.outbackphoto.com), published at DPR. Here is part 1: http://www.dpreview.com/learn/?/Guides/The_art_of_HDR_Photography_part_1_01.htm
Human vision works in quite a different way to our cameras. We all know that our eyes adapt to scenes; when it gets darker our pupils open, and when it gets brighter they close. This process often takes quite a while (it's not instant). It is said that our eyes can see a Dynamic Range of 10 f-stops (1:1024) without adapting the pupils and overall about 24 f-stops.
Given the recent discussions about dynamic range and handling scenes of large subject brightness range, some folks might be interested in a new series of articles by Uwe Steinmuller of Digital Outback Photo...
It's a matter of taste, right? I don't like extreme HDR as seen in the article, but a lot of folks do.
It's not only a matter of taste but a matter of skill in image processing
Here's an example
Yeah, ok...but ya know, if an image looks surreal, (as in an obviously condensed tonal range) I'm not sure that is particularly interesting (nor useful) for people who want a reasonably realistic representation of the original scene...
Most HDR type stuff looks phony...and while it may be trendy, it's not really all that desirable, is it? Really?
Just saying...
It's ok to look at something and say it looks like crap, if it is...
I'm glad I'm not the only one...
The very first photo (the arches) has all the life squeezed out of it; scrolling down, one of the originals straight out of the camera looks much better. I'm not saying it couldn't benefit from HDR or related techniques (digital blending or exposure fusion), but it's clear that the article is not written with realistic results in mind.
It's not only a matter of taste but a matter of skill in image processing; that is, the ability to adjust the tonality and hues to taste so that the image looks natural. This is something I would have thought Jeff Schewe would have no trouble doing.
If it looks phony, it moves out of the photographic realm and into an illustration realm. Just cause modern tools make it "easy" to do something doesn't make it desirable. Actually, the same could be said for a lot of digital imaging techniques...just cause you CAN do it doesn't mean you SHOULD do it, ya know?
G'Day Ray!
Of course it is a matter of skills. Totally agree. Steinmuller and his wife are very good at this. However, I find some of their HDR images a little too extreme for my taste. They do not look real to me. But as I said, it is a matter of taste. I don't agree with Schewe that this is crap. It's like art; I don't like baroque, but I like impressionists like Claude Monet. It's the same with HDR.
Nice examples you put up. Waiting to see some good examples from your D7000 ;)
- John
I agree that because you can do it doesn't mean that you should. However, to give people like Uwe Steinmuller the benefit of the doubt, I think it's likely that sometimes the photographer may just be demonstrating what's possible with regard to increased DR and lower shadow noise, in the clearest and most obvious manner so all can see, even the untrained eye.
......... but I saw it as an example of what you SHOULD NOT do, not what you would WANT to do.
I can't agree with that, Jeff.
So, you think Uwe's opening shot with the arches is a shining example of a good use of HDR? Hum...we looking at the same image bud?

The way it's captioned, it doesn't look like he meant it to be an example of bad HDR. If that was the intention, I think it was poorly communicated. Frankly, I'm no longer surprised when I see articles about getting natural results with HDR tonemapping in which the examples look bad. It seems to be the norm, rather than the exception.
So, you think Uwe's opening shot with the arches is a shining example of a good use of HDR? Hum...we looking at the same image bud?
It's ugly even to our eyes, and HDR isn't going to change that. So while there are times when HDR or other exposure blending techniques can be useful, the simple fact is that HDR cannot save an image shot in crappy light, no matter how much one twiddles with the sliders in Photomatix.
No. I'm just defending a person's right to produce whatever type of image he wants, irrespective of certain peoples' opinions of its merit.
So you are ok with somebody advocating crap, right?
I just want to be perfectly clear here...you think Uwe is doing a public service by teaching people to take a crap image and process it via HDR to get an HDR piece of crap image, right?
I'm fine with people making whatever imagery they want to make in the privacy of their own artwork.
But, I have a problem when somebody touts themselves as some sort of expert and advocates an approach to photography that produces imagery that is substantially less than useful...or furthers a process that is far more complex and difficult to do well than some tutorial on the web seems to indicate. It takes talent and effort to do a proper tonemapping that doesn't look phony.
Come on, truth be told...do you honestly think his tutorial is really useful or are you simply trying to find some sort of point to argue with me?
In this version the highlights show detail, the shadows are not blocked and the flatness is gone. This would not be our final version. We usually optimize the photo in Photoshop CS5:
In that case, the two of you ;) are alone in assuming that HDR images, and more importantly the subsequent tonemapping, are "not interesting (nor useful) for people who want a reasonably realistic representation of the original scene...".
Sure, one can (very easily) produce crappy pictures using these techniques, but one can also achieve realistic results that cannot be achieved with other techniques (unless one does time-consuming manual exposure blending/masking).
I somewhat agree with your observation about "the life squeezed out of it", but I wouldn't confuse one person's processing preferences with the capability to produce vastly different (more to your liking) renderings of the same base images. Tonemapping is as much an art as it is a technical skill.
So you are ok with somebody advocating crap, right?
Yeah, ok...but ya know, if an image looks surreal, (as in an obviously condensed tonal range) I'm not sure that is particularly interesting (nor useful) for people who want a reasonably realistic representation of the original scene...

I agree that there is a lot of artistically worthless "because I can" HDR stuff out there. And for my tastes, Uwe has brought the highlights down a bit too much in his examples --- though perhaps for illustrative rather than artistic purposes.
Most HDR type stuff looks phony...and while it may be trendy, it's not really all that desirable, is it? Really?
However I was rather impressed with an HDR article here on LL by Alexandre Buisse.
Since there are examples showing that HDR can make natural images, and all(?) the published descriptions don't seem to work, does that mean the "secret" method is too valuable to share -- or are natural images just a result of trial and error?
The majority of Uwe's HDR work seems to have that "fake" look. And I must say, I am no great fan.
However I was rather impressed with an HDR article here on LL by Alexandre Buisse.
http://www.luminous-landscape.com/essays/hdr-plea.shtml
I struggle to tell which of Alexandre's shots are HDR. Well done!
Ray,
They are very saturated. Did you crank up saturation or did you just happen to have a colorful evening? ;) The scene brightness is certainly very high though.
Ray, you could have gotten a few things out of using HDR in those images.
By supersampling the image into 32-bit space, you could have increased the fidelity on the low tones after tonemapping, and sculpted the shoulder on the highlights, perhaps to be a bit more like slide film. I think both of these would have been extremely beneficial. There was a lot of good detail in the trees that would have been less vague, and you could have preserved a bit of the gradation in the sunset. You could have done all this without intrusive artifacts.
Great shot anyway Ray!
Hmm, when Schewe used the word surreal in his first post, I thought he was using the word loosely, as most people do. But Uwe's first image is a lot like the surrealist painter Giorgio de Chirico's paintings of similar scenes...It had never occurred to me that de Chirico was an HDR painter, but he was. 8-)

I believe that some painters used color hue/saturation to overcome the dynamic range limitations of the medium? The work of painters is perhaps an important clue to how real scenes should be mapped to limited media in a way that happens to agree with human taste.
JC
Since there are examples showing that HDR can make natural images, and all(?) the published descriptions don't seem to work, does that mean the "secret" method is too valuable to share -- or are natural images just a result of trial and error?
I think it's a little unfair that HDR gets such a bad rap when people are Topazing the shit out of images and no one seems to bat an eye. Or when guys like Dave Hill develop a processing methodology that creates anything but a realistic look and people fawn over it. Doesn't make a lot of sense. What it does do, however, is provide further proof that the appreciation of 'art' and what is or isn't 'art' is entirely subjective.
For one, I would expect 'art' to be something different, or more than 'a realistic snap of a scene'. B&W? Filmgrain? non-linear response of film? Eerie long exposures of waves that does not map to anything that I can see with my bare eyes?
If anything 'non-realistic' is bad, then a lot of photography is bad. If some non-realistic photography is good, then no photography should be dismissed for being non-realistic. Perhaps for being 'too radical compared to what we are culturally used to', or 'too easy to accomplish for casual users and therefore not worthwhile', or simply 'not according to my taste'.
-h
You have managed to sneak in, possibly "under the radar," a profound reminder on the subject of photography as art, which is regrettably somewhat rare in this forum. Many thanks.
"If our cameras could capture high dynamic range scenes in a single shot we wouldn't need the techniques described in these articles."
"Today's Monitors: 1:300-1:1000 -> 8,2-10 stops
HDR monitors 1:30000 (watch your eyes, may get stressed) -> 14,9 stops
Printers on glossy media: about 1:200 -> 7,6 stops
Printers on matte fine art papers: below 1:100 -> 6,6 stops"
A bit disappointed to read this:

Surprised to read this from you! One very good reason for bracketing exposures is because of the properties of supersampling a scene into 32-bit space. Just the increase in fidelity on the lowest tones is worth the effort. Certainly, the newer cameras will be quite accurate at quantizing the lowest tones in a scene into 2-3 bit quantities, but only with supersampling do you stand a chance of increasing the resolution of those tones. Of course, if one is shooting digital as though it were slide film, this might matter less. But to the rest of us, it matters.
It makes me think Steinmueller didn't really get that bracketing for HDR will soon be unnecessary, and that it is not part of the definition of HDR itself. The only reason we have today for bracketing HDR scenes is that sensors are still too noisy to capture in a single shot the entire DR of many real world scenes.
One very good reason for bracketing exposures is because of the properties of supersampling a scene into 32-bit space. Just the increase in fidelity on the lowest tones is worth the effort.

Well, did you read and see this (http://www.luminous-landscape.com/forum/index.php?topic=49200.msg409770#msg409770)? Seems that 16 bits already allow a fair amount of margin.
BTW from my experience I think Steinmueller's DR figures for the output devices are too optimistic:
I have measured real DR in normal observation conditions (i.e. ambient lighting) and my HP LP2475W monitor yielded 6,7 stops (http://www.guillermoluijk.com/quickwin/mpdrange/monitor.gif) (vs 8,2-10), and a printed copy on Fujifilm glossy paper yielded 4,3 stops (http://www.guillermoluijk.com/quickwin/mpdrange/papel.gif) (vs 6,6-7,6). I'd love to find out what an HDR monitor looks like!
Regards
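For anyone wanting to check these figures, both the article's numbers and the measured ones are just base-2 logarithms of the contrast ratio (one stop is one doubling of luminance). A quick sketch of the conversion:

```python
import math

def ratio_to_stops(r):
    """One photographic stop is a doubling of luminance, so a
    contrast ratio of r corresponds to log2(r) stops."""
    return math.log2(r)

def stops_to_ratio(s):
    """Inverse conversion: s stops span a 2**s contrast ratio."""
    return 2.0 ** s

# The article's device figures, reproduced:
for label, ratio in [("monitor, low end", 300), ("monitor, high end", 1000),
                     ("HDR monitor", 30000), ("glossy print", 200),
                     ("matte fine-art print", 100)]:
    print(f"{label}: 1:{ratio} -> {ratio_to_stops(ratio):.1f} stops")

# The measured 6,7 and 4,3 stops correspond to roughly 1:104 and 1:20:
print(f"6.7 stops -> 1:{stops_to_ratio(6.7):.0f}")
print(f"4.3 stops -> 1:{stops_to_ratio(4.3):.0f}")
```

Running this reproduces the 8,2 / 10 / 14,9 / 7,6 / 6,6 stop figures quoted above, and shows how much lower the measured contrast ratios are than the manufacturers' specs.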
John, you've made the same mistake as many others. You've done it with respect to art in general as opposed to the ones who address HDR specifically. You've imparted your objective position onto a subjective subject. And that is what's wrong. And that's not a subjective issue. HJ has suggested what he 'expects' art to be. An expectation isn't a hard and fast, objective construct. Anything that captures or freezes a moment in time isn't realistic. If I can't go to that place and see exactly what is in that photo or painting or movie or drawing or 3D rendering then it's not realistic. The only true realism is what I, or anyone else, can see with my own eyes. I can choose to believe or not the reality someone else saw and the way they present that reality to me and accept it as real but it's not truly real to me.
Surprised to read this from you! One very good reason for bracketing exposures is because of the properties of supersampling a scene into 32-bit space.

Bracketing will always mean an advantage in minimising noise and having greater tonal richness (BTW no need of 32-bit floating point formats for that, a 16-bit integer with gamma can encode 99,99% real world HDR scenes. Try to download this TIFF file: superhdr.tif (http://www.guillermoluijk.com/download/superhdr.tif) that can be pushed 12EV without noise or posterization).
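The 16-bit-integer-plus-gamma claim is easy to sanity-check with a little arithmetic. The exact encoding curve of superhdr.tif is not stated, so the gamma value below is an assumption, purely for illustration:

```python
import math

GAMMA = 2.2     # assumed encoding gamma -- the real file's curve may differ
LEVELS = 65535  # code values available in a 16-bit integer channel

def code_for(stops_below_white):
    """Code value assigned to a tone n stops below clipping under a
    pure power-law (gamma) encoding: code = (2**-n)**(1/gamma) * LEVELS."""
    linear = 2.0 ** -stops_below_white
    return linear ** (1.0 / GAMMA) * LEVELS

# Total encodable range: the deepest nonzero code sits gamma*log2(LEVELS)
# stops below white -- about 35 stops for gamma 2.2.
deepest = GAMMA * math.log2(LEVELS)
print(f"range of a gamma-{GAMMA} 16-bit encoding: {deepest:.1f} stops")

# Quantization 12 EV below white (the 'pushed 12EV' claim): the relative
# linear step per code value is approximately gamma / code.
c = code_for(12)
rel_step = GAMMA / c
print(f"code at -12 EV: {c:.0f}; relative step per code: {rel_step:.2%}")
```

Twelve stops below clipping there are still almost 1,500 code values in reserve, and the per-code step is a small fraction of a percent, which is consistent with being able to push such a file 12 EV without visible posterization.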
Compressing that wide dynamic range to fit naturally on a medium such as monitor or print is the problem.

I would add this is a problem that will never have a 100% satisfactory solution, just different approaches closer to the ideal goal, and always subject to the user's subjective opinion.
It's a problem that requires skill in image processing, as well as sophistication of software.
I think you're wrong on almost all of this.

I don't know him. Do his images look like they were taken at any random place at any random time, or does it look like he has carefully chosen time, place and camera settings to make a visually pleasing image?
Why should art be something other than what you can see with your eyes? Jeff Wall takes high-resolution, very naturalistic photos of scenes that he creates much as a movie director does, but what you get in the photo is exactly what was in front of the camera. The art is in the creation, not in what the camera does.
Nobody said everything non-realistic is bad. You've set up and knocked down a straw man. If some non-realistic photography is good, you can still dismiss other non-realistic photography as bad, even for no other reason than it's non-realistic. The question is, does the work succeed in its own terms? If somebody says, "We used HDR to increase realism in this photo," and they didn't increase realism, then they failed in their own terms. Sometimes, that's hard to tell, but it usually isn't.

You are mixing arguments here. If "lack of realism" is a valid argument against some art it should be a valid argument against all art. If the critique is that it "is not succeeding in its own terms", then that is the argument that you should use.
I would add this is a problem that will never have a 100% satisfactory solution, just different approaches closer to the ideal goal, and always subject to the user's subjective opinion.

Just like camera sensor DR is being improved, I believe that display DR is being worked on. I don't know about paper.
Bracketing will always mean an advantage in minimising noise and having greater tonal richness (BTW no need of 32-bit floating point formats for that, a 16-bit integer with gamma can encode 99,99% real world HDR scenes. Try to download this TIFF file: superhdr.tif (http://www.guillermoluijk.com/download/superhdr.tif) that can be pushed 12EV without noise or posterization).
Perhaps we are only culturally trained into thinking that clipped whites and blacks (and occasionally some compression in the middle) is the most natural way of solving this problem, while some other society conceivably could have convinced themselves into thinking that heavy tone-mapping was most natural. If digital processing was invented before film (or canvas), things could have turned out very differently.
The benefits of encoding in a 32-bit floating point space might not be so keenly felt in the higher tones. But consider the 2-3 bit quantization for the lower tones in a single shot capture. The color palette collapses into dither as you go lower. But if you bracket and move to HDR space, you can expand that palette for purposes of post processing, and then decide where you want to map it on the tonal scale without significant loss of fidelity.
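To make the idea of merging brackets into a common radiance space concrete, here is a minimal merge sketch. This is my own illustration, not any particular program's algorithm; it assumes linear sensor data and ignores alignment, deghosting and response-curve recovery:

```python
import numpy as np

def merge_to_hdr(frames, exposure_times):
    """Merge bracketed linear exposures (pixel values in [0, 1]) into one
    floating-point radiance map. Each frame is scaled back by its exposure
    time, and a hat-shaped weight discounts pixels near clipping or near
    the noise floor, where a single frame carries little information."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(frames, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # 0 at black/white, 1 at mid-grey
        acc += w * img / t                 # back to a common radiance scale
        wsum += w
    return acc / np.maximum(wsum, 1e-12)

# Synthetic check: one scene seen at two shutter speeds; the long exposure
# clips the brightest pixel, the short one keeps it.
radiance = np.array([0.02, 0.2, 0.8])
short = np.clip(radiance * 1.0, 0.0, 1.0)
long_ = np.clip(radiance * 4.0, 0.0, 1.0)
hdr = merge_to_hdr([short, long_], [1.0, 4.0])  # recovers the radiance
```

The point is that the merged values live on an absolute radiance scale, not between the black and white points of any single capture, which is exactly the conceptual shift being described.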
Perhaps we are only culturally trained into thinking that clipped whites and blacks (and occasionally some compression in the middle) is the most natural way of solving this problem, while some other society conceivably could have convinced themselves into thinking that heavy tone-mapping was most natural. If digital processing was invented before film (or canvas), things could have turned out very differently.
While that's an intriguing proposition, it amounts to not much more than mental masturbation.

For something to be mental masturbation, I would have to do it in solitude, and not on a public forum, I think? :-)
Just like camera sensor DR is being improved, I believe that display DR is being worked on. I don't know about paper.

About paper, and any display relying on reflected light rather than transmitted or emitted, I am fairly sure that the brightness range displayed will stay well below what even the humblest SLR photosites are capable of recording. For one thing, the lowest reflectivity of any natural substance is about 2%, so the range from that to perfect 100% reflectivity is only about 50:1, or under 6 stops. Short of exotica like printing black with carbon fiber nanotube material (as in NASA's new super-black coating for flare control in telescope lenses), even 8 stops is out of reach of prints.
My point was that "HDR"*) is controversial among photographers. Some think that it is the best thing since sliced bread, while others think that it is horrible. Some in the latter category will claim that HDR looks unrealistic (implicitly saying that regular LDR looks realistic). I don't think anyone can argue from mathematics that one or the other is more similar to the original - HDR preserves some aspects of the true scene, while regular LDR preserves other aspects of the true scene. So we are left with arguing what "looks more similar to me". We cannot throw out 100 years of cultural baggage instantly, but culture may change in years (while the human visual system may need 100 generations to change significantly). Therefore, the answer to my "mental masturbation" could tell us whether HDR may be the accepted norm in 10 or 20 years, or whether it will be a quickly passing fad.
reflectivity is only about 50:1, or under 6 stops.
It's pretty common to have tonal inversions in these types of images, [...]

One may find some tonal inversions (or shifts at least) in human vision too, see the well-known checkerboard :
I'm not sure everyone realizes that the HDR technique involves a move into the space of absolute magnitudes, and away from relative white-black point of a single capture. This is a conceptual shift. I think some here are carrying over the assumption that HDR is just another tool for doing LDR, but the conceptual shift is more significant.
Sure, if sensors could capture 16 or 18 stops of brightness with absolute perfection in real world (as opposed to the lab) conditions, it might reduce (although not eliminate) the need for HDR. But if ifs and buts were candies and nuts we'd all have a Merry Christmas too. The fact is cameras can't do that and while some may say it's inevitable - and it may be - my bet is it won't happen in the next 5 years, so until then we use the tools we have at hand to the best of our abilities.
I don't know him. Do his images look like they were taken at any random place at any random time, or does it look like he has carefully chosen time, place and camera settings to make a visually pleasing image?
JC: Who knows? And what difference would that make?
If he in any way is "putting his soul" into his image, I would say that that could detract from the realism but add to the artistic value.
JC: It could detract from the realism, but add to the artistic value, but then again, maybe not. A person with an inane vision could put his soul into a work and have it come out...inane.
BTW, do you think that art should be valued from the end-result alone or does knowledge of the process add/subtract to its value?
JC: Could be either one.
If I show you an amazing image that blows your socks off (purely hypothetically speaking), would you be any less impressed if I told you I had made it purely in Photoshop?
JC: Probably. But that's just me. Other people might regard it as great art.
Or is the ideal that one should wait for weeks in a cold, deserted place waiting for "just the right light" and then capture that magic moment right before the batteries run out and one is tragically eaten by a bear?
JC: I don't think there is an ideal.
You are mixing arguments here. If "lack of realism" is a valid argument against some art it should be a valid argument against all art.
JC: Really? If an argument against one woman is valid, is that an argument against all women? Frankly, this suggestion makes no sense at all. I'd heap further ridicule on it, but that would take too much time.
If the critique is that it "is not succeeding in its own terms", then that is the argument that you should use.
JC: That is more or less the argument that I use, except that even if it does succeed in its own terms, it may not be art. My cat snapshots succeed in their own terms, but they remain cat snapshots. But if someone takes a stab at producing art, and the effort fails in its own terms, then it probably isn't high art.
Your second statement seems irrelevant to what I said.
JC: I would disagree.
-h
One may find some tonal inversions (or shifts at least) in human vision too, see the well-known checkerboard :
(http://www.popularscience.co.uk/features/checkershadow-AB.jpg)
If some people like the stylized HDR look with aggressive "detail enhancement" that's fine. Different people have different tastes; and when it comes to art anything goes, so I certainly don't think that a naturalistic approach is the only valid one. I can appreciate truly well-done stylized HDR, even if it's not to my personal taste. The problem is, it's extremely rare. The vast majority of stylized HDR imagery is full of ugly artifacts that I just can't see past, and it boggles my mind that so many people don't seem to mind the ugly halos, color shifts, etc. Hopefully over time the tools will get better and this will improve; but right now I would say that the "bad" HDR outweighs the good by at least 10:1. So for a lot of people, this pretty much spoils the whole genre.
Which was my original point regarding Uwe advocating and teaching HDR images that look surreal (which is how I respond, rather than saying "stylized", which somehow kinda lets people off the hook).
Compressing a high contrast scene into a printable dynamic range is indeed difficult. But it can be done without all the surreal downside. I would encourage people to actually learn how to do it so it isn't glaringly obvious. Which I'm not sure Uwe's tutorial does...
Actually I think you can argue mathematically that the heavily stylized HDR look tends to be less realistic. It's pretty common to have tonal inversions in these types of images, where for instance the shadowed foreground is actually brighter than the daytime sky, just to name one very common example. So I don't really think you can argue that folks think this stuff looks unnatural just because film came first. Maybe if the real world looked like the one in Avatar this argument might hold some water...

There are errors in tonemapped images, yes. Do you think that blown-out highlights and clipped blacks are a part of what you normally see in a scene? So there are errors in regular images as well. I do not see any attempt at bringing out mathematical tools to support your statement, though.
JC: Who knows? And what difference would that make?

I am supporting my claim that I expect art to be something different than taking a snapshot of reality. That was why you started this discussion with me in the first place, was it not?
If you are saying that "HDR is crap because it is not realistic", then you are saying that not being realistic makes it crap. If you at another stage claim that some other imagery is great even though it is not realistic, then you are not honest in your arguments.

You are mixing arguments here. If "lack of realism" is a valid argument against some art it should be a valid argument against all art.

JC: Really? If an argument against one woman is valid, is that an argument against all women? Frankly, this suggestion makes no sense at all. I'd heap further ridicule on it, but that would take too much time.
Your second statement seems irrelevant to what I said.

JC: I would disagree.
It's not possible, Steve. The reason it's not possible is because every image is different. There are no 'set smoothing to 25, brightness to 50, saturation to 30, etc.' formulae for more realistic images. There's a learning curve involved. It's also not possible because every software app. is different. Time needs to be spent learning the software, how it works, what the various tonemapping operators do, how they work independently and how each affects the others in combination.
It is possible to make general statements about how different operators impact an image and within those general statements one can get an idea of where to start to get a more 'natural' result. But that's really as far as it can go. That's what I've done in my HDR tutorial. I also have three presets people can download and use that offer starting points for three different 'looks' - a slightly unreal, sort of graphic illustration look, a natural look and a hyper-grunge look.
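As one concrete illustration of what a single global tonemapping operator does (this is Reinhard's simple curve, stated here as a generic example, not one of the presets mentioned above), note how smoothly it compresses an arbitrarily large brightness range into the displayable range:

```python
import numpy as np

def reinhard_global(luminance, white_point=None):
    """Reinhard's simple global tonemapping operator: L -> L/(1+L),
    which compresses any luminance range smoothly into [0, 1).
    With a white point Lw, the extended form L*(1+L/Lw**2)/(1+L)
    lets a chosen luminance map exactly to 1.0."""
    L = np.asarray(luminance, dtype=np.float64)
    if white_point is None:
        return L / (1.0 + L)
    return L * (1.0 + L / white_point**2) / (1.0 + L)

# A 14-stop scene (mid-grey at 1.0) still lands inside the display range:
scene = np.array([2.0**-7, 1.0, 2.0**7])
print(reinhard_global(scene))
```

A single formula like this has no user-tunable "look" at all, which is part of why real tonemapping software exposes so many interacting controls, and why no one preset can suit every image.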
All true and more. It's a little like re-lighting and re-taking the picture.
I completely agree with Bob here. If you are looking for a general formula that can be applied to make an image look natural, then you might as well just shoot in jpeg mode and let the camera apply its own built-in adjustments.
A number of different exposures which have been merged to HDR, becomes a single image which needs to be adjusted as any single RAW image needs to be adjusted during and after conversion. It's rare that an image can look exactly right with just a click on the 'auto' button in ACR. If it does, it will still need further adjustment in 'proof mode' before printing.
If the result doesn't look satisfactory, for whatever reason, then the photographer is to blame (or the person who processed the image). Don't blame the tool. Photoshop is an amazing tool for image adjustment.
A bit disappointed to read this:

Guillermo! I'm surprised to hear this from you, as someone so concerned about signal optimization.
It makes me think Steinmueller didn't really get that bracketing for HDR will soon be unnecessary, and that it is not part of the definition of HDR itself. The only reason we have today for bracketing HDR scenes is that sensors are still too noisy to capture in a single shot the entire DR of many real world scenes.
I'm not sure everyone realizes that the HDR technique involves a move into the space of absolute magnitudes, and away from relative white-black point of a single capture. This is a conceptual shift. I think some here are carrying over the assumption that HDR is just another tool for doing LDR, but the conceptual shift is more significant.
This is an interesting thread and I would like to understand this potential conceptual shift.

The HDR file is a kind of special case. At first glance, it's just a TIF file with 32 bits. But the data represent something other than pixels. Think of this as a dataset of measurements, out to a good number of decimal places.
Until now I thought the HDR space was just a much larger space than an LDR space (and hence the need for tonemapping), but in case I am missing something important, is it possible to explain a bit about this space of absolute magnitudes compared to a relative black-white point?
I put together a 'wish list' for Adobe on my blog and one of the things I wished for was the ability to selectively tonemap different areas of an image (without tonemapping multiple times and blending different tonemap versions after the fact) which would then really start to take us to the ability to relight a scene.
For HDR to show its full potential we need, at least, monitors that can display the entire brightness range so we can get a feel for what our true starting point is and where we want to take it from there.
You'd have to think there are some pretty smart people doing the programming for these HDR applications, so if they're having difficulty getting deghosting processes to work well, maybe it's a little more difficult than you want to make it out to be. If it's not, then perhaps you could create a software app. that's useable by people and solve everyone's problems. And make yourself wealthy in the process. ::)
PS: BTW, the last Sony sensor used in the Pentax K5 and Nikon D7000 can capture in a single shot 11 stops of DR with acceptable noise (SNR=12dB criteria), and this technology translated to a FF sensor would mean no less than 12 effective stops. So your 8 stops figure is out of date with today's technology.
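The single-shot DR figures being argued about here come from the ratio of a sensor's brightest recordable signal to its noise floor. A toy calculation of that ratio in stops (the electron counts below are hypothetical, and this uses the loose SNR = 1 "engineering" criterion rather than the stricter 12 dB threshold cited above, which gives lower numbers):

```python
import math

def engineering_dr_stops(full_well_e, read_noise_e):
    """Single-shot 'engineering' dynamic range in stops: full-well
    capacity divided by read noise (SNR = 1 at the bottom). Stricter
    criteria, such as an SNR = 12 dB floor, yield lower figures."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical electron counts, for illustration only:
modern = engineering_dr_stops(full_well_e=40000, read_noise_e=3)
older = engineering_dr_stops(full_well_e=40000, read_noise_e=25)
print(f"low-noise sensor: {modern:.1f} stops, older sensor: {older:.1f} stops")
```

This makes the shape of the disagreement clear: the same sensor can be quoted at quite different "stops of DR" depending on which noise criterion is applied, so figures from different sources are not directly comparable.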
As I said, I'm talking real world shooting conditions, not a lab bench test. When those types of images start to be available for evaluation and comparison, and when comparisons of those 'real' images to other cameras are done, then I'll start to believe the hype. Until then..... And that's two cameras. Others still don't make it that far.

Sensor performance is the same in the real world as in the lab, basically because labs are located in the real world. I have extensively used my Canon 350D for shooting interiors, and it never performed worse than when measuring its DR in the lab (if my room at home can be considered a lab). The 350D was an APS-C sensor camera launched at the beginning of 2005, with an effective DR of 8 stops.
I'm not going to get into a pissing contest with you, GL.

You do well. Next time you decide to be ironic to someone, make sure you have the needed resources.
But if a monitor could display something like 15 or 16 stops of brightness, that would be far better than what we have now and would, I'd think, cover a (large) majority of the HDR images being created.
The retina has a static contrast ratio of around 100:1 (about 6½ f-stops). As soon as the eye moves (saccades), it re-adjusts its exposure both chemically and geometrically, by adjusting the iris which regulates the size of the pupil. Initial dark adaptation takes place in approximately four seconds of profound, uninterrupted darkness; full adaptation through adjustments in retinal chemistry (the Purkinje effect) is mostly complete in thirty minutes. Hence, a dynamic contrast ratio of about 1,000,000:1 (about 20 f-stops) is possible. The process is nonlinear and multifaceted, so an interruption by light merely starts the adaptation process over again.
Bob,

If the camera-monitor reproduction chain were able to reproduce the original dynamic range of the scene, and the size/distance to the monitor were similar to the angle you would have observed at the scene (or through binoculars mimicking the tele lens used, if any), then the stimuli in the room should be similar to "being there". Of course, some real-life scenes have a DR that makes them visually hurtful, or just not pretty.
I'm having trouble getting my mind around this. It would seem to me that such a monitor, capable of displaying 15 or 16 stops of DR, would have to be so bright in order to display the brightest parts of an HDR capture, it would dazzle and hurt the eyes, unless the monitor were the size of a wall so that the eye could exclude the brighter parts as it focussed attention on the darker parts.
My Panasonic plasma HDTV claims to have a contrast ratio of 2 million to 1. How many stops of DR is that? About 21?

I think that plasmas have a fantastic DR due to being able to fully turn a pixel 'off'. I also think that they cannot reproduce very dark grays just above that 'black', because they are pulse-modulated, and a brightness slightly above 'off' would be perceived as flickering.
I see the latest Panasonic models claim a CR of 5,000,000:1, 10/12 bit color depth, and 6,144 steps of gradation. Can anyone decipher these figures for me?
I think maybe in order to appreciate the maximum dynamic range of one's display, the viewing room needs to be essentially a 'black box', i.e. all walls, floor and ceiling painted non-reflective matte black.
Bob,
I'm having trouble getting my mind around this. It would seem to me that such a monitor, capable of displaying 15 or 16 stops of DR, would have to be so bright in order to display the brightest parts of an HDR capture, it would dazzle and hurt the eyes, unless the monitor were the size of a wall so that the eye could exclude the brighter parts as it focussed attention on the darker parts.
However, if the monitor were the size of a wall, the room would be so brightly lit that the shadows would appear like midtones.
My Panasonic plasma HDTV claims to have a contrast ratio of 2 million to 1. How many stops of DR is that? About 21?
I see the latest Panasonic models claim a CR of 5,000,000:1, 10/12 bit color depth, and 6,144 steps of gradation. Can anyone decipher these figures for me?
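For what it's worth, converting these marketing numbers is simple arithmetic: the DR in stops is just the base-2 logarithm of the contrast ratio. A quick sketch in Python (the figures are the ones quoted above; whether the panels actually achieve them is another matter):

```python
import math

def ratio_to_stops(contrast_ratio):
    """Convert a linear contrast ratio (e.g. 2_000_000:1) to f-stops."""
    return math.log2(contrast_ratio)

print(round(ratio_to_stops(2_000_000), 1))  # ~20.9 stops
print(round(ratio_to_stops(5_000_000), 1))  # ~22.3 stops
# 6,144 gradation steps is log2(6144) ~ 12.6 bits of tonal resolution
print(round(math.log2(6144), 1))
```

So 2,000,000:1 is indeed about 21 stops, as guessed, and 6,144 steps of gradation is roughly 12.6 bits, which lines up with the quoted 10/12 bit color depth.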
PS: BTW, the latest Sony sensor, used in the Pentax K5 and Nikon D7000, can capture 11 stops of DR in a single shot with acceptable noise (SNR=12dB criterion), and this technology translated to a FF sensor would mean no less than 12 effective stops. So your 8 stops figure is out of date with today's technology... (http://www.guillermoluijk.com/article/perfect/dxomark.gif)
Interesting plot and lesson about the evolution of capture DR.
So I'd expect that we are going to see more and more sliders for ('HDR') tone mapping in the Raw converter.
Some of it will be truly appreciated.
This is a non-HDR image developed in Lightroom: http://echophoto.smugmug.com/Special-methods/HDR/HDR/13306153_DcZHj#1002864735_dkeci
and this is an HDR image using Merge to HDR in PS CS5:
http://echophoto.smugmug.com/Special-methods/HDR/HDR/13306153_DcZHj#966794997_wt4h6
Best regards
Erik
If the camera-monitor reproduction chain were able to reproduce the original dynamic range of the scene, and the size/distance to the monitor were similar to the angle you would have observed at the scene (or through binoculars mimicking the tele lens used, if any), then the stimuli in the room should be similar to "being there". Of course, some real-life scenes have a DR that makes them visually hurtful, or just not pretty.
Would not a display technology that did not reflect anything from the room be enough?
In controlled environments, such as darkened rooms, or rooms where all light sources are diffused, glossy displays create more saturated colors, deeper blacks, brighter whites, and are sharper than matte displays. This is why supporters of glossy screens consider these types of displays more appropriate for viewing photographs and watching films. Also, in extremely bright conditions where no direct light is facing the screen, such as outdoors, glossy displays can become more readable than matte displays because they don't disperse the light around the screen (which would render a matte screen washed out).
Lightroom can do a pretty decent job.
Guillermo,
how do you get that figure of 11 stops? From what I have read, that sensor has a full well capacity of about 30,000 e-, so 11 stops down is a signal of about 16 e-, and then shot noise is 4 e- RMS, limiting SNR to 4:1. Is that figure of 12dB (16:1) computed only with respect to dark noise and read noise, not photon shot noise?
To put it another way, that target of 12dB or 16:1 SNR (which seems reasonable to me for tolerable shadow noise) requires a signal of at least 2^8 = 256 photons detected, even if the noise generated within the camera is negligible, and to have that photon count 11 stops below maximum signal requires the ability to count up to 2^8 * 2^11 = 2^19 photons, a bit over 500,000. With a well capacity of about 32K, or 2^15, the limit is seven stops above that 12dB threshold.
P. S. [Added later] It just occurred to me that you might be using the strange "power referred" use of dB, so a factor of two in SNR is 6dB. Then the numbers are consistent, with 12dB meaning 4:1 SNR. But I am not sure how good a local SNR as low as 4:1 can look even in very dark parts of the displayed image.
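To make the two dB conventions in this exchange concrete, here is a minimal sketch assuming a pure shot-noise limit (no read or dark noise) and a hypothetical 32K e- full well; the function and parameter names are mine, not from any measurement tool:

```python
import math

def shot_limited_dr(full_well, snr_db, db_per_factor_of_ten):
    """Shot-noise-limited DR in stops for a minimum-SNR criterion.

    With shot noise only, SNR = sqrt(N), so a linear SNR of s requires
    N = s**2 detected photons. db_per_factor_of_ten is 10 if a factor
    of two in SNR counts as ~3 dB, or 20 if it counts as ~6 dB.
    """
    linear_snr = 10 ** (snr_db / db_per_factor_of_ten)
    photon_floor = linear_snr ** 2  # photons needed to hit the SNR target
    return math.log2(full_well / photon_floor)

# 12 dB read as 16:1 (3 dB per doubling): floor ~256 photons -> ~7 stops
print(round(shot_limited_dr(32768, 12, 10), 1))
# 12 dB read as ~4:1 (6 dB per doubling): floor ~16 photons -> ~11 stops
print(round(shot_limited_dr(32768, 12, 20), 1))
```

Which reading of "12dB" is intended changes the headline DR by four stops, which is exactly the discrepancy being discussed here.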
Interesting plot and lesson about the evolution of capture DR.
So I'd expect that we are going to see more and more sliders for ('HDR') tone mapping in the Raw converter.
Some of it will be truly appreciated.
Fill Light is pretty cool.
Recovery may leave room for improvement (http://imagingpro.wordpress.com/2008/12/03/expanding-the-dynamic-range-of-a-single-raw-file/).
Peter
--
Supposing we display that HDR image on a monitor which has a DR capability of 16 stops. What would be the purpose if the eye can only encompass a DR of something between 6 1/2 and 10 f-stops? Get my point?

The eye
The human eye can function from very dark to very bright levels of light; its sensing capabilities reach across nine orders of magnitude. This means that the brightest and the darkest light signal that the eye can sense are a factor of roughly one billion apart. However, in any given moment of time, the eye can only sense a contrast ratio of one thousand. What enables the wider reach is that the eye adapts its definition of what is black. The light level that is interpreted as "black" can be shifted across six orders of magnitude, a factor of one million.
The eye takes approximately 20–30 minutes to fully adapt from bright sunlight to complete darkness, becoming ten thousand to one million times more sensitive than at full daylight. In this process, the eye's perception of color changes as well. However, it takes only approximately five minutes for the eye to adapt to bright sunlight from darkness. This is because the cones gain sensitivity first, during the initial five minutes in the dark, but the rods take over after five or more minutes.
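To relate those orders of magnitude to the f-stop figures used elsewhere in this thread: one decade is log2(10), about 3.32 stops. A quick check in Python:

```python
import math

STOPS_PER_DECADE = math.log2(10)  # ~3.32 f-stops per order of magnitude

print(round(9 * STOPS_PER_DECADE, 1))  # 9 decades (total reach) ~ 29.9 stops
print(round(6 * STOPS_PER_DECADE, 1))  # 6 decades (black-point shift) ~ 19.9 stops
print(round(math.log2(1000), 1))       # 1000:1 instantaneous ratio ~ 10.0 stops
```

So the eye's nine-decade total reach is about 30 stops, the black-point shift about 20 stops, and the 1000:1 instantaneous ratio about 10 stops, consistent with the figures quoted earlier in the thread.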
I was simply striving for the simple goal of reproducing reality. If real scenes can have a large dynamic range, I would like all of that to be perfectly reproduced end-to-end. If we ever get there, we will see if it is worth it. I am certain that some scenes contain a large DR that I cannot reproduce using current non-HDR capture and display, but that I can make sense of when "being there". This suggests to me the potential in a high-DR reproduction system.
The answer to my question is probably "You know nothing about how sensors work", but on the slight chance that this is wrong, may I ask this:

I really feel that something like this is coming, and that there is a whole class of dynamic capture methods that could be deployed, including things such as this.
Instead of trying to lower noise and other tricks to increase DR, why can't photosites be emptied when they reach saturation and then refilled during a single exposure? As long as you keep count of how many times the reset is done (say by setting a flag), you can calculate the exposure each photosite receives by adding the charge remaining after the last reset to the number of resets times the full-well value. This ought to be capable of dealing effectively with any subject brightness range. And we wouldn't have to worry about shadow noise, because the sensor could easily handle 5 stops of overexposure!
Go on, tell me why this is impossible.
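The counting scheme proposed above is easy to sketch in software. This is a toy, noiseless simulation of the idea, not a description of any real sensor; the full-well value and flux numbers are made up:

```python
FULL_WELL = 1000  # hypothetical saturation level, in electrons

def expose(photon_flux, exposure_time):
    """Simulate one photosite that resets itself at saturation.

    Returns (resets, residual): how many times the well was emptied
    and the charge left at readout. Total exposure is reconstructed
    as resets * FULL_WELL + residual.
    """
    total = photon_flux * exposure_time  # idealized, noiseless signal
    resets, residual = divmod(total, FULL_WELL)
    return int(resets), residual

resets, residual = expose(photon_flux=12_500, exposure_time=1.0)
reconstructed = resets * FULL_WELL + residual
print(resets, residual, reconstructed)  # 12 500.0 12500.0
```

The arithmetic is trivial; the hard parts would be in the silicon: each pixel needs a counter and reset logic, each reset presumably injects its own noise, and the comparator has to act mid-exposure, which may be why similar ideas have so far appeared mostly in specialty sensors.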
However CS5 has offered an impressive solution in HDR-2, with its 'Remove Ghosts' feature. This feature must be very useful for Psychics and Spiritualist Mediums who wish they could stop seeing ghosts. ;D
:-)
Here's a scene of the living room of a friend I'm visiting over the Christmas/New Year break, and crops of the processed HDR images, with and without ghost removal.
Ah, you don't have snow?!
Now I ask you, are these images surrealistic? Untidy, maybe! But surrealistic?... no!
I'm very surprised and very impressed with the ghost removal result in this particular example. In order to reduce the possibility of movement as much as possible, I used ISO 1600 for these shots. Exposures varied from 1/3000th to 1/10th, and the shadows are still noisy. At the base ISO of the D700, the maximum exposure would have been a full second, improving SNR in the dark parts significantly but probably causing too much blur for the 'remove ghosts' feature to handle.
Comments below,
Ah, you don't have snow?!
Erik
Instead of trying to lower noise and other tricks to increase DR, why can't photosites be emptied when they reach saturation and then refilled during a single exposure. ...

I think it could happen. A few ideas similar to this are being tried, but I have only heard of them being used in some security camera sensors.
To reproduce reality you would need a 3-D monitor or 3-D print for a start.

There may be several aspects to reality reproduction. I don't see why the lack of stereoscopy should be an argument against striving for realistic dynamic range.
However, the problem of insufficient dynamic range in the reproduction chain has already been solved for static subjects, using exposure bracketing.

Bracketing only solves the capture problem, not the entire reproduction chain.
Having captured the scene with its full dynamic range, the problem is not the lack of a monitor which can display that full dynamic range, but the lack of skill and technique in image processing needed to compress that captured dynamic range to something that matches the compressed 'field of view' of the print or monitor, and the compressed dynamic range of the eye, which is reduced to a more or less fixed gaze when viewing that reproduction.

You are assuming that the monitor covers only a small field of view of the viewer. I don't think that your assumption is generally true. I went to the movies yesterday, and the big screen covered a substantial part of my FOV.
If one compresses the field of view in the reproduction, as any monitor must do when displaying any scene taken with even a moderately wide lens, it is appropriate in the interests of realism to compress the dynamic range, because the eye, when viewing the reproduction, does not have the opportunity to dilate and contract to the same degree as it did when viewing the original scene.

If that function is needed, it should be applied automatically, in the screen, as that is often the only component that has any idea of how large the viewer's FOV is. Large displays, projectors, or people sitting with their nose up against the monitor/paper should be able to cover close to 180 degrees of their view (with some artifacts).
The 5,000,000:1 contrast ratio of a modern plasma screen should be sufficient, even allowing for a little marketing hyperbole ;D .

I am sceptical about all marketing.
Plasmas are usually limited to 2 megapixels. That may be an issue for critical applications if the image is to be seen very large.
The black point may be affected by incident light. In other words, your room may have to be painted black to come near the quoted DR.
Further, I believe that the maximum brightness of plasmas is not all that high, giving further problems with other light sources, and possibly issues if the absolute brightness of a scene has perceptual relevance.
I have been told that plasmas can produce very black blacks, but that there is a "hole" in the tonal range between the blackest level and the next blackest. Supposedly this is connected to plasmas inherently being PWM devices of limited switching speed: turning a pixel "off" is easy, but turning it "nearly off" means having one bright cycle and many dark cycles, something that causes flickering. If they cannot produce a perceptually uniform gray scale from black to white, then all the DR in the world may not make them good for this application.
Instead of trying to lower noise and other tricks to increase DR, why can't photosites be emptied when they reach saturation and then refilled during a single exposure? Go on, tell me why this is impossible.