Luminous Landscape Forum

Equipment & Techniques => Digital Cameras & Shooting Techniques => Topic started by: BJL on December 25, 2010, 03:36:21 pm

Title: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: BJL on December 25, 2010, 03:36:21 pm
Given the recent discussions about dynamic range and handling scenes of large subject brightness range, some folks might be interested in a new series of articles by Uwe Steinmuller of Digital Outback Photo (http://www.outbackphoto.com), published at DPR. Here is part 1: http://www.dpreview.com/learn/?/Guides/The_art_of_HDR_Photography_part_1_01.htm
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Ray on December 25, 2010, 11:19:27 pm
Given the recent discussions about dynamic range and handling scenes of large subject brightness range, some folks might be interested in a new series of articles by Uwe Steinmuller of Digital Outback Photo (http://www.outbackphoto.com), published at DPR. Here is part 1: http://www.dpreview.com/learn/?/Guides/The_art_of_HDR_Photography_part_1_01.htm


I read that, BJL, and mostly agree with what has been written so far, except perhaps a minor point about the responsiveness of the eye's pupil to changing levels of light, in the following extract.

Quote
Human vision works in quite a different way to our cameras. We all know that our eyes adapt to scenes; when it gets darker our pupils open, and when it gets brighter they close. This process often takes quite a while (it's not instant). It is said that our eyes can see a Dynamic Range of 10 f-stops (1:1024) without adapting the pupils and overall about 24 f-stops.

I think in most situations, the pupils respond to changing light conditions almost instantly (but not literally instantly, of course).

One can confirm this for oneself by gazing out of the window and focussing on a bright cloud, then suddenly shifting one's gaze to a dark corner of the living room, which might reflect perhaps 15 EV less light, or more; I'm just guessing. The pupil of the eye opens up so rapidly it would be impossible to time it with a stopwatch.

An example of a situation where this process might take quite a while would be coming out of a darkened cinema into bright sunlight. It will then take a while for the eyes to adjust to such an extreme change in the brightness range.

It is precisely because of the 'almost' instantaneous nature of the eye's aperture changes that the limitations of the DR capability of all cameras are so obvious.


Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Schewe on December 26, 2010, 12:25:04 am
Given the recent discussions about dynamic range and handling scenes of large subject brightness range, some folks might be interested in a new series of articles by Uwe Steinmuller of Digital Outback Photo...

Yeah, ok...but ya know, if an image looks surreal, (as in an obviously condensed tonal range) I'm not sure that is particularly interesting (nor useful) for people who want a reasonably realistic representation of the original scene...

Most HDR type stuff looks phony...and while it may be trendy, it's not really all that desirable, is it? Really?

Just saying...

It's ok to look at something and say it looks like crap, if it is...
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: JR on December 26, 2010, 12:45:18 am

It's a matter of taste, right? I don't like extreme HDR as seen in the article but a lot of folks do.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Ray on December 26, 2010, 08:42:43 am
It's a matter of taste, right? I don't like extreme HDR as seen in the article but a lot of folks do.

It's not only a matter of taste but a matter of skill in image processing; that is, the ability to adjust the tonality and hues to taste so that the image looks natural. This is something I would have thought Jeff Schewe would have no trouble doing.

It can be a lot of work for those of us who are less skilled, which is why I would prefer a camera with as high a DR as possible to reduce the number of occasions when I might consider merging to HDR necessary.

In my opinion, the purpose of merging to HDR is not to create a surrealistic image but a natural image which is more representative of what the eye saw in the scene at the time the shots were taken.

The result should look like a processed ETTR shot, perhaps the middle exposure or the least exposure of 3 bracketed exposures, but with cleaner shadows as a result of the merger of the overexposed images.
 
Here's an example comparing an HDR merger of 3 exposures with the least exposure of the 3 bracketed shots, which has been processed separately. You can see that the deep shadows are much cleaner in the HDR image, as well as the moderate shadows. The HDR image is a higher quality image.

I haven't bothered to get the tonality and color hues exactly matching in both images. I would do a better job if I intended to print this image. As it stands, the deep shadows are not as clean as I'd like. I feel I should have taken at least one additional exposure 1 EV greater.

The lens, the Sigma 15-30 on the 5D, is also not good at the edges and corners, so I won't be spending too much time on this image. However, I was surprised that an additional 4 stops of dynamic range was still not quite enough, in my view, for this high contrast scene.
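For anyone who wants to experiment with the simpler "digital blending" route rather than a dedicated HDR program, here is a minimal sketch of a luminosity-masked blend of two exposures. It only illustrates the general idea, not the workflow used for the images above; the file names, the 8-bit assumption and the blur radius are placeholders.

Code:
# A minimal sketch of luminosity-masked exposure blending, assuming two already-aligned,
# 8-bit, gamma-encoded frames: "dark.tif" exposed for the highlights and "bright.tif"
# exposed ~2 EV more for the shadows. File names and the blur radius are illustrative.
import numpy as np
import imageio.v3 as iio
from scipy.ndimage import gaussian_filter

dark = iio.imread("dark.tif").astype(np.float64) / 255.0      # highlight-safe frame
bright = iio.imread("bright.tif").astype(np.float64) / 255.0  # clean-shadow frame

# The dark frame's luminosity drives the mask: where the scene is bright we keep the
# dark frame (protecting highlights); where it is dark we take the brighter frame.
lum = 0.2126 * dark[..., 0] + 0.7152 * dark[..., 1] + 0.0722 * dark[..., 2]
mask = gaussian_filter(lum, sigma=25.0)            # soften transitions to avoid halos
mask = np.clip(mask, 0.0, 1.0)[..., np.newaxis]    # broadcast over the RGB channels

blend = mask * dark + (1.0 - mask) * bright
iio.imwrite("blend.tif", (np.clip(blend, 0.0, 1.0) * 255.0).round().astype(np.uint8))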
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: JR on December 26, 2010, 08:59:24 am
It's not only a matter of taste but a matter of skill in image processing
Here's an example

G'Day Ray!

Of course it is a matter of skills. Totally agree. Steinmuller and his wife are very good at this. However, I find some of their HDR images a little too extreme for my taste. They do not look real to me. But as I said, it is a matter of taste. I don't agree with Schewe that this is crap. It's like art: I don't like Baroque but I like Impressionists like Claude Monet. It's the same with HDR.

Nice examples you put up. Waiting to see some good examples from your D7000  ;)

- John
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: feppe on December 26, 2010, 09:03:08 am
Yeah, ok...but ya know, if an image looks surreal, (as in an obviously condensed tonal range) I'm not sure that is particularly interesting (nor useful) for people who want a reasonably realistic representation of the original scene...

Most HDR type stuff looks phony...and while it may be trendy, it's not really all that desirable, is it? Really?

Just saying...

It's ok to look at something and say it looks like crap, if it is...

I'm glad I'm not the only one... The very first photo (the arches) has all life squeezed out of it, and scrolling down, one of the originals straight out of the camera looks much better. I'm not saying it couldn't benefit from HDR or related techniques (digital blending or exposure fusion), but it's clear that the article is not written with realistic results in mind. Perhaps not surprising since it is DPR, after all.

As a side note, I'm currently reading a guidebook for Madrid, and almost all of the photos are done in HDR or emulate the look. Ewww.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Bart_van_der_Wolf on December 26, 2010, 10:18:27 am
I'm glad I'm not the only one...

Hi Feppe,

In that case the both of you ;) are alone in assuming that HDR images, and more importantly the subsequent tonemapping, are "not interesting (nor useful) for people who want a reasonably realistic representation of the original scene...".

Sure, one can (very easily) produce crappy pictures using these techniques, but one can also achieve realistic results that cannot be achieved with other techniques (unless one does time-consuming manual exposure blending/masking).

Quote
The very first photo (the arches) has all life squeezed out of it, and scrolling down, one of the originals straight out of the camera looks much better. I'm not saying it couldn't benefit from HDR or related techniques (digital blending or exposure fusion), but it's clear that the article is not written with realistic results in mind.

I somewhat agree with your observation about "the life squeezed out of it", but I wouldn't confuse one person's processing preferences with the capability to produce vastly different (more to your liking) renderings of the same base images. Tonemapping is as much an art as it is a technical skill.

Cheers,
Bart
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: stever on December 26, 2010, 02:37:33 pm
I agree with Bart - particularly the part about art.  I've read a number of articles and a couple of books with descriptions of how to get "natural" results from Photomatix, seen a few examples that looked realistic, and gotten a couple of realistic results myself.  But it always seems to be pretty much trial and error.

There's an article with examples by Tom Till in the Feb Outdoor Photography mag directed towards realistic HDR landscapes with Photomatix, and a set of guidelines which I haven't tried yet.  Unfortunately the examples don't look particularly realistic to me.

I believe it can be done, and would love to hear advice or recommended resources for predictable processing to achieve realistic HDR images.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Schewe on December 26, 2010, 06:18:02 pm
It's not only a matter of taste but a matter of skill in image processing; that is, the ability to adjust the tonality and hues to taste so that the image looks natural. This is something I would have thought Jeff Schewe would have no trouble doing.

Oh, I have no problem controlling scene dynamic range and yes I often blend multiple shots together to extend the limitations of the sensor.

But what I don't do is make images look surreal with obvious tonal manipulations and HDR telltales left in an image.

If it looks phony, it moves out of the photographic realm and into an illustration realm. Just cause modern tools make it "easy" to do something doesn't make it desirable. Actually, the same could be said for a lot of digital imaging techniques...just cause you CAN do it doesn't mean you SHOULD do it, ya know?
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Ray on December 26, 2010, 07:46:49 pm
If it looks phony, it moves out of the photographic realm and into an illustration realm. Just cause modern tools make it "easy" to do something doesn't make it desirable. Actually, the same could be said for a lot of digital imaging techniques...just cause you CAN do it doesn't mean you SHOULD do it, ya know?

I agree that because you can do it doesn't mean that you should. However, to give people like Uwe Steinmuller the benefit of the doubt, I think it's likely that sometimes the photographer may just be demonstrating what's possible with regard to increased DR and lower shadow noise, in the clearest and most obvious manner so all can see, even the untrained eye.

I think it's also the case that some folks are simply not sufficiently well-practiced in the use of the tone-mapping sliders and other controls in programs like Photomatix and Photoshop's Merge to HDR.

I recall that when Adobe first introduced the Shadows/Highlights tool (did that first appear in CS1?), I got some pretty awful results until I experimented a lot with the radius, tonal width, and amount sliders. There are a large number of different combinations in just those 3 sliders.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Ray on December 26, 2010, 07:55:30 pm
G'Day Ray!

Of course it is a matter of skills. Totally agree. Steinmuller and his wife are very good at this. However, I find some of their HDR images a little too extreme for my taste. They do not look real to me. But as I said, it is a matter of taste. I don't agree with Schewe that this is crap. It's like art: I don't like Baroque but I like Impressionists like Claude Monet. It's the same with HDR.

Nice examples you put up. Waiting to see some good examples from your D7000  ;)

- John




G'Day John,

I do most of my photography during travels to exotic locations, probably because, when I'm home, I find there are so many chores to attend to, such as slashing grass, mixing concrete, doing home renovations and improvements, processing some of the hundreds of thousands of images I have stored away on DVD discs and external hard drives, and of course arguing with various people on the internet who appear to have some wrong ideas on certain topics.  ;D

As Bart would agree, it's better to be prepared beforehand and know what sort of performance your equipment is capable of. From the images I've posted above, in conjunction with an analysis of the comparative test results at DXOMark, I can deduce that the D7000, with a single, correct ETTR shot, will produce shadows as clean as the HDR merger of 3 exposures at +/- 2 EV taken with the 5D.

The middle of those 3 exposures was very slightly overexposed, by maybe 1/3rd of a stop. The longest exposure is thus a 2.33 EV greater exposure than an ETTR.

At normalised print sizes, the D7000 has 2.74 EV greater DR than the 5D, according to DXOMark.
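To spell that deduction out (a rough back-of-the-envelope only, using just the figures quoted in this post rather than any new measurements):

Code:
# Back-of-the-envelope check of the claim above, using only the numbers quoted in this post.
d7000_dr_advantage = 2.74   # EV more print-normalised DR than the 5D, per DXOMark
bracket_extension = 2.33    # EV: how far the 5D's +2 frame sits above an ETTR exposure

# The merge improves the 5D's shadow floor by roughly the bracket extension, so a single
# D7000 ETTR frame should still come out slightly ahead:
print(d7000_dr_advantage - bracket_extension)   # ~0.41 EV in the D7000's favour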

Since there are no other image-quality parameters in which the 5D is even marginally better, despite it being full-frame (such as color sensitivity, or SNR at 18% grey), I can declare that my 5D is now truly, completely and totally redundant. (Anyone like to buy it? Have I done a good job at promotion?  ;D )

Actually, just joking. I probably want to hang onto it for sentimental reasons.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Schewe on December 26, 2010, 07:58:41 pm
I agree that because you can do it doesn't mean that you should. However, to give people like Uwe Steinmuller the benefit of the doubt, I think it's likely that sometimes the photographer may just be demonstrating what's possible with regard to increased DR and lower shadow noise, in the clearest and most obvious manner so all can see, even the untrained eye.

I know Uwe and it's quite possible he used the opening image in his article as an example, but I saw it as an example of what you SHOULD NOT do, not what you would WANT to do. It's a fine example of taking a scene and ruining it by using HDR.

Whether you use ACR's Fill Light and Highlight Recovery or Photoshop's Shadows/Highlights tool or blending by HDR or even blending a little bit of a lower exposure in the highlights of a lighter exposure, if it sucks, it sucks, ya know? I don't advocate promoting suboptimal results regardless of who you are.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Ray on December 26, 2010, 08:37:04 pm
......... but I saw it as an example of what you SHOULD NOT do, not what you would WANT to do.

I can't agree with that, Jeff. I always try to do what I WANT to do. I've never considered myself as a conformist. I would have thought you also don't consider yourself as a conformist.   ;D
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Schewe on December 26, 2010, 10:12:11 pm
I can't agree with that, Jeff.

So, you think Uwe's opening shot with the arches is a shining example of a good use of HDR? Hum...we looking at the same image bud?
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: JeffKohn on December 26, 2010, 11:58:17 pm
So, you think Uwe's opening shot with the arches is a shining example of a good use of HDR? Hum...we looking at the same image bud?
The way it's captioned, it doesn't look like he meant it to be an example of bad HDR.  If that was the intention, I think it was poorly communicated. Frankly, I'm no longer surprised when I see articles about getting natural results with HDR tonemapping in which the examples look bad. It seems to be the norm, rather than the exception.

It's funny, Tom Till has an article in the most recent issue of Outdoor Photographer about his approach to 'natural' HDR and what a revelation this workflow is after decades of struggling with the limitations of film and grad filters. I personally think he should go back to film, because the HDR photos illustrating the article are terrible.

What a lot of these HDR proponents don't seem to get is that using HDR to reduce the dynamic range between shadows and highlights is not going to change the fact that crappy light is crappy light. The problem with shooting midday with the sun directly overhead is not that our cameras can't handle the dynamic range; the problem is that such light is ugly. It's ugly even to our eyes, and HDR isn't going to change that. So while there are times when HDR or other exposure blending techniques can be useful, the simple fact is that HDR cannot save an image shot in crappy light, no matter how much one twiddles with the sliders in Photomatix.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Ray on December 27, 2010, 12:22:44 am
So, you think Uwe's opening shot with the arches is a shining example of a good use of HDR? Hum...we looking at the same image bud?


No. I'm just defending a person's right to produce whatever type of image he wants, irrespective of certain peoples' opinions of its merit.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Ray on December 27, 2010, 12:28:33 am
It's ugly even to our eyes, and HDR isn't going to change that. So while there are times when HDR or other exposure blending techniques can be useful, the simple fact is that HDR cannot save an image shot in crappy light, no matter how much one twiddles with the sliders in Photomatix.

Crappy light is light that has been reflected from crap. I think most people would argue they are not in the habit of photographing crap.

Making the most from the available light is part of the art and craft of photography.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Schewe on December 27, 2010, 12:31:26 am
No. I'm just defending a person's right to produce whatever type of image he wants, irrespective of certain peoples' opinions of its merit.

So you are ok with somebody advocating crap, right?

I just want to be perfectly clear here...you think Uwe is doing a public service by teaching people to take a crap image and process it via HDR to get an HDR piece of crap image, right?

I'm fine with people making whatever imagery they want to make in the privacy of their own artwork.

But, I have a problem when somebody touts themselves as some sort of expert and advocates an approach to photography that produces imagery that is substantially less than useful...or furthers a process that is far more complex and difficult to do well than some tutorial on the web seems to indicate. It takes talent and effort to do a proper tonemapping that doesn't look phony.

Come on, truth be told...do you honestly think his tutorial is really useful or are you simply trying to find some sort of point to argue with me?
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Ray on December 27, 2010, 09:34:16 am
So you are ok with somebody advocating crap, right?

I just want to be perfectly clear here...you think Uwe is doing a public service by teaching people to take a crap image and process it via HDR to get an HDR piece of crap image, right?

I'm fine with people making whatever imagery they want to make in the privacy of their own artwork.

But, I have a problem when somebody touts themselves as some sort of expert and advocates an approach to photography that produces imagery that is substantially less than useful...or furthers a process that is far more complex and difficult to do well than some tutorial on the web seems to indicate. It takes talent and effort to do a proper tonemapping that doesn't look phony.

Come on, truth be told...do you honestly think his tutorial is really useful or are you simply trying to find some sort of point to argue with me?


Hey! Jeff,
This is only part one. Part II may be about 'How to avoid, or fix, crap results when merging to HDR'. Patience, please!  ;D

I take it you are referring to the rather uninteresting image of Fort Point Arcades. Since I'm not American, this image has no particular resonance for me, and therefore I would have no reason to hang it on my wall.

However, if it were my image and if it were to have some special meaning for me, I would do some more work on it in Photoshop.

I notice that Uwe makes the following comment under that final tone-mapped image:

Quote
In this version the highlights show detail, the shadows are not blocked and the flatness is gone. This would be not our final version. We usually optimize the photo in Photoshop CS5:

Did you miss that?
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: feppe on December 27, 2010, 12:10:39 pm
In that case the both of you ;) are alone in assuming that HDR images, and more importantly the subsequent tonemapping, are "not interesting (nor useful) for people who want a reasonably realistic representation of the original scene...".

Sure, one can (very easily) produce crappy pictures using these techniques, but one can also achieve realistic results that cannot be achieved with other techniques (unless one does time-consuming manual exposure blending/masking).

I somewhat agree with your observation about "the life squeezed out of it", but I wouldn't confuse one person's processing preferences with the capability to produce vastly different (more to your liking) renderings of the same base images. Tonemapping is as much an art as it is a technical skill.

I'm in full agreement with you. I didn't say HDR can't be used to produce realistic results, and in fact implied that it can. I've seen realistic results from Photomatix, Oloneo or others, results which I have been unable to replicate with my own images, or which have been inferior to manual blending - although in fairness I have only spent several hours on a few "proper" HDR programs over the years.

I get much better, realistic and consistent results with manual blending and the occasional Tufuse Pro blended layer added in for the tough cases.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: PierreVandevenne on December 27, 2010, 03:34:35 pm
So you are ok with somebody advocating crap, right?

Given the highly subjective nature of crappiness, this is probably not a valid angle of attack. No photography website is immune to posting crappy pictures from time to time... And anyway this is only an introductory tutorial,  not an opinion piece claiming to be the final word on the only correct method for HDR.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: BJL on December 27, 2010, 04:32:06 pm
Yeah, ok...but ya know, if an image looks surreal, (as in an obviously condensed tonal range) I'm not sure that is particularly interesting (nor useful) for people who want a reasonably realistic representation of the original scene...

Most HDR type stuff looks phony...and while it may be trendy, it's not really all that desirable, is it? Really?
I agree that there is a lot of artistically worthless "because I can" HDR stuff out there. And for my tastes, Uwe has brought the highlights down a bit too much in his examples --- though perhaps for illustrative rather than artistic purposes.

But darkroom printers have long been making judicious use of low contrast printing papers, and of dodging and burning, to get a "convincing" or artistically satisfying, even though far from literally accurate, representation of a scene with high subject brightness range. So I think there is some use for "tonal compression", if done with good judgement. In-camera auto-HDR, not so much.


P.S. I should have read the whole thread before posting the above, since it had pretty much all been said. Sorry for the waste of bandwidth.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: DaveCurtis on December 27, 2010, 04:43:17 pm
The majority of Uwe's HDR work seems to have that "fake" look. And I must say, I am no great fan.

However I was rather impressed with an HDR article here on LL by Alexandre Buisse.

http://www.luminous-landscape.com/essays/hdr-plea.shtml

I struggle to tell which of Alexandre's shots are HDR. Well done!
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Schewe on December 27, 2010, 06:30:54 pm
However I was rather impressed with an HDR article here on LL by Alexandre Buisse.

I agree...that article DOES help advance the understanding of "HDR" and when it's useful without making any of the images look like "HDR" images. Controlling the scene contrast range by either stacking exposures or using other post processing techniques is pretty fundamental to digital imaging and yes, in the old days we were limited by exposure/processing and paper contrast or grad filters...it's nice to see good examples of digitally processed images to control the scene contrast range. The only thing missing was how he did them :~)
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: stever on December 27, 2010, 07:16:46 pm
Since there are examples showing that HDR can make natural images, and all(?) the published descriptions don't seem to work, does that mean the "secret" method is too valuable to share --  or are natural images just a result of trial and error?
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Bart_van_der_Wolf on December 27, 2010, 08:19:53 pm
Since there are examples showing that HDR can make natural images, and all(?) the published descriptions don't seem to work, does that mean the "secret" method is too valuable to share --  or are natural images just a result of trial and error?

Have you tried some of the suggestions? My suggestion is to try SNS-HDR (http://www.sns-hdr.com/) (Windows only), but there are other promising candidates. And yes, there is some trial and error involved, AKA a learning curve.

Cheers,
Bart
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Ray on December 27, 2010, 11:35:29 pm
On my way home to my bucolic paradise, after attending Christmas celebrations in Brisbane yesterday, I was stopped by floods. The road was impassable and I had to return to Brisbane.

On the return journey, it was getting late and at one point I noticed the setting sun reflecting  on a flooded field. I immediately stopped the car at the side of the highway and took the following hand-held shot with my D7000 (24-120 at 50mm, ISO 100, F4 and 1/80th sec with a -1 EC adjustment in ACR 6.3).

No HDR required.  ;D
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Ray on December 28, 2010, 01:16:30 am
For those who would like a wider field of view, here's a 24mm shot of the same scene, same exposure and similar processing.

Again, no HDR required  ;D .
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: daws on December 28, 2010, 02:17:51 am
The majority of Uwe's HDR work seems to have that "fake" look. And I must say, I am no great fan.

However I was rather impressed with an HDR article here on LL by Alexandre Buisse.

http://www.luminous-landscape.com/essays/hdr-plea.shtml

I struggle to tell which of Alexandre's shots are HDR. Well done!

Meaning no disrespect to Buisse or his article, his "Reflections of Ben Nevis on Midway Loch, Scotland" and "Abandoned truck in the Uyuni salt flat of Bolivia" looked quite HDR-fakey to me.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Ray on December 28, 2010, 02:26:31 am
How about a 70mm shot! It's often better to get closer to the subject.

Phwoar! Look at that! I certainly don't need HDR. Eh? What?  ;D
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: JR on December 28, 2010, 02:56:36 am

Ray,

They are very saturated. Did you crank up saturation or did you just happen to have a colorful evening?  ;)    The scene brightness is certainly very high though.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Ray on December 28, 2010, 04:10:10 am
Ray,

They are very saturated. Did you crank up saturation or did you just happen to have a colorful evening?  ;)    The scene brightness is certainly very high though.

The last one I'm most pleased with. That's exactly how it was. Spectacular! The first two I was just practicing.  ;D
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: LKaven on December 28, 2010, 06:53:05 am
Ray, you could have gotten a few things out of using HDR in those images.

By supersampling the image into 32-bit space, you could have increased the fidelity on the low tones after tonemapping, and sculpted the shoulder on the highlights, perhaps to be a bit more like slide film.  I think both of these would have been extremely beneficial.  There was a lot of good detail in the trees that would have been less vague, and you could have preserved a bit of the gradation in the sunset.  You could have done all this without intrusive artifacts. 
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Ray on December 28, 2010, 06:47:39 pm
Ray, you could have gotten a few things out of using HDR in those images.

By supersampling the image into 32-bit space, you could have increased the fidelity on the low tones after tonemapping, and sculpted the shoulder on the highlights, perhaps to be a bit more like slide film.  I think both of these would have been extremely beneficial.  There was a lot of good detail in the trees that would have been less vague, and you could have preserved a bit of the gradation in the sunset.  You could have done all this without intrusive artifacts. 
 

Good point! But such a comparison will have to wait till another occasion. Unfortunately I wasn't carrying a tripod. I'm not totally familiar with my new camera and new lens yet, but I have found that the remote cord for my D700 does not fit my D7000, which is a nuisance. The remote cord I bought for my Canon 60D about 6 or 7 years ago fits all subsequent Canon models, so I'm annoyed I have to carry two remote cords when I carry two Nikon cameras and tripod.

The shots were taken at full aperture of F4, so are probably not as sharp as they could have been. The last one was 1/40th sec at 70mm, or 105mm full-frame equivalent. Even with VR, slower than 1/40th is a bit risky at that focal length.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: PierreVandevenne on December 28, 2010, 07:34:25 pm
Great shot anyway Ray!
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Ray on December 28, 2010, 10:33:05 pm
Great shot anyway Ray!

Thanks, Pierre. I think a little dramatic license is sometimes in order  ;D .
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: John Camp on December 29, 2010, 02:11:52 am
Hmm, when Schewe used the word surreal in his first post, I thought he was using the word loosely, as most people do. But Uwe's first image is a lot like the surrealist painter Giorgio de Chirico's paintings of similar scenes...It had never occurred to me that de Chirico was an HDR painter, but he was. 8-)

JC

Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: hjulenissen on December 29, 2010, 09:00:52 am
Hmm, when Schewe used the word surreal in his first post, I thought he was using the word loosely, as most people do. But Uwe's first image is a lot like the surrealist painter Giorgio de Chirico's paintings of similar scenes...It had never occurred to me that de Chirico was an HDR painter, but he was. 8-)

JC
I believe that some painters used color hue/saturation to overcome the dynamic range limitations of the medium? The work of painters is perhaps an important clue to how real scenes should be mapped to limited media in a way that happens to agree with human taste.

-h
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: RFPhotography on December 29, 2010, 09:08:12 am
Since there are examples showing that HDR can make natural images, and all(?) the published descriptions don't seem to work, does that mean the "secret" method is too valuable to share --  or are natural images just a result of trial and error?

No, not hardly "all" the published descriptions 'don't work' to create realistic images.  As Bart says, it does involve a learning curve and some trial and error.  But once you understand the software, understand how to bracket properly (a HUGE aspect of the task that isn't really well understood) and that the tonemapped LDR image really may just be the starting point rather than an end point, then you begin to develop a comfort level, a workflow and can generate repeatable results.  

I think it's a little unfair that HDR gets such a bad rap when people are Topazing the shit out of images and no one seems to bat an eye.  Or when guys like Dave Hill develop a processing methodology that creates anything but a realistic look and people fawn over it.  Doesn't make a lot of sense.  What it does do, however, is provide further proof that the appreciation of 'art' and what is or isn't 'art' is entirely subjective.  
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: hjulenissen on December 29, 2010, 09:17:18 am
I think it's a little unfair that HDR gets such a bad rap when people are Topazing the shit out of images and no one seems to bat an eye.  Or when guys like Dave Hill develop a processing methodology that creates anything but a realistic look and people fawn over it.  Doesn't make a lot of sense.  What it does do, however, is provide further proof that the appreciation of 'art' and what is or isn't 'art' is entirely subjective.  
For one, I would expect 'art' to be something different from, or more than, 'a realistic snap of a scene'. B&W? Film grain? Non-linear response of film? Eerie long exposures of waves that do not map to anything that I can see with my bare eyes?

If anything 'non-realistic' is bad, then a lot of photography is bad. If some non-realistic photography is good, then no photography should be dismissed for being non-realistic. Perhaps for being 'too radical compared to what we are culturally used to', or 'too easy to accomplish for casual users and therefore not worthwhile', or simply 'not according to my taste'.

-h
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: JimAscher on December 29, 2010, 11:44:04 am
For one, I would expect 'art' to be something different from, or more than, 'a realistic snap of a scene'. B&W? Film grain? Non-linear response of film? Eerie long exposures of waves that do not map to anything that I can see with my bare eyes?

If anything 'non-realistic' is bad, then a lot of photography is bad. If some non-realistic photography is good, then no photography should be dismissed for being non-realistic. Perhaps for being 'too radical compared to what we are culturally used to', or 'too easy to accomplish for casual users and therefore not worthwhile', or simply 'not according to my taste'.

-h

You have managed to sneak in, possibly "under the radar," a profound reminder on the subject of photography as art, which is regrettably somewhat rare in this forum.  Many thanks. 
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: John Camp on December 29, 2010, 06:50:29 pm
For one, I would expect 'art' to be something different from, or more than, 'a realistic snap of a scene'. B&W? Film grain? Non-linear response of film? Eerie long exposures of waves that do not map to anything that I can see with my bare eyes?

If anything 'non-realistic' is bad, then a lot of photography is bad. If some non-realistic photography is good, then no photography should be dismissed for being non-realistic. Perhaps for being 'too radical compared to what we are culturally used to', or 'too easy to accomplish for casual users and therefore not worthwhile', or simply 'not according to my taste'.

-h

I think you're wrong on almost all of this.

Why should art be something different than you can see with your eyes? Jeff Wall takes high-resolution, very naturalistic photos of scenes that he creates much as a movie director does, but what you get in the photo is exactly what was in front of the camera. The art is in the creation, not in what the camera does.

Nobody said everything non-realistic is bad. You've set up and knocked down a straw man. If some non-realistic photography is good, you can still dismiss other non-realistic photography as bad, even for no other reason than it's non-realistic. The question is, does the work succeed in its own terms? If somebody says, "We used HDR to increase realism in this photo," and they didn't increase realism, then they failed in their own terms. Sometimes, that's hard to tell, but it usually isn't.

As a general proposition, I'd suggest that any sweeping statements about art, such as yours, are wrong.
Title: About bracketing for HDR and output devices DR
Post by: Guillermo Luijk on December 29, 2010, 08:10:50 pm
A bit disappointed to read this:

Quote
"If our cameras could capture high dynamic range scenes in a single shot we wouldn't need the techniques described in these articles."

It makes me think Steinmueller didn't really get that bracketing for HDR will soon be unnecessary, and that it does not participate in the definition of HDR itself. The only reason we have today for bracketing HDR scenes is that sensors are still too noisy to capture in a single shot the entire DR of many real world scenes.

But eventually (and this is already close to happening; just take a look at the amazing DR of the most recent APS-C sized sensors used in the Pentax K5 and Nikon D7000), we won't need to bracket. Will that day be the end of HDR? Not at all. The HDR problem will remain because HDR is about tone mapping the captured HDR onto LDR devices such as the monitor or the print.

So the Photomatix development team can stay happy; people will go on using their software in the future. The only difference will be that today's bracketed input files will become a single input file with all the information in it. Easier for users as well: bye bye to all those alignment and ghosting issues in non-static scenes.

BTW from my experience I think Steinmueller's DR figures for the output devices are too optimistic:

Quote
"Today's Monitors: 1:300-1:1000 -> 8,2-10 stops
HDR monitors 1:30000 (watch your eyes, may get stressed) -> 14,9 stops
Printers on glossy media: about 1:200 -> 7,6 stops
Printers on matte fine art papers: below 1:100 -> 6,6 stops
"

I have measured real DR in normal observation conditions (i.e. ambient lighting) and my HP LP2475W monitor yielded 6,7 stops (http://www.guillermoluijk.com/quickwin/mpdrange/monitor.gif) (vs 8,2-10), and a printed copy on Fujifilm glossy paper yielded 4,3 stops (http://www.guillermoluijk.com/quickwin/mpdrange/papel.gif) (vs 6,6-7,6). I'd love to find out what an HDR monitor looks like!

Regards

Title: Re: About bracketing for HDR and output devices DR
Post by: LKaven on December 30, 2010, 03:13:39 am
A bit disappointed to read this:

It makes me think Steinmueller didn't really get that bracketing for HDR will soon be unnecessary, and that it does not participate in the definition of HDR itself. The only reason we have today for bracketing HDR scenes is that sensors are still too noisy to capture in a single shot the entire DR of many real world scenes.
Surprised to read this from you!  One very good reason for bracketing exposures is because of the properties of supersampling a scene into 32-bit space.  Just the increase in fidelity on the lowest tones is worth the effort.  Certainly, the newer cameras will be quite accurate at quantizing the lowest tones in a scene into 2-3 bit quantities, but only with supersampling do you stand a chance of increasing the resolution of those tones.  Of course, if one is shooting digital as though it were slide film, this might matter less.  But to the rest of us, it matters.

On the other hand, I think you are very much right to identify tonemapping as something in its own realm, independent from HDR. 
Title: Re: About bracketing for HDR and output devices DR
Post by: NikoJorj on December 30, 2010, 04:39:27 am
One very good reason for bracketing exposures is because of the properties of supersampling a scene into 32-bit space.  Just the increase in fidelity on the lowest tones is worth the effort.
Well, did you read and see this (http://www.luminous-landscape.com/forum/index.php?topic=49200.msg409770#msg409770)? Seems that 16 bits already allow a fair amount of margin.
Title: Re: About bracketing for HDR and output devices DR
Post by: RFPhotography on December 30, 2010, 08:15:32 am

BTW from my experience I think Steinmueller's DR figures for the output devices are too optimistic:

I have measured real DR in normal observation conditions (i.e. ambient lighting) and my HP LP2475W monitor yielded 6,7 stops (http://www.guillermoluijk.com/quickwin/mpdrange/monitor.gif) (vs 8,2-10), and a printed copy on Fujifilm glossy paper yielded 4,3 stops (http://www.guillermoluijk.com/quickwin/mpdrange/papel.gif) (vs 6,6-7,6). I'd love to find out what an HDR monitor looks like!.

Regards



Steinmuller is using the theoretical max. based on the simple log equation converting a contrast ratio into a certain number of stops of light.  The log base 2 of 100 is 6.64.  Of 200 is 7.64.  In practical terms, the result is likely to be different.  But yes, I'd agree that his contrast ratios are too optimistic.  Paper prints don't carry nearly that much contrast.
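For anyone who wants to check the figures, the whole conversion is just log base 2 of the contrast ratio:

Code:
# Contrast ratio -> stops is simply log2(ratio); the ratios below are the ones quoted above.
import math

for ratio in (100, 200, 300, 1000, 30000):
    print(f"{ratio}:1  =  {math.log2(ratio):.2f} stops")
# 100:1 = 6.64, 200:1 = 7.64, 300:1 = 8.23, 1000:1 = 9.97, 30000:1 = 14.87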

John, you've made the same mistake as many others.  You've done it with respect to art in general, as opposed to the ones who address HDR specifically.  You've imparted your objective position onto a subjective subject.  And that is what's wrong.  And that's not a subjective issue.  HJ has suggested what he 'expects' art to be.  An expectation isn't a hard-and-fast, objective construct.  Anything that captures or freezes a moment in time isn't realistic.  If I can't go to that place and see exactly what is in that photo or painting or movie or drawing or 3D rendering, then it's not realistic.  The only true realism is what I, or anyone else, can see with my own eyes.  I can choose to believe, or not, the reality someone else saw and the way they present that reality to me, and accept it as real, but it's not truly real to me. 
Title: Re: About bracketing for HDR and output devices DR
Post by: Ray on December 30, 2010, 09:42:12 am

John, you've made the same mistake as many others.  You've done it with respect to art in general, as opposed to the ones who address HDR specifically.  You've imparted your objective position onto a subjective subject.  And that is what's wrong.  And that's not a subjective issue.  HJ has suggested what he 'expects' art to be.  An expectation isn't a hard-and-fast, objective construct.  Anything that captures or freezes a moment in time isn't realistic.  If I can't go to that place and see exactly what is in that photo or painting or movie or drawing or 3D rendering, then it's not realistic.  The only true realism is what I, or anyone else, can see with my own eyes.  I can choose to believe, or not, the reality someone else saw and the way they present that reality to me, and accept it as real, but it's not truly real to me. 

I'm getting a sense this argument is becoming convoluted and confused.

The problem as I see it is both the eye and the camera have limited dynamic range. The eye has the disadvantage of a very narrow FoV (excluding peripheral vision which detects only movement), but has the advantage of an easily executed rapid change of direction of view, combined with a continuously changing aperture to accommodate changing brightness levels in whatever scene is being viewed.

Without bracketing of exposures, do we miraculously expect the camera with its fixed aperture to faithfully capture a scene from the brightest part of the sky to the darkest shadows, shadows which are in fact, from the eye's perspective, not dark at all, because the eye's aperture in a fraction of a second has changed from F8 to F2.8, as its gaze is directed at such darker areas of the scene?

It seems to me to be a tradition in photography and many styles of paintings (I'm thinking here of Caravaggio) to unnaturally darken parts of an image for artistic impact. Black shadows create a sense of  'pop'. They also have the effect of removing distracting elements in the image, similar to the effect of a shallow DoF.

If you want to create a piece of art which does not represent what the 'average' eye saw, but which represents a whole lot of cultural ideas, personal preferences and idiosyncrasies, then almost anything goes, depending on which authoritative figure endorses the work.

As I see it, the purpose and goal of merging different exposures to HDR is to mimic how the eye behaves as it views a scene, in order to reproduce a composite (merged) image which includes all the detail in the scene which the eye would have witnessed.

Compressing that wide dynamic range to fit naturally on a medium such as monitor or print is the problem.

It's a problem that requires skill in image processing, as well as sophistication of software.
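To make the tone-mapping step concrete, here is a minimal sketch of one simple global operator (Reinhard's), assuming the bracketed frames have already been merged into a linear, scene-referred floating-point array. It is only an illustration of the idea, not what Photomatix or Photoshop actually does internally.

Code:
# Minimal global tone mapping (Reinhard's simple operator) on a linear HDR RGB array.
import numpy as np

def reinhard_tonemap(hdr, key=0.18):
    """Compress linear scene radiance into display range [0, 1] with a global curve."""
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    log_avg = np.exp(np.mean(np.log(lum + 1e-6)))   # geometric mean = scene's log-average luminance
    scaled = key * lum / log_avg                    # anchor the scene's average near mid-grey
    mapped = scaled / (1.0 + scaled)                # smooth roll-off that never clips highlights
    ratio = mapped / (lum + 1e-6)                   # per-pixel luminance compression factor
    ldr = np.clip(hdr * ratio[..., np.newaxis], 0.0, 1.0)
    return ldr ** (1.0 / 2.2)                       # encode with a display gamma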
Title: Re: About bracketing for HDR and output devices DR
Post by: Guillermo Luijk on December 30, 2010, 10:39:02 am
Surprised to read this from you!  One very good reason for bracketing exposures is because of the properties of supersampling a scene into 32-bit space.
Bracketing will always mean an advantage in minimising noise and having greater tonal richness (BTW no need of 32-bit floating point formats for that; a 16-bit integer with gamma can encode 99,99% of real world HDR scenes. Try to download this TIFF file: superhdr.tif (http://www.guillermoluijk.com/download/superhdr.tif), which can be pushed 12EV without noise or posterization).

What I mean is that with sensor technology becoming lower and lower in noise, the advantages of bracketing will eventually vanish compared to the advantages of not having to bracket (no misalignment, no ghosting issues in moving scenes, no unnecessary resources wasted to store and process several RAW files, no tripod needed in some cases,...). So with the new cameras coming, users will start to take a single shot in scenes that they are bracketing today, in the same way as today nobody bothers to bracket low or medium DR scenes (< 8-9 stops).

But that day will not mean the end of HDR techniques, they will still be necessary because...


Compressing that wide dynamic range to fit naturally on a medium such as monitor or print is the problem.

It's a problem that requires skill in image processing, as well as sophistication of software.
I would add this is a problem that will never have a 100% satisfactory solution, just different approaches closer to the ideal goal, and always subject to the user's subjective opinion.

Regards
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: hjulenissen on December 30, 2010, 11:18:25 am
I think you're wrong on almost all of this.

Why should art be something different than you can see with your eyes? Jeff Wall takes high-resolution, very naturalistic photos of scenes that he creates much as a movie director does, but what you get in the photo is exactly what was in front of the camera. The art is in the creation, not in what the camera does.
I don't know him. Do his images look like they were taken at any random place at any random time, or does it look like he has carefully chosen time, place and camera settings to make a visually pleasing image?

If he in any way is "putting his soul" into his image, I would say that that could detract from the realism but add to the artistic value.

BTW, do you think that art should be valued from the end-result alone, or does knowledge of the process add to or subtract from its value? If I show you an amazing image that blows your socks off (purely hypothetically speaking), would you be any less impressed if I told you I had made it purely in Photoshop? Or is the ideal that one should wait for weeks in a cold, deserted place waiting for "just the right light" and then capture that magic moment right before the batteries run out and one is tragically eaten by a bear?

Quote
Nobody said everything non-realistic is bad. You've set up and knocked down a straw man. If some non-realistic photography is good, you can still dismiss other non-realistic photography as bad, even for no other reason than it's non-realistic. The question is, does the work succeed in its own terms? If somebody says, "We used HDR to increase realism in this photo," and they didn't increase realism, then they failed in their own terms. Sometimes, that's hard to tell, but it usually isn't.
You are mixing arguments here. If "lack of realism" is a valid argument against some art, it should be a valid argument against all art. If the critique is that it "is not succeeding in its own terms", then that is the argument that you should use.

Your second statement seems irrelevant to what I said.

-h
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: JimAscher on December 30, 2010, 11:22:34 am
This is a great, and quite often profound, discussion, about art (among other things).  Keep it coming.
Title: Re: About bracketing for HDR and output devices DR
Post by: hjulenissen on December 30, 2010, 12:05:00 pm
I would add this is a problem that will never have a 100% satisfactory solution, just different approaches closer to the ideal goal, and always subject to the user's subjective opinion.
Just like camera sensor DR is being improved, I believe that display DR is being worked on. I don't know about paper.

Using clever zoned backlighting, the apparent DR of LCD panels can be greatly improved. It might not matter if two neighboring pixels are 1000:1 or 100000:1 apart in brightness, but it may matter if larger areas are.

I really like the discussion on how our eye "scans" the scene, and adaptively adjusts gain along the way to form a mental image that contains details in both darker and brighter parts. Reproducing the scene accurately would of course suffice, but what is more "natural" when you are limited to 100:1 or 1000:1 contrast is not evident. Perhaps we are only culturally trained into thinking that clipped whites and blacks (and occasionally some compression in the middle) is the most natural way of solving this problem, while some other society conceivably could have convinced themselves into thinking that heavy tone-mapping was most natural. If digital processing had been invented before film (or canvas), things could have turned out very differently.

-h
Title: Re: About bracketing for HDR and output devices DR
Post by: LKaven on December 30, 2010, 12:46:09 pm
Bracketing will always mean an advantage in minimising noise and having greater tonal richness (BTW no need of 32-bit floating point formats for that; a 16-bit integer with gamma can encode 99,99% of real world HDR scenes. Try to download this TIFF file: superhdr.tif (http://www.guillermoluijk.com/download/superhdr.tif), which can be pushed 12EV without noise or posterization).

The benefits of encoding in a 32-bit floating point space might not be so keenly felt in the higher tones.  But consider the 2-3 bit quantization for the lower tones in a single shot capture.  The color palette collapses into dither as you go lower.  But if you bracket and move to HDR space, you can expand that palette for purposes of post processing, and then decide where you want to map it on the tonal scale without significant loss of fidelity.

I'm not sure everyone realizes that the HDR technique involves a move into the space of absolute magnitudes, and away from the relative white-black point of a single capture.  This is a conceptual shift.  I think some here are carrying over the assumption that HDR is just another tool for doing LDR, but the conceptual shift is more significant.
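A rough way to see the quantization point, assuming an idealised linear 14-bit raw encoding (noise ignored):

Code:
# In a linear raw file, each stop down from saturation gets half the code values of the
# stop above it, so the deepest shadows are described by only a handful of levels.
bits = 14
full_scale = 2 ** bits
for stop in range(1, 13):
    levels = full_scale // 2 ** stop           # code values available within this stop
    print(f"stop {stop:2d} below saturation: ~{levels} levels")
# The 12th stop down gets ~4 levels (roughly 2-bit resolution), which is why merging
# brighter frames into a high-precision working space recovers so much shadow fidelity.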
Title: Re: About bracketing for HDR and output devices DR
Post by: LKaven on December 30, 2010, 12:47:28 pm
Perhaps we are only culturally trained into thinking that clipped whites and blacks (and occasionally some compression in the middle) is the most natural way of solving this problem, while some other society conceivably could have convinced themselves into thinking that heavy tone-mapping was most natural. If digital processing had been invented before film (or canvas), things could have turned out very differently.

Right on the mark!  If not "heavy" tonemapping, then "sophisticated" tonemapping.
Title: Re: About bracketing for HDR and output devices DR
Post by: Guillermo Luijk on December 30, 2010, 12:54:34 pm
Perhaps we are only culturally trained into thinking that clipped whites and blacks (and occasionally some compression in the middle) is the most natural way of solving this problem, while some other society conceivably could have convinced themselves into thinking that heavy tone-mapping was most natural. If digital processing had been invented before film (or canvas), things could have turned out very differently.

This is an interesting point. I would never admit that a heavily tone mapped HDR image is closer to what my eyes perceive than a softly tone mapped image, lacking in local contrast and obtained with a simple S-shaped tonal curve, maybe with some black/highlight clipping. But perhaps this is because I have spent some 25 years looking at printed images representing HDR scenes that unavoidably became compressed to those poor 4 stops the print offers, and I have assimilated that to be the most natural look.

What is true, and always will be, is that any output device with effective DR capabilities below the DR of the original scene will never manage to make us perceive exactly what we perceived when looking at the real scene. We can only try to mimic what we perceived.

Regards
Title: Re: About bracketing for HDR and output devices DR
Post by: Guillermo Luijk on December 30, 2010, 01:03:42 pm
The benefits of encoding in a 32-bit floating point space might not be so keenly felt in the higher tones.  But consider the 2-3 bit quantization for the lower tones in a single shot capture.  The color palette collapses into dither as you go lower.  But if you bracket and move to HDR space, you can expand that palette for purposes of post processing, and then decide where you want to map it on the tonal scale without significant loss of fidelity.

There is absolutely no need to expand anything beyond the scene's DR. So as long as your camera can capture the DR of the scene in a single shot (and this just means an acceptable number of levels and acceptable SNR for the deep shadows), there is no benefit to bracketing.

Consider the following example: a typical {0, +2, +4} bracketing from a 12-bit, 8-stop DR camera (Canon 350D) will be 100% equivalent to a single shot from a 16-bit, 12-stop DR camera (a FF camera with the same photosites as the Pentax K5 and a 16-bit ADC).

As sensors become less noisy, bracketing will become useless. This is a fact.
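Writing that example out as arithmetic (a deliberately simplified model; it ignores read noise and assumes each extra +2 EV frame simply extends the clean end of the capture):

Code:
# The {0, +2, +4} example in numbers: extra exposure extends usable DR, and the merged
# result needs the base bit depth plus those extra stops to be stored linearly.
base_dr_stops = 8            # single-shot DR of the 12-bit camera in the example
base_bits = 12
bracket_offsets_ev = (0, 2, 4)

merged_dr = base_dr_stops + max(bracket_offsets_ev)   # 8 + 4 = 12 stops of usable DR
merged_bits = base_bits + max(bracket_offsets_ev)     # 12 + 4 = 16 bits to hold it linearly
print(merged_dr, merged_bits)                         # 12 16 -> the 16-bit, 12-stop camera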
Title: Re: About bracketing for HDR and output devices DR
Post by: feppe on December 30, 2010, 01:45:32 pm
Perhaps we are only culturally trained into thinking that clipped whites and blacks (and occasionally some compression in the middle) is the most natural way of solving this problem, while some other society concievably could have convinced themselves into thinking that heavy tone-mapping was most natural. If digital processing was invented before film (or canvas), things could have turned out very differently.

While that's an intriguing proposition, it amounts to not much more than mental masturbation. Keke Rosberg (a Finnish Formula 1 race driver) had a great quote regarding such things: "if mother had balls she'd be the dad."
Title: Re: About bracketing for HDR and output devices DR
Post by: hjulenissen on December 30, 2010, 02:29:47 pm
While that's an intriguing proposition, it amounts to not much more than mental masturbation.
For something to be mental masturbation, I would have to do it in solitude, and not on a public forum, I think? :-)

My point was that "HDR"*) is controversial among photographers. Some think that it is the best thing since sliced bread, while others think that it is horrible. Some of the latter category will claim that HDR looks unrealistic (implicitly saying that regular LDR looks realistic). I don't think anyone can argue from mathematics that one or the other is more similar to the original - HDR preserves some aspects of the true scene, while regular LDR preserves other aspects of the true scene. So we are left with arguing what "looks more similar to me". We cannot throw out 100 years of cultural baggage instantly, but culture may change in years (while the human visual system may need 100 generations to change significantly). Therefore, the answer to my "mental masturbation" could tell us if HDR may be the accepted norm in 10 or 20 years, or if it will be a quickly passing fad.

People were sceptical towards stereo sound as well - and early recordings gave good reason. When creative people get a new tool, they tend to use it everywhere. Now stereo is the norm, and recording techniques have matured to use it with sense (or our culture has adapted to its sound, who knows). The excessive use of phasers and flangers on vocals and drums has all but disappeared, though (some might think that is a good thing) - either we could not get used to them, or producers could not create enough variation within that sound to still sound "fresh".

*)I adopt the imprecise convention of using the term "HDR" to describe the joint process of HDR capture (usually through multiple exposures) and tone-mapping to LDR
Title: paper "DR" is hopelessly low: displays might drive high brightness range viewing
Post by: BJL on December 30, 2010, 02:35:46 pm
Just like camera sensor DR is being improved, I believe that display DR is being worked on. I don't know about paper.
About paper, and any display relying on reflected light rather than transmitted or emitted, I am fairly sure that the brightness range displayed will stay well below what even the humblest SLR photosites are capable of recording. For one thing, the lowest reflectivity of any natural substance is about 2%, so the range from that to perfect 100% reflectivity is only about 50:1, or under 6 stops. Short of exotica like printing black with carbon nanotube material (as in NASA's new super-black coating for flare control in telescope lenses), even 8 stops is out of reach of prints.
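A quick back-of-the-envelope check of that figure (the 2% floor is the assumption; the rest is just a stop conversion):

import math

min_reflectance = 0.02   # assumed darkest practical reflectance (2%)
max_reflectance = 1.00   # ideal 100% white

contrast_ratio = max_reflectance / min_reflectance   # 50:1
stops = math.log2(contrast_ratio)                    # ~5.6 stops

print(f"{contrast_ratio:.0f}:1 contrast, about {stops:.1f} stops")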

But this is maybe just one more reason that many of us now prefer the display screen to the print for more realistic, vibrant reproduction of what we saw when we were taking the photo. (And why many prefer slides to prints -- RIP Kodachrome, December 30, 2010.) I will admit that already, my favorite way to view my images is the big, bright, high brightness range 19"x13" images of a 23" diagonal computer screen, despite the relatively low resolution compared to what my files contain and prints can reveal. For one thing, where I would move closer to examine details within a large, sharp print, I can instead stay at a comfortable viewing distance and pan and zoom on the screen, so it is mostly enough for the display to have enough resolution for a "normal" viewing distance, roughly equal to image diagonal.

So maybe (at least to avoid the need for display pan&zoom), I should worry less about advancing the IQ of my "capture devices" and think more about when, if, and how my "display devices" will catch up with what my capture is already giving, in the sense of simultaneously displaying all the resolution and all the brightness range. Maybe something like the 326ppi of the new iPhone/iPod Touch "retina displays" scaled up to a 23" (or 19"x13" or A3) screen, for about 25MP. Or at least a more modest "photo quality" 200ppi, and so about 10MP.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: JeffKohn on December 30, 2010, 03:49:21 pm
Quote
My point was that "HDR"*) is controversial among photographers. Some think that it is the best thing since sliced bread, while others think that it is horrible. Some of the latter category will claim that HDR looks unrealistic (implicitly saying that regular LDR looks realistic). I don't think anyone can argue from mathematics that one or the other is more similar to the original - HDR preserves some aspects of the true scene, while regular LDR preserves other aspects of the true scene. So we are left with arguing what "looks more similar to me". We cannot throw out 100 years of cultural baggage instantly, but culture may change in years (while the human visual system may need 100 generations to change significantly). Therefore, the answer to my "mental masturbation" could tell us whether HDR may be the accepted norm in 10 or 20 years, or if it will be a quickly passing fad.

Actually I think you can argue mathematically that the heavily stylized HDR look tends to be less realistic. It's pretty common to have tonal inversions in these types of images, where for instance the shadowed foreground is actually brighter than the daytime sky, just to name one very common example. So I don't really think you can argue that folks think this stuff looks unnatural just because film came first.  Maybe if the real world looked like the one in Avatar this argument might hold some water...

If some people like the stylized HDR look with aggressive "detail enhancement" that's fine. Different people have different tastes; and when it comes to art anything goes, so I certainly don't think that a naturalistic approach is the only valid one. I can appreciate truly well-done stylized HDR, even if it's not to my personal taste. The problem is, it's extremely rare. The vast majority of stylized HDR imagery is full of ugly artifacts that I just can't see past, and it boggles my mind that so many people don't seem to mind the ugly halos, color shifts, etc. Hopefully over time the tools will get better and this will improve; but right now I would say that the "bad" HDR outweighs the good by at least 10:1. So for a lot of people, this pretty much spoils the whole genre.


Title: Re: paper "DR" is hopelessly low: displays might drive high brightness range viewing
Post by: PierreVandevenne on December 30, 2010, 04:48:21 pm
reflectivity is only about 50:1, or under 6 stops.

Prints don't look like the ideal media to demonstrate a significant DR difference between DSLRs and large format cameras then...

PS: sorry, couldn't resist.  ::)
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: NikoJorj on December 30, 2010, 04:55:27 pm
It's pretty common to have tonal inversions in these types of images, [...]
One may find some tonal inversions (or shifts at least) in human vision too; see the well-known checkerboard:

(http://www.popularscience.co.uk/features/checkershadow-AB.jpg)

I do believe some slight haloing in tone mapping is unavoidable with today's tools; the thing is rather to hide the dust under the carpet (a French proverb, meaning it should go under the radar).
Title: Re: About bracketing for HDR and output devices DR
Post by: RFPhotography on December 30, 2010, 08:58:31 pm

I'm not sure everyone realizes that the HDR technique involves a move into the space of absolute magnitudes, and away from relative white-black point of a single capture.  This is a conceptual shift.  I think some here are carrying over the assumption that HDR is just another tool for doing LDR, but the conceptual shift is more significant.

True enough, Luke.  But I think a big part of the reason that realisation is lacking is because, right now, the technology of the image is far ahead of the technology of the presentation.  The fact that we have to come back to an LDR space to work with and view these images takes away the advantage to some extent, and slows down the full understanding of the benefits of the larger bit space.  When the presentation technology - in particular monitors - catches up there's going to be a big 'Wow!' moment.   :)

Feppe, leave it to Keke to come up with an interesting quote.

Something else that doesn't work in all of this is the idea that bit depth and dynamic range are interdependent.  People are combining the concepts where no combination is required.  The two are not interdependent.  A higher bit depth does not, in and of itself, mean a higher dynamic range.  Bit depth simply means there are more in-between tones from Dmax to Dmin.  So while it is true that you don't need to leave the low bit depth environment to increase dynamic range, moving into the high bit depth space (and floating point) has distinct advantages when you start trying to move those pixels around, and particularly when you need to push the pixels around significantly to get them back into the kiddie-sized LDR pool.
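A toy illustration of that last point (not anyone's actual workflow; the 4-stop push and the test gradient are arbitrary): pushing deep shadows up several stops in 8-bit integer space leaves only a handful of distinct levels, while the same push in floating point keeps a smooth gradient.

import numpy as np

# Toy shadow gradient occupying the bottom ~2% of the tonal range.
shadow = np.linspace(0.0, 0.02, 1000)

push = 16.0  # brighten by 4 stops

# 8-bit integer pipeline: quantize first, then push.
shadow_8bit = np.round(shadow * 255).astype(np.uint8)
pushed_8bit = np.clip(shadow_8bit.astype(np.float64) * push, 0, 255)

# Floating-point pipeline: push the unquantized data.
pushed_float = np.clip(shadow * push, 0, 1) * 255

print("distinct levels after push, 8-bit :", len(np.unique(pushed_8bit)))   # ~6 -> visible banding
print("distinct levels after push, float :", len(np.unique(pushed_float)))  # ~1000 -> smooth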

Sure, if sensors could capture 16 or 18 stops of brightness with absolute perfection in real world (as opposed to the lab) conditions it may reduce (although not eliminate) the need for HDR.  But if ifs and buts were candies and nuts we'd all have a Merry Christmas too.  The fact is cameras can't do that, and while some may say it's inevitable - and it may be - my bet is it won't happen in the next 5 years, so until then we use the tools we have at hand to the best of our abilities.
Title: Re: About bracketing for HDR and output devices DR
Post by: Guillermo Luijk on December 30, 2010, 09:42:34 pm
Sure, if sensors could capture 16 or 18 stops of brightness with absolute perfection in real world (as opposed to the lab) conditions it may reduce (although not eliminate) the need for HDR.  But if ifs and buts were candies and nuts we'd all have a Merry Christmas too.  The fact is cameras can't do that, and while some may say it's inevitable - and it may be - my bet is it won't happen in the next 5 years, so until then we use the tools we have at hand to the best of our abilities.

The journey will be progressive; it won't have a clear deadline. And this journey is already in progress. You don't need a 16-stop DR sensor to capture a 12-stop scene in a single shot. But you need to bracket a 12-stop scene if all you have is an 8-stop DR camera designed 5 years ago.

I shot this 12-stop scene in the summer of 2007 with my Canon 350D (8 stops effective DR). Of course I needed to bracket {0, +4}:

(http://www.guillermoluijk.com/article/virtualraw/resultado_lite.jpg)


Today's Pentax K5 effective DR is 11 stops. Translating its 16 Mpx into a 38Mpx FF sensor using the same photosites (i.e. already existing technology), we get a sensor with about 12 stops effective DR for the same output resolution.
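The scaling behind that estimate, roughly (the sensor dimensions and the square-root noise averaging are my assumptions about the calculation, not exact figures):

import math

# Approximate sensor areas (mm^2)
aps_c_area = 23.7 * 15.7     # Pentax K5 sensor, roughly
ff_area = 36.0 * 24.0

k5_mpix = 16.3
k5_dr_stops = 11.0           # effective DR figure quoted for the K5

# Same photosites on a FF die -> proportionally more of them.
ff_mpix = k5_mpix * ff_area / aps_c_area          # ~38 Mpx

# Downsampling ~2.3x more pixels to the same output size averages noise,
# improving SNR by sqrt(pixel ratio), i.e. 0.5*log2(ratio) extra stops.
dr_gain = 0.5 * math.log2(ff_area / aps_c_area)   # ~0.6 stops

print(f"~{ff_mpix:.0f} Mpx FF, ~{k5_dr_stops + dr_gain:.1f} stops normalised DR")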

What I needed to bracket 4 stops for 3 years ago could be captured in a single shot using today's technology (just wait for the next FF cameras to appear). The same will happen tomorrow with 14-stop scenes, and the day after tomorrow with 16-stop scenes, and so on until 99% of real-world scenes don't deserve any bracketing. Slowly, but surely.

Regards
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: John Camp on December 30, 2010, 10:48:04 pm
I don't know him. Do his images look like they were taken at any random place at any random time, or does it look like he has carefully chosen time, place and camera settings to make a visually pleasing image?

JC: Who knows? And what difference would that make?

If he in any way is "putting his soul" into his image, I would say that that could detract from the realism but add to the artistic value.

JC: It could detract from the realism, but add to the artistic value, but then again, maybe not. A person with an inane vision could put his soul into a work and have it come out...inane.

BTW, do you think that art should be valued from the end-result alone or does knowledge of the process add/subtract to its value?

JC: Could be either one.

If I show you an amazing image that blows your socks off (purely hypothetically speaking), would you be any less impressed if I told you I had made it purely in Photoshop?

JC: Probably. But that's just me. Other people might regard it as great art.

Or is the ideal that one should wait for weeks in a cold, deserted place for "just the right light" and then capture that magic moment right before the batteries run out and one is tragically eaten by a bear?

JC: I don't think there is an ideal.

You are mixing arguments here. If "lack of realism" is a valid argument against some art it should be a valid argument against all art.

JC: Really? If an argument against one woman is valid, is that an argument against all women? Frankly, this suggestion makes no sense at all. I'd heap further ridicule on it, but that would take too much time.

If the critique is that it "is not succeeding in its own terms", then that is the argument that you should use.

JC: That is more or less the argument that I use, except that even if it does succeed in its own terms, it may not be art. My cat snapshots succeed in their own terms, but they remain cat snapshots. But if someone takes a stab at producing art, and the effort fails in its own terms, then it probably isn't high art.

Your second statement seems irrelevant to what I said.

JC: I would disagree.

-h
Title: Re: About bracketing for HDR and output devices DR
Post by: John Camp on December 30, 2010, 10:52:38 pm
John, you've made the same mistake as many others.  You've done it with respect to art in general as opposed to the ones who address HDR specifically.  You've imparted your objective position onto a subjective subject.  And that is what's wrong.  And that's not a subjective issue.  HJ has suggested what he 'expects' art to be.  An expectation isn't a hard and fast, objective construct.  Anything that captures or freezes a moment in time isn't realistic.  If I can't go to that place and see exactly what is in that photo or painting or movie or drawing or 3D rendering then it's not realistic.  The only true realism is what I, or anyone else, can see with my own eyes.  I can choose to believe or not the reality someone else saw and the way they present that reality to me and accept it as real but it's not truly real to me. 

I don't think you read what I said carefully enough, because if I understand what you're saying, we're more or less off in the same direction.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Ray on December 30, 2010, 10:53:02 pm
One may find some tonal inversions (or shifts at least) in human vision too, see the well-known checkerboard :

(http://www.popularscience.co.uk/features/checkershadow-AB.jpg)

The implications of this effect shown in the checkerboard, both squares A and B having mathematically the same brightness level (107, 107, 107), are mind-boggling.

This effect, to varying degrees, will apply across the whole tonal range in any image, including mathematically identical color hues appearing visually quite different according to their context within the image.

I generally do not use specific programs that are described as tone mapping programs, but in practice I usually tone-map my images to get a result which I like, using the Shadows/Highlights tool, Brightness/Contrast tool, or simply selecting areas with the lasso tool, feathering significantly, then making whatever adjustments to color, tone and brightness I think appropriate.

I'm no wizard at using Photoshop, but one technique I often use to make adjustments while simultaneously protecting the highlights is: 'Ctrl + left-click' on the RGB channel (which loads a luminosity selection), invert the selection, go to Layers/New Adjustment Layer, select whatever tool is appropriate (for example, Levels), set opacity to, say, 80%, then make the adjustment.

This method allows one to brighten the entire image, whilst maintaining full detail in the highlights.  I find it very useful. I learnt this technique on Luminous Landscape.
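For anyone who wants to see the same idea outside Photoshop, here is a minimal sketch in Python (the luminance weights, gain and opacity are illustrative assumptions, not exactly what Photoshop does internally):

import numpy as np

def brighten_protect_highlights(img, gain=1.5, opacity=0.8):
    """Brighten an image (float RGB in 0..1) while protecting highlights.

    Equivalent in spirit to Ctrl-clicking the RGB channel (a luminosity
    selection), inverting it, and applying a brightening adjustment layer
    at reduced opacity.
    """
    # Luminosity "selection": bright pixels get values near 1.
    lum = 0.2126 * img[..., 0] + 0.7152 * img[..., 1] + 0.0722 * img[..., 2]
    mask = 1.0 - lum                      # inverted selection: favours shadows/midtones

    brightened = np.clip(img * gain, 0.0, 1.0)

    # Blend per pixel: shadows get most of the brightening, highlights stay put.
    w = (opacity * mask)[..., None]
    return np.clip((1.0 - w) * img + w * brightened, 0.0, 1.0)

# Usage: out = brighten_protect_highlights(my_float_rgb_array)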

Of course, if the darker parts of the image can't withstand brightening without revealing ugly noise, then you're stuffed  ;D .
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Schewe on December 30, 2010, 11:23:43 pm
If some people like the stylized HDR look with aggressive "detail enhancement" that's fine. Different people have different tastes; and when it comes to art anything goes, so I certainly don't think that a naturalistic approach is the only valid one. I can appreciate truly well-done stylized HDR, even if it's not to my personal taste. The problem is, it's extremely rare. The vast majority of stylized HDR imagery is full of ugly artifacts that I just can't see past, and it boggles my mind that so many people don't seem to mind the ugly halos, color shifts, etc. Hopefully over time the tools will get better and this will improve; but right now I would say that the "bad" HDR outweighs the good by at least 10:1. So for a lot of people, this pretty much spoils the whole genre.

Which was my original point regarding Uwe advocating and teaching HDR images that look surreal (which is how I respond rather than saying "stylized", which somehow kinda lets people off the hook).

Compressing a high contrast scene into a printable dynamic range is indeed difficult. But it can be done without all the surreal downside. I would encourage people to actually learn how to do it so it isn't glaringly obvious. Which I'm not sure Uwe's tutorial does...
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: ErikKaffehr on December 31, 2010, 12:36:45 am
Hi,

Here is another try. Three exposures combined and then using LR controls for the final image.

http://echophoto.smugmug.com/Special-methods/HDR/HDR/20101214-DSC09731/1142183148_uNBtR-X2.jpg

Best regards
Erik



Which was my original point regarding Uwe advocating and teaching HDR images that look surreal (which is how I respond rather than saying "stylized", which somehow kinda lets people off the hook).

Compressing a high contrast scene into a printable dynamic range is indeed difficult. But it can be done without all the surreal downside. I would encourage people to actually learn how to do it so it isn't glaringly obvious. Which I'm not sure Uwe's tutorial does...
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: hjulenissen on December 31, 2010, 02:15:06 am
Actually I think you can argue mathematically that the heavily stylized HDR look tends to be less realistic. It's pretty common to have tonal inversions in these types of images, where for instance the shadowed foreground is actually brighter than the daytime sky, just to name one very common example. So I don't really think you can argue that folks think this stuff looks unnatural just because film came first.  Maybe if the real world looked like the one in Avatar this argument might hold some water...
There are errors in tonemapped images, yes. Do you think that blown-out highlights and clipped blacks are a part of what you normally see in a scene? So there are errors in regular images as well. I do not see any attempt at bringing out mathematical tools to support your statement.

-h
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: hjulenissen on December 31, 2010, 02:22:06 am
Your way of quoting makes it very hard for readers and repliers.

Quote
JC: Who knows? And what difference would that make?
I am supporting my claim that I expect art to be something different than taking a snapshot of reality. That was why you started this discussion with me in the first place, was it not?
Quote
Quote
You are mixing arguments here. If "lack of realism" is a valid argument against some art it should be a valid argument against all art.
JC: Really? If an argument against one woman is valid, is that an argument against all women? Frankly, this suggestion makes no sense at all. I'd heap further ridicule on it, but that that would take too much time.
If you are saying that "HDR is crap because it is not realistic", then you are saying that not being realistic makes it crap. If you at another stage claim that some other imagery is great even though it is not realistic, then you are not honest in your arguments.

If a rule is universal, it must be valid everywhere. If it is not universal, it should carry a disclaimer: "HDR sucks because it is unrealistic, and I don't like unrealistic images."
Quote
Quote
Your second statement seems irrelevant to what I said.
JC: I would disagree.

That is your right. If you won't bother relating your statements to mine, I won't bother replying.
-h
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: stever on December 31, 2010, 09:25:11 am
thanks for the encouragement Schewe (i do believe it can be done) - does anyone have a tutorial that teaches how it's done?
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: RFPhotography on December 31, 2010, 11:30:13 am
It's not possible, Steve.  The reason it's not possible is because every image is different.  There are no 'set smoothing to 25, brightness to 50, saturation to 30, etc.' formulae for more realistic images.  There's a learning curve involved.  It's also not possible because every software app. is different.  Time needs to be spent learning the software, how it works, what the various tonemapping operators do, how they work independently and how each affects the others in combination. 

It is possible to make general statements about how different operators impact an image and within those general statements one can get an idea of where to start to get a more 'natural' result.  But that's really as far as it can go.  That's what I've done in my HDR tutorial.  I also have three presets people can download and use that offer starting points for three different 'looks' - a slightly unreal, sort of graphic illustration look, a natural look and a hyper-grunge look. 
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Ray on December 31, 2010, 08:11:39 pm
It's not possible, Steve.  The reason it's not possible is because every image is different.  There are no 'set smoothing to 25, brightness to 50, saturation to 30, etc.' formulae for more realistic images.  There's a learning curve involved.  It's also not possible because every software app. is different.  Time needs to be spent learning the software, how it works, what the various tonemapping operators do, how they work independently and how each affects the others in combination. 

It is possible to make general statements about how different operators impact an image and within those general statements one can get an idea of where to start to get a more 'natural' result.  But that's really as far as it can go.  That's what I've done in my HDR tutorial.  I also have three presets people can download and use that offer starting points for three different 'looks' - a slightly unreal, sort of graphic illustration look, a natural look and a hyper-grunge look. 


I completely agree with Bob here.  If you are looking for a general formula that can be applied to make an image look natural, then you might as well just shoot in jpeg mode and let the camera apply its own built-in adjustments.

A number of different exposures which have been merged to HDR becomes a single image which needs to be adjusted, just as any single RAW image needs to be adjusted during and after conversion. It's rare that an image can look exactly right with just a click on the 'auto' button in ACR. If it does, it will still need further adjustment in 'proof mode' before printing.

If the result doesn't look satisfactory, for whatever reason, then the photographer is to blame (or the person who processed the image). Don't blame the tool. Photoshop is an amazing tool for image adjustment.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: LKaven on January 01, 2011, 01:09:14 am

I completely agree with Bob here.  If you are looking for a general formula that can be applied to make an image look natural, then you might as well just shoot in jpeg mode and let the camera apply its own built-in adjustments.

A number of different exposures which have been merged to HDR, becomes a single image which needs to be adjusted as any single RAW image needs to be adjusted during and after conversion. It's rare that an image can look exactly right with just a click on the 'auto' button in ACR. If it does, it will still need further adjustment in 'proof mode' before printing.

If the result doesn't look satisfactory, for whatever reason, then the photographer is to blame (or the person who processed the image). Don't blame the tool. Photoshop is an amazing tool for image adjustment.
All true and more.  It's a little like re-lighting and re-taking the picture. 

And the common tools have arcane user interfaces.  Many common settings are calibrated on a "0-100" scale, which is simple to remember, but gives no indication of the underlying variables being manipulated, nor what their true values are.  The combinatorial possibilities of the controls are too many, and to get a good image, you really have to learn either to play it like an instrument, or to use what you get in the most honestly artistic way possible.
Title: Re: About bracketing for HDR and output devices DR
Post by: LKaven on January 01, 2011, 01:48:05 am
A bit disappointed to read this:

It makes me think Steinmueller didn't really get that bracketing for HDR will soon be unnecessary, and that it does not participate in the definition of HDR itself. The only reason we have today for bracketing HDR scenes is that sensors are still too noisy to capture the entire DR of many real-world scenes in a single shot.
Guillermo!  I'm surprised to hear this from you, a man so concerned with signal optimization.

What we are talking about here is a broader concept.  Imagine just in one part, the idea of allocating bits /where the information is/ as a central concept.  Think of that in terms of this one practical case.

Recently I did a portrait of someone in a church space, where the subject was lighted by a stained glass window.  The inside of the church was dim, but beautiful.  Now even with a D3x, a camera with good dynamic range at ISO 100, I could not capture any detail whatsoever on the inside of the church.  It simply came out mostly RGB=0,0,0.  There were a few single-bit quantities, but nothing discernible.

It is important to ask here - why must the church be black?  It doesn't look black to me.  But it is arbitrarily the case, partly by virtue of the original physical chemistry employed, that each individual exposure has an implicit black point and an implicit white point, both of which are /false/.  

What I could have done was (1) shoot the background in HDR, (2) do portrait takes, and (3) composite them.  But this would just be a way of allocating bits to the relevant content.   Audio encoding schemes allocate bits where the psychologically salient information lies.  Photography should record visual stimuli where the salient information lies -- in absolute magnitude space.  

By supersampling the scene inside the dim church, I could have collected that information.  We're no more obliged than any painter to make the dynamic range of a source image correspond to the dynamic range of the output medium.  The sun can burn orange, as viewed from the interior of a candlelit chamber through the window onto a blue sky peppered with cirrus clouds.  And when you paint the candlelit interior, it will be detailed.    Don't we, with our inherently diachronic visual system, kind of see it this way?

Now finally, imagine this.  A future camera could be capable of flexibly and adaptively supersampling a scene by allocating the collection of information, locally as well as globally, over a given shutter interval.  Under its control could be differential gain between pixels, and localized multiple exposure.  Normalization and averaging could be done in camera.  The reason this would have to be done in camera is because in practical terms, you are interested in events that last for only about 1/60th of a second.  In order to carry out a complex "superexposure" program, you'd have to hand the task over to the camera.

I really believe this is coming.  And this tells you something of why I think this is more than a fad involving cheesy special effects.
Title: Re: About bracketing for HDR and output devices DR
Post by: HCHeyerdahl on January 02, 2011, 02:50:51 pm
I'm not sure everyone realizes that the HDR technique involves a move into the space of absolute magnitudes, and away from relative white-black point of a single capture.  This is a conceptual shift.  I think some here are carrying over the assumption that HDR is just another tool for doing LDR, but the conceptual shift is more significant.

This is an interesting thread and I would like to understand this potential conceptual shift.
Up till now I thought the HDR space was just a much larger space than an LDR space (and hence the need for tonemapping), but in case I am missing something important, is it possible to explain a bit about this space of absolute magnitudes compared to a relative black-white point?

Christopher
Title: Re: About bracketing for HDR and output devices DR
Post by: LKaven on January 03, 2011, 12:24:01 am
This is an interesting thread and I would like to understand this potential conceptual shift.
Up till now I thought the HDR space was just a much larger space than an LDR space (and hence the need for tonemapping), but in case I am missing something important, is it possible to explain a bit about this space of absolute magnitudes compared to a relative black-white point?
The HDR file is a kind of special case.  At first glance, it's just a TIF file with 32 bits.  But the data represent something other than pixels.  Think of this as a dataset of measurements, out to a good number of decimal places. 

I might characterize this space as a space of /fixed/ magnitudes as opposed to /relative/.  It might be better said that the range of magnitudes represented is literally astronomical and practically unconstrained, encompassing within one scene all possible photographic subjects in their various illuminations--fancifully, from the black cat in the coal mine, to the plasma beach on the surface of the sun--and with fine gradation.

The dataset of measurements is independent of rendering intent.  The rendering method is left open to creative and technical choice.  Methods will be refined and new ones invented.  The various "tonemapping" concepts are just a first approximation.

As a practical benefit, imagine if the 1,2,3-bit quantities in your current single-shot captures were 32-bit quantities resolved out to N decimal places?  It gives your tools something to dig into.  You could do fine-grained work on parts of your picture with high fidelity, then map them to the tonality you want.  You could even make significant changes in apparent lighting.  You could revisualize a low-key shot into a high key shot or vice-versa, selectively, with no apparent loss in image quality.
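To make that concrete, here is a minimal sketch of how bracketed frames are typically merged into such a 32-bit floating point map (a simplified illustration, not the algorithm of any particular HDR package; it assumes linear data already normalised to 0..1 and known exposure times):

import numpy as np

def merge_to_hdr(frames, exposure_times):
    """Merge linear exposures (float arrays, 0..1) into a relative radiance map.

    Each frame is divided by its exposure time so all frames live on one
    common scale; well-exposed pixels get the most weight, clipped or
    near-black pixels the least.
    """
    num = np.zeros_like(frames[0], dtype=np.float32)
    den = np.zeros_like(frames[0], dtype=np.float32)

    for img, t in zip(frames, exposure_times):
        # Simple triangular weighting: trust midtones, distrust extremes.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * (img / t)      # scale to common radiance units
        den += w

    return num / np.maximum(den, 1e-6)   # 32-bit float, unbounded above 1.0

# Usage (hypothetical): hdr = merge_to_hdr([raw_0ev, raw_p2ev, raw_p4ev], [1/100, 1/25, 1/6])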
Title: Re: About bracketing for HDR and output devices DR
Post by: hjulenissen on January 03, 2011, 02:44:31 am
So one can postpone exposure choices until the files are on your hard drive. More generally, if the scene/camera allows (strictly static scenes), a 'truer' representation of the scene can be captured, and choices about exposure, black/white clipping, tone curve etc. that used to be choices of camera settings (and camera algorithms/film behaviour) can be freely made in the editing process of HDR. Some of these processing options may be for artistic reasons, but I would argue that most of them are to shoehorn something visually pleasing into the limited paper/display tech that we have today. We don't know what we will have in 10 years, but today's HDR pics may look better then.

I still don't think that HDR is usually 'absolute' in the sense that it is easily expressed in lumens or candela or whatever. It is still usually some arbitrary unit, but relative to that unit, all measurements are linear (and black/white clipping can be practically avoided)?

-h
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: HCHeyerdahl on January 03, 2011, 04:40:22 am
Aha!  Thanks to both, this was clarifying :-)

Christopher
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: RFPhotography on January 03, 2011, 07:56:52 am
What Luke is talking about goes to the origins of HDR, which are in the CG world.  Oloneo has a feature in their software called HDR Relight that, apparently (haven't tried it yet myself), allows the user to control individual light sources within an image via multiple blended exposures.  It's not a 32 bit function (yet) but it's an interesting first step.  I put together a 'wish list' for Adobe on my blog and one of the things I wished for was the ability to selectively tonemap different areas of an image (without tonemapping multiple times and blending different tonemap versions after the fact) which would then really start to take us to the ability to relight a scene.  For HDR to show its full potential we need, at least, monitors that can display the entire brightness range so we can get a feel for what our true starting point is and where we want to take it from there.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Ray on January 03, 2011, 09:49:40 pm
I put together a 'wish list' for Adobe on my blog and one of the things I wished for was the ability to selectively tonemap different areas of an image (without tonemapping multiple times and blending different tonemap versions after the fact) which would then really start to take us to the ability to relight a scene. 

Just a couple of points here, Bob. Photoshop already gives us the facility to selectively tonemap different areas of the image. Just use the lasso tool to select an area, feather significantly, say 100 or even 200 pixels depending on the size of the selection, then make whatever adjustments in brightness, contrast, color etc. you think appropriate.

Part of the skill is in the choice of a suitable degree of feathering so the transition in tonality between the inside and outside of the selection does not appear unnatural.

Quote
For HDR to show its full potential we need, at least, monitors that can display the entire brightness range so we can get a feel for what our true starting point is and where we want to take it from there.

Wouldn't this present enormous problems for proofing? Uwe mentioned in his Part I article that the human eye has a dynamic range of about 10 stops, which seems similar to that of a modern DSLR. The eye is said to have a maximum DR of around 24 stops only when taking into consideration the full range of aperture changes that the eye's pupil is capable of.

Such extreme changes in aperture would be caused, for example, when shifting one's gaze from a bright part of a sky where the sun is partially visible as it peeks through the clouds, to the scene of a black cat sitting in the shade of dense undergrowth in the near foreground.

To capture such a scene with autobracketing, not even a Nikon would be sufficient with its 9 exposures of 1 EV interval, providing an additional 8 stops of DR.

Of course, with 9 exposures which might vary between 1/3000th and 1/10 of a second, movement in the scene can be an insurmountable problem.

However CS5 has offered an impressive solution in HDR-2, with its 'Remove Ghosts' feature. This feature must be very useful for Psychics and Spiritualist Mediums who wish they could stop seeing ghosts.  ;D

Here's a scene of the living room of a friend I'm visiting over the Christmas/New Year break, and crops of the processed HDR images, with and without ghost removal.

Now I ask you, are these images surrealistic? Untidy, maybe! But surrealistic?... no!

I'm very surprised and very impressed with the ghost removal result in this particular example. In order to reduce the possibility of movement as much as possible, I used ISO 1600 for these shots. Exposures varied from 1/3000th to 1/10th, and the shadows are still noisy. At the base ISO of the D700, the maximum exposure would have been a full second, improving SNR in the dark parts significantly but probably causing too much blur for the 'remove ghosts' feature to handle.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Guillermo Luijk on January 04, 2011, 09:03:32 am
Ghost removal is nothing to be surprised at IMO, Ray. In fact what is surprising to me is that HDR software didn't implement it long ago.

The cause of ghosting is not actually the moving parts in the scene. The cause of ghosting is the HDR software building the output image by taking information from more than one source file in an area where some element was moving. So to avoid ghosting we just need to tell the software: "hey, in this area always take information from a single source file", and ghosting will be gone. If the software is clever, it will analyse the affected area and choose the most exposed non-clipped source file, and will simply take all the information from it for that part of the scene.
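In code, the rule is roughly this (a sketch of the idea only, not the actual implementation in my blending app; the clip threshold and the 0..1 normalised frames are assumptions):

import numpy as np

def deghost_blend(frames, blend, ghost_mask, clip_level=0.98):
    """Fix ghosting by forcing one source frame inside the conflict region.

    frames: bracketed source images (float 0..1), sorted darkest to brightest.
    blend:  the normal blended result, valid for the static parts of the scene.
    ghost_mask: boolean array marking where something moved between frames.
    """
    # Most exposed frame that is still unclipped inside the conflict region.
    chosen = frames[0]
    for img in frames:
        if img[ghost_mask].max() < clip_level:
            chosen = img
    out = blend.copy()
    out[ghost_mask] = chosen[ghost_mask]   # single-source region: no ghosting, a bit more noise
    return out

In practice you would also feather the edge of the mask, which is essentially what brushing a soft gray value into the blending map does.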

Eliminating ghosting is not free; it will usually mean the de-ghosted areas are noisier, but the price to pay is always lower than having a man with his leg cut off:

No anti ghosting:
(http://www.guillermoluijk.com/tutorial/zeronoise/ag2_antes.jpg)


Manual anti ghosting:
(http://www.guillermoluijk.com/tutorial/zeronoise/ag2_despues.jpg)

(http://www.guillermoluijk.com/tutorial/zeronoise/zn.jpg)


One of the advantages of increasing sensor DR will be that if we can capture the entire DR of a scene in a single shot, ghosting will be history. Bracketing is a patch for insufficient-DR sensors, and ghost removal is a patch for non-static scenes captured with insufficient-DR sensors.

Regards
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: RFPhotography on January 04, 2011, 09:37:22 am
Yes, Ray, of course you can use some of the regular editing tools in PS on HDR images via selections.  The term for that is 'soft tonemapping'.  What I'm referring to, however, is the ability to use the HDR Toning tool on a 32 bit image (or 16 bit, for that matter) with a selection active.  That can't be done currently.

Yes, the eye has an 'on the fly' variable aperture that allows it to have a large dynamic range.  I'm not going to get into whether the static dynamic range is 10 stops or 8 stops or 15 stops.  But if a monitor could display something like 15 or 16 stops of brightness, that would be far better than what we have now and would, I'd think, cover a (large) majority of the HDR images being created.  If a camera sensor can reasonably capture 8 stops of usable brightness (talking real world conditions, not lab/test bench conditions), which I think is a decent assumption, and you've got a +/-4 bracket, that's 16 stops.  In my own experience, it's pretty rare that I need to go more than that to capture the full range of a scene.  I think that wouldn't be difficult for the eye/brain combination to process.  When I'm editing, I don't stare at the middle of the screen, I move my eyes around, so the variable aperture of the eye would come into effect and allow the photographer to see the range that the monitor presents.

Beyond that, printers/paper would then need to be upgraded significantly to handle all that brightness range.  Ultimately, I think monitors will get there.  I don't think printers/paper will. 

GL, that's what the deghosting process in CS5 HDR Pro attempts to do.  It lets you choose what exposure you want to use as the base for deghosting.  Other software with 'selective' or 'semi-manual' deghosting does similar things.  I do have to say, though, that you'd have to think there are some pretty smart people doing the programming for these HDR applications, so if they're having difficulty getting deghosting processes to work well, maybe it's a little more difficult than you want to make it out to be.  If it's not, then perhaps you could create a software app that's usable by people and solve everyone's problems.  And make yourself wealthy in the process.   ::)
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Guillermo Luijk on January 04, 2011, 09:57:12 am
that you'd have to think there are some pretty smart people doing the programming for these HDR applications so if they're having difficulty getting deghosting processes to work well, maybe it's a little more difficult than you want to make it out to be.  If it's not, then perhaps you could create a software app. that's useable by people and solve everyone's problems.  And make yourself wealthy in the process.   ::)

I do think what you say; that is precisely why I am surprised to find that we needed to wait until 2010 to start seeing any real antighosting features in commercial apps (the anti-ghosting checkbox in Photomatix is just a joke, it only reduces the progressiveness of the blending, which is insufficient and moreover affects the entire image). The only explanation I find is not that they were having any difficulty, as you say, but simply that they didn't put focus on this matter.

Achieving effective antighosting is not difficult at all; I have already built it into my own blending app, and I don't consider myself smarter than anyone. In fact the example above comes from it: that B&W image you see is the automatically generated blending map. The user just needs to detect the conflict area (something which could also be automated by correlating the source images, but IMO is not worth it) and brush the blending map with the brightest gray participating in the area, as I did above. This forces the most exposed non-clipped source file to be the only one used in the area, and the problem is solved.

Regards

PS: BTW, the latest Sony sensor, used in the Pentax K5 and Nikon D7000, can capture 11 stops of DR in a single shot with acceptable noise (SNR=12dB criterion), and this technology translated to a FF sensor would mean no less than 12 effective stops. So your 8 stops figure is out of date with today's technology. The following plot from DxO Mark is very clarifying about the big step taken by Sony in DR with its new sensor (just look at the trend and relative DR figures; the absolute DR figures are too high for us photographers since they were obtained with the SNR=0dB criterion):

(http://www.guillermoluijk.com/article/perfect/dxomark.gif)
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: RFPhotography on January 04, 2011, 10:35:12 am

PS: BTW, the last Sony sensor used in the Pentax K5 and Nikon D7000 can capture in a single shot 11 stops of DR with acceptable noise (SNR=12dB criteria), and this technology translated to a FF sensor would mean no less than 12 effective stops. So your 8 stops figure is out of date with today's technology.
 

As I said, I'm talking real world shooting conditions, not in a lab on a bench test.  When those types of images start to be available for evaluation and comparison, and when comparisons of those 'real' images to other cameras are done, then I'll start to believe the hype.  Until then..... And that's two cameras.  Others still don't make it that far.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Guillermo Luijk on January 04, 2011, 10:57:35 am
As I said, I'm talking real world shooting conditions, not in a lab on a bench test.  When those types of images start to be available for evaluation and comparison, and when comparisons of those 'real' images to other cameras are done, then I'll start to believe the hype.  Until then..... And that's two cameras.  Others still don't make it that far.
Sensor performance is the same in the real world as in the lab, basically because labs are located in the real world. I have extensively used my Canon 350D shooting interiors, and it never performed worse than when I measured its DR in the lab (if my room at home can be considered a lab). The 350D was an APS-C sized sensor camera launched at the beginning of 2005, with an effective DR of 8 stops.

I have measured myself the DR of the Pentax K5 sensor, and firmly believe in what I did and what I got, which is consistent with other measurements.

This real world capture was 6 stops underexposed (i.e. the brightest 6 stops in the RAW histogram are empty; the JPEG displays nearly pure black) on a Pentax K5, and produced the following image with a still acceptable level of noise: click here (http://img46.imageshack.us/img46/7912/22681234.jpg).

Find here a Nikon D7000 vs Fuji S5, D90, D700 DR evaluation: Nikon D7000. Comparing DR in a scene (http://mato34.es/d7000/randin/comp_scn/). The D7000 performed the same as the Fuji S5, with the Super CCD sensor.

Regards
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: RFPhotography on January 04, 2011, 11:05:29 am
I'm not going to get into a pissing contest with you, GL. 
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Guillermo Luijk on January 04, 2011, 11:10:13 am
I'm not going to get into a pissing contest with you, GL. 
You do well. Next time you decide to be ironic to someone, make sure you have the needed resources.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: BJL on January 04, 2011, 12:25:12 pm
BTW, the last Sony sensor used in the Pentax K5 and Nikon D7000 can capture in a single shot 11 stops of DR with acceptable noise (SNR=12dB criteria)
Guillermo,
how do you get that figure of 11 stops? From what I have read, that sensor has a full well capacity of about 30,000e-, so 11 stops down is a signal of about 16e-, and then shot noise is 4e- RMS, limiting SNR to 4:1. Is that figure of 12dB (16:1) computed only with respect to dark noise and read noise, not photon shot noise?

To put it another way, that target of 12dB or 16:1 SNR (which seems reasonable to me for tolerable shadow noise) requires a signal of at least 2^8=256 photons detected even if the noise generated within the camera is negligible, and to have that photon count 11 stops below maximum signal requires the ability to count up to 2^8*2^11=2^19 photons, a bit over 500,000. With a well capacity of about 32K or 2^15, the limit is seven stops above that 12dB threshold.


P. S. [Added later] It just occurred to me that you might be using the amplitude-referred use of dB (20*log10), so a factor of two in SNR is 6dB. Then the numbers are consistent, with 12dB meaning 4:1 SNR.  But I am not sure how good a local SNR as low as 4:1 can look even in very dark parts of the displayed image.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: RFPhotography on January 04, 2011, 01:33:44 pm
You do well. Next time you decide to be ironic to someone, make sure you have the needed resources.


It's not a matter of resources.  It's a matter that there's no point in trying to have a reasoned discussion with the hardheaded.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: hjulenissen on January 04, 2011, 02:29:10 pm
Regarding "anti-ghosting".

Motion compensation should have more potential than simply skewing the blending in favor of a single source image for some regions.

The problem of "leaves moving slightly in the wind" should be quite different from "camera movement", which again is different from "subject rotating, exposing a different side at different times".

-h
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Ray on January 04, 2011, 07:09:39 pm
But if a monitor could display something like 15 or 16 stops of brightness, that would be far better than what we have now and would, I'd think, cover a (large) majority of the HDR images being created.

Bob,
I'm having trouble getting my mind around this. It would seem to me that such a monitor, capable of displaying 15 or 16 stops of DR, would have to be so bright in order to display the brightest parts of an HDR capture, it would dazzle and hurt the eyes, unless the monitor were the size of a wall so that the eye could exclude the brighter parts as it focussed attention on the darker parts.

However, if the monitor were the size of a wall, the room would be so brightly lit that the shadows would appear like midtones.

My Panasonic plasma HDTV claims to have a contrast ratio of 2 million to 1. How many stops of DR is that? About 21?

I see the latest Panasonic models claim a CR of 5,000,000:1, 10/12 bit color depth, and 6,144 steps of gradation. Can anyone decipher these figures for me?

I think maybe in order to appreciate the maximum dynamic range of one's display, the viewing room needs to be essentially a 'black box', ie all walls, floor and ceiling painted non-reflective matte black.

I just came across a Wikipedia article in which it is claimed the retina has a static contrast ratio of only 6 1/2 stops. Here's the relevant extract.

Quote
The retina has a static contrast ratio of around 100:1 (about 6½ f-stops). As soon as the eye moves (saccades) it re-adjusts its exposure both chemically and geometrically by adjusting the iris which regulates the size of the pupil. Initial dark adaptation takes place in approximately four seconds of profound, uninterrupted darkness; full adaptation through adjustments in retinal chemistry (the Purkinje effect) are mostly complete in thirty minutes. Hence, a dynamic contrast ratio of about 1,000,000:1 (about 20 f-stops) is possible. The process is nonlinear and multifaceted, so an interruption by light merely starts the adaptation process over again.


Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: hjulenissen on January 05, 2011, 04:27:57 am
Bob,
I'm having trouble getting my mind around this. It would seem to me that such a monitor, capable of displaying 15 or 16 stops of DR, would have to be so bright in order to display the brightest parts of an HDR capture, it would dazzle and hurt the eyes, unless the monitor were the size of a wall so that the eye could exclude the brighter parts as it focussed attention on the darker parts.
If the camera-monitor reproduction chain was able to reproduce the original dynamic range of the scene, and the size/distance to the monitor was similar to the angle you would have observed if you were at the scene (or using binoculars mimicking the tele-lens used, if any), then the stimuli in the room should be similar to "being there". Of course, some real-life scenes have a DR that makes them visually hurtful or not pretty.

If the monitor fills only a part of your field of view, and the rest is filled with windows or walls of a very different brightness, there might be perceptual issues.
Quote
My Panasonic plasma HDTV claims to have a contrast ratio of 2 million to 1. How many stops of DR is that? About 21?

I see the latest Panasonic models claim a CR of 5,000,000:1, 10/12 bit color depth, and 6,144 steps of gradation. Can anyone decipher these figures for me?
I think that plasmas have a fantastic DR due to being able to fully turn a pixel 'off'. I also think that they cannot reproduce very dark grays just above that "black" because they are pulse-modulated, and a brightness slightly above 'off' would be perceived as flickering.
Quote

I think maybe in order to appreciate the maximum dynamic range of one's display, the viewing room needs to be essentially a 'black box', ie all walls, floor and ceiling painted non-reflective matte black.

Would not a display technology that did not reflect anything from the room be enough?

-k
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: thierrylegros396 on January 05, 2011, 06:32:39 am
Bob,
I'm having trouble getting my mind around this. It would seem to me that such a monitor, capable of displaying 15 or 16 stops of DR, would have to be so bright in order to display the brightest parts of an HDR capture, it would dazzle and hurt the eyes, unless the monitor were the size of a wall so that the eye could exclude the brighter parts as it focussed attention on the darker parts.

However, if the monitor were the size of a wall, the room would be so brightly lit that the shadows would appear like midtones.

My Panasonic plasma HDTV claims to have a contrast ratio of 2 million to 1. How many stops of DR is that? About 21?

I see the latest Panasonic models claim a CR of 5,000,000:1, 10/12 bit color depth, and 6,144 steps of gradation. Can anyone decipher these figures for me?


Marketing figures, just like the 178 ° Viewing Angle !!!  :D :D ;)

Thierry

Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Peter_DL on January 05, 2011, 05:27:24 pm

PS: BTW, the last Sony sensor used in the Pentax K5 and Nikon D7000 can capture in a single shot 11 stops of DR with acceptable noise (SNR=12dB criteria), and this technology translated to a FF sensor would mean no less than 12 effective stops. So your 8 stops figure is out of date with today's technology...
(http://www.guillermoluijk.com/article/perfect/dxomark.gif)

Interesting plot and lesson about the evolvement of capture DR.
So I'd expect that we are going to see more and more sliders for ('HDR') tone mapping in the Raw converter.
Some of it will be truly appreciated.

Peter

..
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: ErikKaffehr on January 06, 2011, 01:18:52 am
Hi,

Lightroom can do a pretty decent job.


Interesting plot and lesson about the evolvement of capture DR.
So I'd expect that we are going to see more and more sliders for ('HDR') tone mapping in the Raw converter.
Some of it will be truly appreciated.

This is a non-HDR image developed in Lightroom: http://echophoto.smugmug.com/Special-methods/HDR/HDR/13306153_DcZHj#1002864735_dkeci

and this is a HDR image using Merge to HDR in PSCS5:

http://echophoto.smugmug.com/Special-methods/HDR/HDR/13306153_DcZHj#966794997_wt4h6

Best regards
Erik



Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Ray on January 06, 2011, 09:28:14 am
If the camera-monitor reproduction chain was able to reproduce the original dynamic range of the scene, and the size/distance to the monitor was similar to the angle you would have observed if you were at the scene (or using binoculars mimicing the tele-lens used, if any), then the stimuli in the room should be similar to "being there". Of course, some real-life scenes have a DR that make them visually hurtfull or not pretty.

That doesn't quite make sense to me in light of the information I have gleaned from the internet regarding the eye's dynamic range. With a fixed gaze, Wikipedia claims 6 1/2 stops. Uwe Steinmuller claims 10 stops. Perhaps the differences in these estimates could be attributed to involuntary, semi-microsaccadic eye movements resulting in slight shifts in pupil aperture causing variability in dynamic range, or perhaps they could be attributed simply to general variability of human eyesight.

You must have noticed how your own eye behaves when focussing precisely on a specific part of the monitor, photograph or printed page or newspaper. The angle of view for precise focus and maximum clarity is surprisingly narrow; of the order of 2 degrees I believe, so even when viewing an 8x10" portrait hanging on the wall, from a distance which is not particularly close, say 1 metre, it's impossible to focus precisely on both eyes in the portrait simultaneously, without shifting one's own eyes slightly, from left to right; just as it's not possible to focus on the entire page of even a small book. One has to move the eyes as one scrolls down the page.

To give you a more graphic example, imagine a shot with a telephoto lens of a bird sitting on a branch, silhouetted against the enlarged, fiery ball of the setting sun.

The brightness of the sun would cause the eye's pupil to contract. The bird, with its extreme backlighting, would appear very dark. The color of its plumage would be undetectable, because the eye cannot simultaneously have a wide and a narrow aperture. Even when the eye's focus is precisely on the bird, through the camera's viewfinder, the brightness of the background sun will ensure the pupil's aperture remains small.

Supposing we decided to bracket exposure so we could see the full color of the bird's plumage in all its detail, say 9 exposures with a 1 EV interval giving us an additional 8 stops of DR, so that after merging to HDR the dynamic range in the image is a good 16 stops.

Supposing we display that HDR image on a monitor which has a DR capability of 16 stops. What would be the purpose if the eye can only encompass a DR of something between 6 1/2 and 10 F stops? Get my point?

On reflection, perhaps that's the point you were making all along. There's no point in having a monitor with a greater dynamic range than the eye can encompass within a certain angle of view that 'more or less' takes in the whole monitor, even though precise focus will involve a small amount of eye movement.

Quote
Would not a display technology that did not reflect anything from the room be enough?

Not sure. Here's what Wikipedia has to say on the advantages of glossy screens, assuming that the glossy screens have some degree of anti-glare coating.

Quote
In controlled environments, such as darkened rooms, or rooms where all light sources are diffused, glossy displays create more saturated colors, deeper blacks, brighter whites, and are sharper than matte displays. This is why supporters of glossy screens consider these types of displays more appropriate for viewing photographs and watching films. Also, in extremely bright conditions where no direct light is facing the screen, such as outdoors, glossy displays can become more readable than matte displays because they don't disperse the light around the screen (which would render a matte screen washed out).

Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Peter_DL on January 06, 2011, 12:35:44 pm
Lightroom can do a pretty decent job.

Fill Light is pretty cool.
Recovery may leave room for improvement (http://imagingpro.wordpress.com/2008/12/03/expanding-the-dynamic-range-of-a-single-raw-file/).

Peter

--
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Guillermo Luijk on January 06, 2011, 12:41:55 pm
Guillermo,
how do you get that figure of 11 stops? From what I have read, that sensor has a full well capacity of about 30,000 e-, so 11 stops down is a signal of about 16 e-, and then shot noise is 4 e- RMS, limiting SNR to 4:1. Is that figure of 12dB (16:1) computed only with respect to dark noise and read noise, not photon shot noise?

To put it another way, that target of 12dB or 16:1 SNR (which seems reasonable to me for tolerable shadow noise) requires a signal of at least 2^8 = 256 photons detected, even if the noise generated within the camera is negligible, and having that photon count 11 stops below maximum signal requires the ability to count up to 2^8 x 2^11 = 2^19 photons, a bit over 500,000. With a well capacity of about 32K, or 2^15, the limit is seven stops above that 12dB threshold.

P. S. [Added later] It just occurred to me that you might be using the strange "power referred" use of dB, so a factor of two in SNR is 6dB. Then the numbers are consistent, with 12dB meaning 4:1 SNR.  But I am not sure how good a local SNR as low as 4:1 can look even in very dark parts of the displayed image.

EDIT: yes, 12dB means linear SNR=4 in my calculations (dB=20*log(lin)). I created these synthetic noisy images (http://www.guillermoluijk.com/article/digitalp02/ruido_0db_12db.jpg) to find out how much noise 12dB and 0dB represent, and found 12dB to be the maximum acceptable. The images were created from noise-free images in a linear colour space: Gaussian noise was added to reach the desired StdDev, and the result was then converted to non-linear sRGB with a contrast curve applied, emulating what we do when processing our images.
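
A minimal sketch of that kind of synthetic test patch, assuming a uniform mid-grey level and a simple 2.2 gamma as a stand-in for the sRGB conversion and contrast curve:

Code:
import numpy as np

def noisy_patch(snr_db, level=0.18, size=256, gamma=2.2, seed=0):
    rng = np.random.default_rng(seed)
    sigma = level / 10 ** (snr_db / 20)            # dB = 20*log10(signal/noise)
    linear = level + rng.normal(0.0, sigma, (size, size))
    linear = np.clip(linear, 0.0, 1.0)
    return (linear ** (1.0 / gamma) * 255).astype(np.uint8)   # to display space

patch_12db = noisy_patch(12)   # borderline acceptable noise per the test above
patch_0db = noisy_patch(0)     # noise as strong as the signal itself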

The 11 stops figure is strictly based on measured SNR, including all kinds of noise since it comes from real captures of a ColorChecker card (read noise, photon noise, PRNU, ...). It is not a per-pixel DR though, but normalised to 12.7 Mpx (Canon 5D) of output resolution through simple statistics, for fair comparison with three other cameras.

These are the per-pixel SNR measurements (DR here would be 10.75 EV):
(http://www.guillermoluijk.com/tutorial/noisedr/curvassnr.gif)

And these after normalisation (DR becomes 11.2 EV):
(http://www.guillermoluijk.com/tutorial/noisedr/curvassnrnorm.gif)

No idea how this matches the electronic parameters of the sensor, but these were the SNR measurements I made; have a look at them here (http://www.guillermoluijk.com/download/ruidoyrangodinamico.xls).
At Sensorgen.info (http://www.sensorgen.info/) they calculated the full well capacity of the Pentax K5 to be 47,159 e-, with a read noise of 3.3 e- at base ISO:

At -11EV:
S = 47159 / 2^11 = 23.0 e-
read noise = 3.3 e-
photon noise = sqrt(23.0) = 4.8 e-
total noise = sqrt(3.3^2 + 4.8^2) = 5.8 e-
SNR = 20*log10(23.0/5.8) = 11.9dB
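
The same calculation as a small script, using only the quoted full-well and read-noise figures and assuming shot noise and read noise are the only noise sources:

Code:
import math

FULL_WELL = 47159.0   # electrons at saturation (Sensorgen.info figure for the K5)
READ_NOISE = 3.3      # electrons RMS at base ISO

def snr_db(stops_below_saturation):
    signal = FULL_WELL / 2 ** stops_below_saturation        # mean electrons
    photon_noise = math.sqrt(signal)                        # shot noise, e- RMS
    total_noise = math.sqrt(READ_NOISE ** 2 + photon_noise ** 2)
    return 20 * math.log10(signal / total_noise)

print(round(snr_db(11), 1))   # ~11.9 dB, i.e. a linear SNR of about 4:1 at -11EV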

Interesting plot and lesson about the evolution of capture DR.
So I'd expect that we are going to see more and more sliders for ('HDR') tone mapping in the Raw converter.
Some of it will be truly appreciated.

I agree with your comment on RAW developer abilities. In fact I would suggest that an enhanced approach could be desirable in the software. In the same way as the RAW developer is aware of the lens characteristics in order to correct distortion and CA, it could be aware of each particular sensor, estimate the usable captured DR, and calculate optimum settings without so much user intervention. A RAW file from a Pentax K5 at ISO80 shot of a high-DR scene has much more usable information than the same RAW file from an Olympus camera at ISO1600. I find the present approach of highlight and shadow recovery sliders a bit primitive.

Another DR plot to think about: which of the two top-selling brands has taken more care of DR in its cameras over the years? (The plotted lines represent the highest-DR APS-C camera from each brand at each point in time):

(http://www.guillermoluijk.com/article/perfect/dxomark2.gif)

Maybe Canon prioritized Mpx over DR too much.

Regards
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: RFPhotography on January 06, 2011, 01:11:49 pm
Fill Light is pretty cool.
Recovery may leave room for improvement (http://imagingpro.wordpress.com/2008/12/03/expanding-the-dynamic-range-of-a-single-raw-file/).

Peter

--

Really nice technique, Peter.  Like it!  Thanks.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: hjulenissen on January 06, 2011, 01:54:59 pm
Supposing we display that HDR image on a monitor which has a DR capability of 16 stops. What would be the purpose if the eye can only encompass a DR of something between 6 1/2 and 10 F stops? Get my point?
On the eye's adaptation:
http://en.wikipedia.org/wiki/Adaptation_(eye)
Quote
The human eye can function from very dark to very bright levels of light; its sensing capabilities reach across nine orders of magnitude. This means that the brightest and the darkest light signal that the eye can sense are a factor of roughly one billion apart. However, in any given moment of time, the eye can only sense a contrast ratio of one thousand.[citation needed] What enables the wider reach is that the eye adapts its definition of what is black. The light level that is interpreted as "black" can be shifted across six orders of magnitude—a factor of one million.

The eye takes approximately 20–30 minutes to fully adapt from bright sunlight to complete darkness and become ten thousand to one million times more sensitive than at full daylight. In this process, the eye's perception of color changes as well. However, it takes approximately five minutes for the eye to adapt to bright sunlight from darkness. This is due to cones obtaining more sensitivity when first entering the dark for the first five minutes but the rods take over after five or more minutes.[1]

I was striving for the simple goal of reproducing reality. If real scenes can have a large dynamic range, I would like all of that to be perfectly reproduced end-to-end. If we ever get there, we will see if it is worth it. I am certain that some scenes contain a large DR that I cannot reproduce using current non-HDR capture and display, but that I can make sense of when "being there". This suggests to me the potential in a high-DR reproduction system.

-h
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Ray on January 07, 2011, 08:09:57 pm
I was striving for the simple goal of reproducing reality. If real scenes can have a large dynamic range, I would like all of that to be perfectly reproduced end-to-end. If we ever get there, we will see if it is worth it. I am certain that some scenes contain a large DR that I cannot reproduce using current non-HDR capture and display, but that I can make sense of when "being there". This suggests to me the potential in a high-DR reproduction system.

To reproduce reality you would need a 3-D monitor or 3-D print for a start. However, the problem of insufficient dynamic range in the reproduction chain has already been solved for static subjects, using exposure bracketing.

Having captured the scene with its full dynamic range, the problem is not the lack of a monitor that can display that full range, but the lack of skill and technique in image processing: the captured range has to be compressed to match the compressed 'field of view' of the print or monitor, and the compressed dynamic range of the eye, which is reduced to a more or less fixed gaze when viewing the reproduction.

If one compresses the field of view in the reproduction, as any monitor must do when displaying a scene taken with even a moderately wide lens, it is appropriate in the interests of realism to compress the dynamic range as well, because the eye, when viewing the reproduction, does not have the opportunity to dilate and contract to the same degree as it did when viewing the original scene.

The 5,000,000:1 contrast ratio of a modern plasma screen should be sufficient, even allowing for a little marketing hyperbole  ;D .
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Dave Millier on January 08, 2011, 02:26:09 pm
The answer to my question is probably "You know nothing about how sensors work" but on the slight chance that this is wrong, may I ask this:

Instead of trying to lower noise and other tricks to increase DR, why can't photosites be emptied when they reach saturation and then refilled during a single exposure. As long as you keep count of how many times the reset is done (say by incrementing a counter), you can calculate the exposure each photosite received by adding the charge accumulated since the last reset to the number of resets multiplied by the well capacity. This ought to be capable of dealing effectively with any subject brightness range. And we wouldn't have to worry about shadow noise because the sensor could easily handle 5 stops of overexposure!

Go on, tell me why this is impossible.
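
A toy simulation of the idea, with an assumed full-well capacity purely for illustration; the reset count and the residual charge together recover the total exposure:

Code:
FULL_WELL = 30000   # electrons the well can hold before it has to be reset (assumed)

def expose(photon_flux_e_per_s, exposure_s, steps=10000):
    resets, charge = 0, 0.0
    dt = exposure_s / steps
    for _ in range(steps):
        charge += photon_flux_e_per_s * dt
        if charge >= FULL_WELL:        # well saturated: empty it and count the event
            charge -= FULL_WELL
            resets += 1
    return resets, charge

def total_exposure(resets, residual):
    return resets * FULL_WELL + residual   # reconstructed signal in electrons

r, c = expose(photon_flux_e_per_s=500000, exposure_s=0.5)
print(total_exposure(r, c))   # ~250000 e-, far beyond a single well's capacity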
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: LKaven on January 09, 2011, 01:16:29 am
The answer to my question is probably "You know nothing about how sensors work" but on the slight chance that this is wrong, may I ask this:

Instead of trying to lower noise and other tricks to increase DR, why can't photosites be emptied when they reach saturation and then refilled during a single exposure. As long as you keep count of how many times the reset is done (say by incrementing a counter), you can calculate the exposure each photosite received by adding the charge accumulated since the last reset to the number of resets multiplied by the well capacity. This ought to be capable of dealing effectively with any subject brightness range. And we wouldn't have to worry about shadow noise because the sensor could easily handle 5 stops of overexposure!

Go on, tell me why this is impossible.
I really feel that something /like this/ is coming, and that there is a whole class of dynamic capture methods that could be deployed.

Back-side illuminated sensors afford possibilities for stacking electronics on each photosite without compromising the light-gathering ability of the sensor.  I see the future in more sophisticated local processing on the sensor.
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: ErikKaffehr on January 09, 2011, 02:05:39 am
Comments below,

Erik

However CS5 has offered an impressive solution in HDR-2, with its 'Remove Ghosts' feature. This feature must be very useful for Psychics and Spiritualist Mediums who wish they could stop seeing ghosts.  ;D

:-)

Here's a scene of the living room of a friend I'm visiting over the Christmas/New Year break, and crops of the processed HDR images, with and without ghost removal.

Ah, you don't have snow?!

Now I ask you, are these images surrealistic? Untidy, maybe! But surrealistic?... no!

I'm very surprised and very impressed with the ghost removal result in this particular example. In order to reduce the possibility of movement as much as possible, I used ISO 1600 for these shots. Exposures varied from 1/3000th to 1/10th, and the shadows are still noisy. At the base ISO of the D700, the maximum exposure would have been a full second, improving SNR in the dark parts significantly but probably causing too much blur for the 'remove ghosts' feature to handle.

Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Ray on January 09, 2011, 06:00:49 am
Comments below,

Ah, you don't have snow?!

Erik


No, but we have rain. Lots and lots of it. I thank the Lord I don't have to suffer the cold winters of Europe.  ;D
Title: reading photosites part way through exposure
Post by: BJL on January 09, 2011, 11:22:04 pm
Instead of trying to lower noise and other tricks to increase DR, why can't photosites be emptied when they reach saturation and then refilled during a single exposure. ...
I think it could happen. A few ideas similar to this are being tried, but I have only heard of them being used in some security camera sensors.

One method I know of involves checking each well at various times during the exposure (say after 1/2000s, 1/1000s, 1/500s ...) and reading out just the ones that are close to full, using A/D conversion done at each photosite. The output of each photosite is then adjusted for its exposure time (the time sequence above, doubling at each step, means that the adjustment is just a bit shift.) The downside of that is needing an ADC at each photosite, probably limiting sensors to relatively few, large photosites.

Maybe a variant of the old "frame transfer" global shutter CCD approach could be used, to need fewer ADCs:
- each photosite has a light-masked storage (capacitor) next to the light sensitive area.
- when one of those intermediate scans detects that a well is at least half full, its charge is moved to the masked storage, and the time noted, and maybe a "drain" opened on the light-receiving well to stop further accumulation.
- at the end of the exposure, the signal in each light-masked storage is read, A/D converted, and the level scaled to allow for the different exposure times.
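
A rough sketch of how that readout and rescaling might work, assuming an ideal photosite and the doubling sequence of check times above (all numbers are illustrative, not taken from any real sensor):

Code:
import math

FULL_WELL = 30000                                       # electrons (assumed)
CHECK_TIMES = [1/2000, 1/1000, 1/500, 1/250, 1/125]     # seconds, doubling each step

def read_photosite(flux_e_per_s, full_exposure=1/125):
    for t in CHECK_TIMES:
        charge = min(flux_e_per_s * t, FULL_WELL)       # clips if it saturates before t
        if charge >= FULL_WELL / 2 or t == CHECK_TIMES[-1]:
            shift = round(math.log2(full_exposure / t)) # doublings left in the exposure
            return int(charge) << shift                 # rescale to the full exposure

print(read_photosite(1_000_000))    # dim site: read at the end, no rescaling (8000)
print(read_photosite(40_000_000))   # bright site: read early, rescaled 16x (320000)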
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: hjulenissen on January 10, 2011, 02:26:52 am
To reproduce reality you would need a 3-D monitor or 3-D print for a start.
There may be several aspects of reality reproduction. I don't see why the lack of stereoscopy should be an argument against striving for realistic dynamic range.
Quote
However, the problems of insufficient dynamic range in the reproduction chain has already been solved for static subjects, using exposure bracketing.
Bracketing only solves the capture problem, not the entire reproduction chain.
Quote
Having captured the scene with its full dynamic range, the problem is not the lack of a monitor which can display that full dynamic range, but the lack of skill and technique of image processing in order to compress that captured dynamic range to something that matches the compressed 'field of view' of the print or monitor, and the compressed dynamic range of the eye which is reduced to a 'more or less' fixed gaze when viewing that reproduction.
You are assuming that the monitor covers only a small field of view of the viewer. I don't think that your assumption is generally true. I went to the movies yesterday, and the big screen covered a substantial part of my FOV.
Quote
If one compresses the field of view in the reproduction, as any monitor must do when displaying any scene taken with only a moderately wide lens, it is appropriate in the interests of realism to compress the dynamic range, because the eye, when viewing the reproduction, does not have the opportunity to dilate and contract to the same degree as it did when viewing the original scene.
If that function is needed, it should be applied automatically, in the screen (as that is often the only component that has any idea of how large the viewer's FOV is). Large displays, projectors, or people sitting with their nose up against the monitor/paper should be able to cover close to 180 degrees of their view (with some artifacts).
Quote
The 5,000,000:1 contrast ratio of a modern plasma screen should be sufficient, even allowing for a little marketing hyperbole  ;D .
I am sceptical about all marketing.

Plasmas are usually limited to 2 megapixels. That may be an issue for critical applications if the image is to be seen very large.

The black point may be affected by incident light. In other words, your room may have to be painted black to come near the quoted DR. Further, I believe that the maximum brightness is not all that high on plasmas, giving further problems with other light sources, and possibly issues if the absolute brightness of a scene has perceptual relevance.

I have been told that plasmas can produce very black blacks, but that there is a "hole" in the tonal range between the blackest level and the next blackest. Supposedly this is connected to plasma inherently being a PWM technology with limited switching speed: turning a pixel "off" is easy, but turning it "nearly off" means having one bright cycle and many dark cycles, something that causes flickering. If they cannot produce a perceptually uniform gray scale from black to white, then all the DR in the world may not make them good for this application.

-h
Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: Ray on January 16, 2011, 10:02:32 am
There may be several aspects of reality reproduction. I don't see why the lack of stereoscopy should be an argument against striving for realistic dynamic range.

I've been away for a few days due to slight flooding problems.

I would never argue that because one aspect of reality is lacking one should not strive to get other aspects right. My point was that if reproduction of reality is your goal, rather than creating an image to your taste (which, although strongly based on the real scene because it's a photograph, is probably at least slightly fictitious in its post-processing), then a 3-D image may go further towards creating that sense of reality, of being there, than an extra couple of stops of DR.

Quote
Bracketing only solves the capture problem, not the entire reproduction chain.

Post processing is required for all images whether they are bracketed or not, unless one allows the camera to do the job. And of course, bracketing doesn't always solve the capture problem if there is movement either in the scene or of the camera.

Quote
You are assuming that the monitor covers only a small field of view of the viewer. I don't think that your assumption is generally true. I went to the movies yesterday, and the big screen covered a substantial part of my FOV.

Yes. It's generally true if one is referring to monitors for image processing. Big screens in the cinema, or big projections on the wall, could hardly be described as monitors for image processing. If the big screen in the cinema were to display the full dynamic range of the real scene in order to reproduce reality, you'd no longer be sitting in a darkened room. It would be like sitting in one's lounge at home looking out of a large window onto the prairie, with cowboys and Indians galloping by. Your lounge room would inevitably be very well lit with such a large window.

Quote
If that function is needed, it should be applied automatically, in the screen (as that is often the only component that has any idea of how large the viewer's FOV is). Large displays, projectors, or people sitting with their nose up against the monitor/paper should be able to cover close to 180 degrees of their view (with some artifacts).

The monitor can have no idea of the field of view from the viewer's perspective, which is dependent upon the distance between the viewer and the monitor as well as the FoV of the original scene. At the actual scene of a landscape shot, taken with a moderately wide-angle lens, it's necessary to turn one's head to some degree, either to the left and to the right, or up to the sky and down to the foreground, in order to focus clearly on each part of the scene.

When viewing that captured scene on a 24" monitor, or even a 65" TV from an appropriate viewing distance, a slight movement of the eyeballs is all that's required to encompass the entire FoV of the displayed picture. The further you are from the monitor, the less the movement of the eyeballs required.

Quote
Plasmas are usually limited to 2 megapixels. That may be an issue for critical applications if the image is to be seen very large.

Few monitors for image processing boast a higher resolution than 2mp, although I'm thinking of getting a 30" NEC model that claims a resolution of 2560x1600.

The larger the screen, the further away one can view it. The monitor I'm using to write this is a small 17" model which I'm viewing from a distance of around 2 ft. If I were using my 65" Plasma HDTV as a computer monitor, I'd be viewing it from a distance of 2-3 metres. A 2mp image on a small monitor viewed from a close distance will provide no more detail than the same 2mp image viewed on a larger monitor from an appropriately greater distance.
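
A quick way to see this is to compare the angular size of a pixel in the two cases; the screen widths, resolution and distances below are assumed round numbers, not measurements of my actual setups:

Code:
import math

def pixels_per_degree(screen_width_m, horizontal_pixels, distance_m):
    pixel_pitch = screen_width_m / horizontal_pixels
    pixel_angle = math.degrees(2 * math.atan(pixel_pitch / (2 * distance_m)))
    return 1 / pixel_angle

print(pixels_per_degree(0.37, 1920, 0.6))    # roughly a 17" screen at about 2 ft
print(pixels_per_degree(1.44, 1920, 2.33))   # roughly a 65" screen, distance scaled up
# Both come out at ~54 pixels per degree: the same resolution delivered to the eye.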

Quote
The black point may be affected by incident light. In other words, your room may have to be painted black to come near the quoted DR.

Absolutely correct! I made this point earlier in the thread. The lower the DR of the monitor or display, the darker the room needs to be in order to produce even a semblance of reality. It's why cinemas are darkened rooms, and it's why you need to darken your living room when using a video projector in place of a TV.

Quote
Further, I believe that the maximum brightness is not all that high on plasmas, giving further problems with other light sources, and possibly issues if the absolute brightness of a scene has perceptual relevance.

In my opinion, the maximum brightness of the plasma screen is totally sufficient. If it were any brighter it would cause eye strain. However, the blacks are more detailed on the plasma screen, provided that the viewing conditions are reasonably suitable.

Still images that have been processed in Photoshop, converted to sRGB, downsized, then saved as maximum-quality JPEGs look remarkably sharp and vibrant on a 65" plasma screen from a distance of about 10 ft or 3 metres. There's no sense of any loss of DR, shadow detail, or detail in general. In fact there's an increased sense of realism compared with a much higher resolution print of the same scene, of the same size, viewed from the same distance.

Quote
I have been told that plasmas can produce very black blacks, but that there is a "hole" in the tonal range between the blackest level and the next blackest. Supposedly this is connected to plasma inherently being a PWM technology with limited switching speed: turning a pixel "off" is easy, but turning it "nearly off" means having one bright cycle and many dark cycles, something that causes flickering. If they cannot produce a perceptually uniform gray scale from black to white, then all the DR in the world may not make them good for this application.

Who cares if there's a hole between the blackest black and the next blackest when you have a contrast ratio of 2 million to one (and now 5 million to one in the latest models)? Also, the refresh rate of these new Panasonic plasmas is 600Hz, which is the least common multiple of all the main video and movie frame rates, such as 24Hz, 25Hz, 30Hz, 50Hz and 60Hz. I've never noticed any flicker in any part of the display of any still image. Not even in the deepest shadows.
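
A one-liner confirms that 600 is indeed the least common multiple of those frame rates:

Code:
from math import gcd
from functools import reduce

rates = [24, 25, 30, 50, 60]
print(reduce(lambda a, b: a * b // gcd(a, b), rates))   # 600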

I see no problem here.

Title: Re: Uwe Steinmuller of DOP on dynamic range and HDR
Post by: PierreVandevenne on January 16, 2011, 08:17:29 pm
Instead of trying to lower noise and other tricks to increase DR, why can't photosites be emptied when they reach saturation and then refilled during a single exposure. Go on, tell me why this is impossible.

Not impossible, but there are a lot of issues. Having one ADC per pixel plus additional circuitry is one constraint. There are others at the manufacturing level, circuitry level, blooming control, non-homogeneity of the individual pixel ADCs, knowing what to do with the photons that arrive while the pixel is being read, etc...

Just keep in mind that DR is basically a signal-to-noise ratio issue: increasing signal or decreasing noise is not a "trick". In fact, what you suggest is increasing signal by increasing the effective well capacity by way of multiple exposures at the pixel level. One of the issues that multiple exposures introduce is multiple read noise. Therefore, the goal of lowering noise remains as important as, if not more important than, in the single-exposure scenario. But yes, having a virtually higher well capacity on the high side, where read noise matters less, could be beneficial in practice because it would be transparent to the photographer (but so would automatic transparent bracketing).
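
A back-of-the-envelope illustration of the multiple-read-noise point, assuming each of N readouts contributes the same independent read noise (the 3 e- figure is an assumed typical value, not a measurement of any particular sensor):

Code:
import math

READ_NOISE_SINGLE = 3.0   # e- RMS per readout (assumed)

def combined_read_noise(n_readouts):
    # independent noise sources add in quadrature, so the total grows as sqrt(N)
    return READ_NOISE_SINGLE * math.sqrt(n_readouts)

for n in (1, 4, 16):
    print(n, round(combined_read_noise(n), 1))   # 3.0, 6.0, 12.0 e-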

They are doing interesting stuff with sensors though.

From light to light on that one for example.
http://www.freepatentsonline.com/y2010/0091128.html