
Author Topic: DR, DxO, DSLR, MFDB, CMOS, CCD  (Read 32172 times)

joofa

  • Sr. Member
  • Posts: 544
DR, DxO, DSLR, MFDB, CMOS, CCD
« Reply #80 on: July 12, 2010, 01:19:08 pm »

Quote from: bjanes
Four to six f/stops of DR is 16 and 64 times respectively, and such an advantage is unlikely from a mere 2.5 x in sensor area.

The following comment is not necessarily about the "DR difference" between the D3X and P65+ per se, but for more general edification. Resolution is one reason for perceived DR differences, as you rightly point out. However, other issues can cause DR differences even within the same technology. An important one is slew rate: the clocking used to read pixels on certain sensors limits how much the signal can change within a readout period, and hence DR is substantially reduced. There are a few other factors under the hood that can have a considerable impact on DR, but I shall avoid going into them at this point; I just wanted to point out that resolution difference is not the be-all and end-all of DR.

Joofa
« Last Edit: July 12, 2010, 01:20:47 pm by joofa »
Logged
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

ErikKaffehr

  • Sr. Member
  • Posts: 11311
    • Echophoto
DR, DxO, DSLR, MFDB, CMOS, CCD
« Reply #81 on: July 12, 2010, 01:34:13 pm »

Hi,

In my view:

There is a standard definition of DR: (maximum signal)/(signal at SNR = 1). This essentially translates to (well capacity)/(read noise), both normally measured in electrons, where SNR = 1 means the signal equals the noise. This is based on signal processing theory. Now, it can be argued that image quality at SNR = 1 is not very useful photographically, but we need to keep in mind that this region would be very dark and probably much compressed in print. A very good reason to keep this definition of DR is that it is the accepted one.
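To put rough numbers on that definition, here is a minimal Python sketch; the full-well capacity and read noise figures are made up for illustration, not measurements of any particular sensor:

```python
import math

# Hypothetical sensor figures (illustrative assumptions, not real data)
full_well = 60000   # electrons at saturation (maximum signal)
read_noise = 15     # electrons RMS (noise floor, i.e. signal at SNR = 1)

# Engineering dynamic range: maximum signal over the noise floor
dr_ratio = full_well / read_noise
dr_stops = math.log2(dr_ratio)

print(f"DR = {dr_ratio:.0f}:1, or {dr_stops:.1f} stops")
# → DR = 4000:1, or 12.0 stops
```

Halving the read noise would add one stop of DR under this definition, which is why read noise matters so much at the bottom end.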

As said before, the lower threshold here will be noisy. We could use a higher criterion, like SNR = 16 (signal is 16 times the noise); in this region readout noise would not dominate and shot noise would be more important. Shot noise is the randomness of the light itself, so it is equal for all sensors of the same size and quantum efficiency. Quadrupling the area of the sensor would give a factor of two in SNR, all other parameters assumed constant. This is exactly where the expected "maximum one stop advantage" comes into play. So whatever criterion we choose, the ratio between a large sensor (MFDB) and a smaller one (DSLR) will be about the same.

In short:

- DR at SNR=1 is the normal definition used in signal processing.

- Were we to choose another SNR threshold, it would matter little, because the noise will essentially depend on photon statistics.

I'm not an expert on this, just trying to put it simply!

Best regards
Erik


Quote from: fredjeang
In what frame should we all use the term DR, in order to be sure we are all taking the same reference point?
In other words, what should we strictly understand by the term DR, and what is accepted by everybody as a reliable measurement standard?

If we cannot reach common ground on a standard, then weird differences will show up all the time.

IMO.
« Last Edit: July 12, 2010, 03:19:33 pm by ErikKaffehr »
Logged
Erik Kaffehr
 

ErikKaffehr

  • Sr. Member
  • Posts: 11311
    • Echophoto
DR, DxO, DSLR, MFDB, CMOS, CCD
« Reply #82 on: July 12, 2010, 01:48:35 pm »

Hi,

This example may demonstrate the need for DR:

[attachment=23101:_DSC5560.jpg]

Full image: http://echophoto.dnsalias.net//ekr/...KarerSee_01.jpg

The trees are in shadow and the mountainside has snow illuminated by bright sunlight. Initially I intended to use HDR for the image, but found that I could extract about all the information I needed from the least-exposed frame. Admittedly, it could be done better; the forest behind the lake is a bit flat.

The image was shot on APS-C (Sony Alpha 100), not a camera that really excels in DR...

Best regards
Erik

Quote from: John R Smith
Yes Jeff and Eric

I was somewhat off-topic here, and my apologies. However, I know this debate is all terribly compelling and so on (and we seem to keep having it, in one form or another), but what I don't understand is why it seems to matter so much to all you chaps. I mean, what has it actually got to do with real-world photography? We have always had less DR in the negative and in the print than our eyes can see in the real world. A stop here or there between one film and another or one sensor and another may be really interesting to the anoraks of this world, but once you are out there climbing over hedges and struggling through the brambles, a bit of cloud cover to the north will fill the shadows and make more difference to your DR than the ruddy sensor ever will.

As photographers we spend our time working with and considering the quality of light. Sometimes we can control it, in a studio; sometimes we choose not to and grapple with the light nature bestows upon us. It seems to me that all the DSLRs and MF backs and films made today have plenty of DR for pictorial photography. Unless perhaps you always work in the middle of the Arizona desert at noon under a cloudless sky. A certain amount of dynamic compression is what makes a photograph look like a photograph - it is part of the style and visual language of photography. Which is perhaps why these HDR images we see now look so horrible.

John
« Last Edit: July 12, 2010, 01:52:09 pm by ErikKaffehr »
Logged
Erik Kaffehr
 

joofa

  • Sr. Member
  • Posts: 544
DR, DxO, DSLR, MFDB, CMOS, CCD
« Reply #83 on: July 12, 2010, 03:25:13 pm »

Quote from: ErikKaffehr
DR at SNR=1 is the normal definition used in signal processing.

Yes, this is the right definition to use. However, in practice both the maximum signal and the lowest perceptible signal depend on extrinsic circuit elements such as the voltages applied, clocking, etc.

Quote from: ErikKaffehr
Would we choose another SNR, it would matter little, because noise will essentially depend on photon statistics.

The notion of SNR is not properly defined/interpreted for a single image, IMHO. If the count associated with a certain pixel is 20,000, then what is the noise? Is it sqrt(20,000)? Of course not. Because if you knew it was sqrt(20,000), you could just use this fact to calculate the actual signal, with some heuristic for the sign of the noise. The issue is that for a given image and a given pixel location, the count is just a number. From a single number you can't figure out the noise. The reason the statistics generated by, say, Roger Clark get away with this is that while the actual "signal" component associated with a count of 20,000 is unknown, it is typically not very far from 20,000, and Poisson statistics let us take 20,000 as the signal even though in actuality it was not. However, this approximation will not work at low signal levels, as the sqrt becomes an increasingly appreciable portion of the signal.

Then what is the meaning of "image SNR"? Well, what often happens in statistics is that you use an area statistic to stand in for a point statistic. The SNR for a given pixel in a single image (a point property) is not known, but let's expand that notion to the area around it, examine the pixel values, and try to reason about the SNR of a fixed "patch of image" given the pixel intensities of the neighboring pixels. That will work if the local neighborhood is known to be roughly uniform, so that even if you can't figure out the "global image SNR" you have at least some notion of the local SNR - the abuse of such concepts being the so-called "shadow SNR", "highlight SNR", and what not.
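As a toy illustration of that patch idea, here is a sketch using simulated Poisson counts (the 20,000-electron mean and the 16x16 patch size are made-up assumptions): area statistics over a locally uniform patch stand in for the per-pixel statistics we cannot observe.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a locally uniform patch: every pixel draws from Poisson(20000)
true_mean = 20000
patch = rng.poisson(true_mean, size=(16, 16))

# Area statistics as a stand-in for the unknowable point statistics
mean_est = patch.mean()
std_est = patch.std(ddof=1)

print(f"estimated local SNR = {mean_est / std_est:.1f}")
print(f"sqrt(mean estimate) = {np.sqrt(mean_est):.1f}  (Poisson prediction for the std. dev.)")
```

On a real image this only works where the neighborhood really is uniform; across edges or texture the spread of pixel values is scene content, not noise.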

Quote from: ErikKaffehr
Quadrupling the size of the sensor would give a factor of two on SNR all other parameters assumed to be constant.

I think it is about time we got rid of that myth, since it is not applicable to natural images, IMHO.

Joofa
Logged
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

ErikKaffehr

  • Sr. Member
  • Posts: 11311
    • Echophoto
DR, DxO, DSLR, MFDB, CMOS, CCD
« Reply #84 on: July 12, 2010, 03:36:01 pm »

Hi,

I don't mind standing corrected, but I tried to answer a question from "fredjeang" as well as I could. Would it be possible to give a short answer, decently good, that can be understood by a photographer without a junior degree in statistics or signal processing?

Best regards
Erik


Quote from: joofa
Yes, this is the right definition to use. However, in practice both the maximum signal and the lowest perceptible signal depend on extrinsic circuit elements such as the voltages applied, clocking, etc.

The notion of SNR is not properly defined/interpreted for a single image, IMHO. If the count associated with a certain pixel is 20,000, then what is the noise? Is it sqrt(20,000)? Of course not. Because if you knew it was sqrt(20,000), you could just use this fact to calculate the actual signal, with some heuristic for the sign of the noise. The issue is that for a given image and a given pixel location, the count is just a number. From a single number you can't figure out the noise. The reason the statistics generated by, say, Roger Clark get away with this is that while the actual "signal" component associated with a count of 20,000 is unknown, it is typically not very far from 20,000, and Poisson statistics let us take 20,000 as the signal even though in actuality it was not. However, this approximation will not work at low signal levels, as the sqrt becomes an increasingly appreciable portion of the signal.

Then what is the meaning of "image SNR"? Well, what often happens in statistics is that you use an area statistic to stand in for a point statistic. The SNR for a given pixel in a single image (a point property) is not known, but let's expand that notion to the area around it, examine the pixel values, and try to reason about the SNR of a fixed "patch of image" given the pixel intensities of the neighboring pixels. That will work if the local neighborhood is known to be roughly uniform, so that even if you can't figure out the "global image SNR" you have at least some notion of the local SNR - the abuse of such concepts being the so-called "shadow SNR", "highlight SNR", and what not.

I think it is about time we got rid of that myth, since it is not applicable to natural images, IMHO.

Joofa
Logged
Erik Kaffehr
 

joofa

  • Sr. Member
  • Posts: 544
DR, DxO, DSLR, MFDB, CMOS, CCD
« Reply #85 on: July 12, 2010, 03:53:10 pm »

Hi Erik,

I think I did provide the short answer and I repeat it below:

Quote from: joofa
The notion of SNR is not properly defined/interpreted for a single image, IMHO.

I.e., it is difficult to apply the concept of SNR to a single natural image using the definition you provided, unless some extra constraints are put in.

Joofa
Logged
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

bjanes

  • Sr. Member
  • Posts: 3387
DR, DxO, DSLR, MFDB, CMOS, CCD
« Reply #86 on: July 12, 2010, 04:16:04 pm »

Quote from: joofa
The notion of SNR is not properly defined/interpreted for a single image, IMHO. If the count associated with a certain pixel is 20,000, then what is the noise? Is it sqrt (20,000)? Of course, not. Because, if you knew it was sqrt (20,000), you can just use this fact to calculate the actual signal, with some heuristic on the sign of noise.

Joofa,

Your post makes no sense to me. You do not seem to realize that probability and statistics apply to populations, not individuals. If the frequency of cancer in a certain population is 1 in 20, then the probability that a certain individual has cancer is 5%. However the patient either has cancer or doesn't have cancer.

Regards,

Bill

Logged

joofa

  • Sr. Member
  • Posts: 544
DR, DxO, DSLR, MFDB, CMOS, CCD
« Reply #87 on: July 12, 2010, 04:28:13 pm »

Quote from: bjanes
Joofa,

Your post makes no sense to me. You do not seem to realize that probability and statistics apply to populations, not individuals. If the frequency of cancer in a certain population is 1 in 20, then the probability that a certain individual has cancer is 5%. However the patient either has cancer or doesn't have cancer.

Regards,

Bill

Bill,

First of all, in this case there is a notion of an "instantaneous" value of noise, i.e., each pixel count has a certain noise value which we don't know, but whose long-term statistics we do know. Secondly, I think you have just made my point: the notion of sqrt in Poisson statistics applies to ensembles (populations) and not to a specific count in a single image at a given pixel location (individuals).

Haven't we seen people taking the sqrt of the count associated with a given pixel in an image as the "noise", even on this forum, not realizing that the concept of sqrt applies to ensembles and not to single numbers? I.e., if I am given a large number of images of the same scene under the same lighting conditions, etc., then for a given pixel location I have a sequence of numbers in the temporal domain, and terms such as mean/std-dev/etc. have a proper meaning.

For a given pixel in a single image we just have a number: a count. It is just a sampled value from an underlying distribution, and from a single number we can't infer things such as the underlying mean, std-dev, etc. But people do it all the time. And they get away with it because for large counts the sqrt is an increasingly small proportion of the count. So if we have a count of 20,000, just take that as the mean value of the underlying Poisson distribution; in actuality it may not be 20,000, but it is close to it, so the sqrt is not that far off.
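The ensemble-versus-single-number distinction is easy to simulate (the mean of 20,000 is an assumed value, and the ensemble plays the role of "many images of the same scene"):

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean = 20000

# Ensemble: the same pixel location observed across many hypothetical images
ensemble = rng.poisson(true_mean, size=10000)
print(f"ensemble std. dev. = {ensemble.std():.1f}")
print(f"sqrt(true mean)    = {np.sqrt(true_mean):.1f}")

# A single image gives one draw; the sqrt of that one count is only an
# approximation to the ensemble std. dev., not the noise "on" that pixel
single_count = ensemble[0]
print(f"sqrt(single count) = {np.sqrt(single_count):.1f}")
```

The first two printed numbers agree closely; the third is close only because a single high count rarely strays far from the mean, which is exactly the approximation described above.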

Sincerely,

Joofa
« Last Edit: July 12, 2010, 06:14:54 pm by joofa »
Logged
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

bjanes

  • Sr. Member
  • Posts: 3387
DR, DxO, DSLR, MFDB, CMOS, CCD
« Reply #88 on: July 12, 2010, 08:17:15 pm »

Quote from: joofa
Bill,

First of all, in this case there is a notion of an "instantaneous" value of noise, i.e., each pixel count has a certain noise value which we don't know, but whose long-term statistics we do know. Secondly, I think you have just made my point: the notion of sqrt in Poisson statistics applies to ensembles (populations) and not to a specific count in a single image at a given pixel location (individuals).

Haven't we seen people taking the sqrt of the count associated with a given pixel in an image as the "noise", even on this forum, not realizing that the concept of sqrt applies to ensembles and not to single numbers? I.e., if I am given a large number of images of the same scene under the same lighting conditions, etc., then for a given pixel location I have a sequence of numbers in the temporal domain, and terms such as mean/std-dev/etc. have a proper meaning.

For a given pixel in a single image we just have a number: a count. It is just a sampled value from an underlying distribution, and from a single number we can't infer things such as the underlying mean, std-dev, etc. But people do it all the time. And they get away with it because for large counts the sqrt is an increasingly small proportion of the count. So if we have a count of 20,000, just take that as the mean value of the underlying Poisson distribution; in actuality it may not be 20,000, but it is close to it, so the sqrt is not that far off.


Sincerely,

Joofa

Joofa,

The pixel with a count of 20,000 subtends and represents an area in the subject, and the count represents the luminance of that area during the time of the exposure. The illumination of the subject is subject to random variations following a Poisson distribution, and the sensor count representing this area would vary with repeated exposures even though the subject and exposure parameters are held constant. Shot noise is in the light and exists even before the light hits the sensor. However, the count of 20,000 is a point estimate of the mean illuminance, and the square root of 20,000 is an estimate of the standard deviation. You are correct that the derived values are only estimates of these parameters, but as the sample size increases, the accuracy of these estimates increases. I think that these differences are well understood by most observers and you are splitting hairs. Look up Standard Error and Standard Error of the Mean.

If the luminance represented by several adjacent pixels is the same (as with a clear blue sky or uniformly illuminated wall), the image pixels would have differing values and this would appear as noise.

Regards,

Bill
 


Logged

joofa

  • Sr. Member
  • Posts: 544
DR, DxO, DSLR, MFDB, CMOS, CCD
« Reply #89 on: July 12, 2010, 09:39:47 pm »

Quote from: bjanes
The pixel with a count of 20,000 subtends and represents an area in the subject, and the count represents the luminance of that area during the time of the exposure. The illumination of the subject is subject to random variations following a Poisson distribution, and the sensor count representing this area would vary with repeated exposures even though the subject and exposure parameters are held constant.

Hi Bill,

What random variations and what repeated exposures? There is a single number here for a given pixel, associated with a single exposure resulting in a single image. If I provide you with a single image, and the first pixel of that image has a count of 20,000, and I then ask you what the noise value on that pixel is, what is the answer? If you follow the path where people just take sqrts, you would be tempted to believe that sqrt(20,000) is the exact value of the noise on that pixel, so you have two answers: 20,000 - sqrt(20,000) = 19,859, or 20,000 + sqrt(20,000) = 20,141. You can pick either 19,859 or 20,141 and say that is the signal. Let's say I arbitrarily consider 19,859 the right answer and declare victory; now this is the pure signal that I have figured out. I have no noise, so why do I worry about SNR now?

Or you can take the right approach, which is that the sqrt just represents a bound on some measure of variation of the pixel value (std. dev.), as you have rightly pointed out, resulting from multiple exposures. But here is the catch: I just gave you a single image (a single exposure); how do I determine a std. dev. from just one sample? The fact is that a given pixel in a single image is just one sample, and I can't have a technically legal notion of SNR with it, at least in the usual meaning.

The third option is pragmatic. I know that I have one sample, so I can't calculate a std. dev. However, I do know that for relatively high illumination, if I had taken several images of the same scene instead of one, I would have found that the value of the first pixel in each of them varies a little about 20,000, but is always close to 20,000. I can take this sequence, determine its mean and std. dev., and find that the std. dev. is close to the sqrt of 20,000. So I have solved my problem: I shall not worry about acquiring several images to determine this std. dev.; as an approximation I shall just take the sqrt of a single number, 20,000, which is the pixel count for the first pixel of a single image, and take that as an approximate value of the actual std. dev. for the noise statistics. However, with this understanding: the number sqrt(20,000) is not the approximate value of the noise on the first pixel in the first image, but an average measure of the noise on that pixel had I acquired a large number of images. So I still don't know the exact value of the noise on the first pixel in the first image, but I know that it will rarely exceed sqrt(20,000).

Still, this approximation will only work for higher illumination, and will not be good for low-light images.
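The reason the approximation degrades at low light falls out of the relative shot noise, sqrt(N)/N = 1/sqrt(N) (the counts below are arbitrary illustrative values):

```python
import math

# Relative shot noise shrinks with the count, so treating a single count
# as the Poisson mean is safe in highlights and increasingly wrong in shadows
for count in (20000, 400, 25, 4):
    rel_noise = 1 / math.sqrt(count)
    print(f"count {count:>6}: sqrt(N)/N = {rel_noise:.1%}")
# count  20000: sqrt(N)/N = 0.7%
# count    400: sqrt(N)/N = 5.0%
# count     25: sqrt(N)/N = 20.0%
# count      4: sqrt(N)/N = 50.0%
```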

Quote from: bjanes
Shot noise is in the light and exists even before the light hits the sensor. However, the count of 20,000 is a point estimate of the mean illuminance, and the square root of 20,000 is an estimate of the standard deviation. You are correct that the derived values are only estimates of these parameters, but as the sample size increases, the accuracy of these estimates increases.

There is no sample-size issue here. We have a sample size of N = 1: a single number, 20,000, which is the count associated with the first pixel in the first image. And since I don't have multiple images, I can't observe a sequence for N > 1. Of course, there is an easy way out: instead of the temporal direction, if I collect pixel counts spatially, then even in a single image I have a whole bunch of samples (millions). But here is the catch: do all of these samples come from the same underlying Poisson distribution? The problem is that for a natural image the answer is no. So I can't take the std. dev. of all the pixels in an image and declare that to be the std. dev. of the sequence of counts that the first pixel would have taken across multiple images of the same scene, had I acquired more than one.

Quote from: bjanes
I think that these differences are well understood by most observers and you are splitting hairs. Look up Standard Error and Standard Error of the mean.

What splitting of hairs? Bill, there is only a single sample here, so the notions of increasing accuracy and standard error are not even applicable!

Quote from: bjanes
If the luminance represented by several adjacent pixels is the same (as with a clear blue sky or uniformly illuminated wall), the image pixels would have differing values and this would appear as noise.

Now we are onto something. And I did point this out, as expanding the notion of a point statistic to an area statistic, in an earlier message. However, as I said before, this has only local utility, because for it to extend to the full image would require the image to be almost a flat field, and not a natural, real image of the kind we capture with our cameras - images of people, vegetation, cats, dogs, sky, water, anything...

Sincerely,

Joofa
« Last Edit: July 12, 2010, 11:29:50 pm by joofa »
Logged
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

ErikKaffehr

  • Sr. Member
  • Posts: 11311
    • Echophoto
DR, DxO, DSLR, MFDB, CMOS, CCD
« Reply #90 on: July 13, 2010, 04:18:56 am »

Hi,

I don't get your point.

Let's assume that we have a uniform, evenly illuminated surface imaged onto, say, 100x100 pixels, that is, 10,000 pixels. These 10,000 pixels are independent in exposure. So we have 10,000 exposures with an average value of 20,000 and an assumed std. dev. of 141. So the signal is 20,000, the noise 141, and the SNR = 141. Let's reduce the exposure to 200; now we have noise of about 14 from photon statistics, but we also need to take readout noise into account.

If we assume the readout noise to be 10 electrons and add the noise sources in quadrature, we get a noise level of about 17 electrons, so the SNR would be 200/17, that is, about 12.
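Erik's arithmetic can be checked in a few lines (the 10-electron read noise is his assumed figure, not a measured one):

```python
import math

signal = 200                      # electrons collected
shot_noise = math.sqrt(signal)    # ~14.1 e-, Poisson shot noise
read_noise = 10                   # e-, assumed readout noise

# Independent noise sources add in quadrature
total_noise = math.sqrt(shot_noise ** 2 + read_noise ** 2)
snr = signal / total_noise

print(f"total noise = {total_noise:.1f} e-, SNR = {snr:.1f}")
# → total noise = 17.3 e-, SNR = 11.5
```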

Simplistic?

Best regards
Erik


Quote from: joofa
Hi Bill,

What random variations and what repeated exposures? There is a single number here for a given pixel, associated with a single exposure resulting in a single image. If I provide you with a single image, and the first pixel of that image has a count of 20,000, and I then ask you what the noise value on that pixel is, what is the answer? If you follow the path where people just take sqrts, you would be tempted to believe that sqrt(20,000) is the exact value of the noise on that pixel, so you have two answers: 20,000 - sqrt(20,000) = 19,859, or 20,000 + sqrt(20,000) = 20,141. You can pick either 19,859 or 20,141 and say that is the signal. Let's say I arbitrarily consider 19,859 the right answer and declare victory; now this is the pure signal that I have figured out. I have no noise, so why do I worry about SNR now?

Or you can take the right approach, which is that the sqrt just represents a bound on some measure of variation of the pixel value (std. dev.), as you have rightly pointed out, resulting from multiple exposures. But here is the catch: I just gave you a single image (a single exposure); how do I determine a std. dev. from just one sample? The fact is that a given pixel in a single image is just one sample, and I can't have a technically legal notion of SNR with it, at least in the usual meaning.

The third option is pragmatic. I know that I have one sample, so I can't calculate a std. dev. However, I do know that for relatively high illumination, if I had taken several images of the same scene instead of one, I would have found that the value of the first pixel in each of them varies a little about 20,000, but is always close to 20,000. I can take this sequence, determine its mean and std. dev., and find that the std. dev. is close to the sqrt of 20,000. So I have solved my problem: I shall not worry about acquiring several images to determine this std. dev.; as an approximation I shall just take the sqrt of a single number, 20,000, which is the pixel count for the first pixel of a single image, and take that as an approximate value of the actual std. dev. for the noise statistics. However, with this understanding: the number sqrt(20,000) is not the approximate value of the noise on the first pixel in the first image, but an average measure of the noise on that pixel had I acquired a large number of images. So I still don't know the exact value of the noise on the first pixel in the first image, but I know that it will rarely exceed sqrt(20,000).

Still, this approximation will only work for higher illumination, and will not be good for low-light images.

There is no sample-size issue here. We have a sample size of N = 1: a single number, 20,000, which is the count associated with the first pixel in the first image. And since I don't have multiple images, I can't observe a sequence for N > 1. Of course, there is an easy way out: instead of the temporal direction, if I collect pixel counts spatially, then even in a single image I have a whole bunch of samples (millions). But here is the catch: do all of these samples come from the same underlying Poisson distribution? The problem is that for a natural image the answer is no. So I can't take the std. dev. of all the pixels in an image and declare that to be the std. dev. of the sequence of counts that the first pixel would have taken across multiple images of the same scene, had I acquired more than one.

What splitting of hairs? Bill, there is only a single sample here, so the notions of increasing accuracy and standard error are not even applicable!

Now we are onto something. And I did point this out, as expanding the notion of a point statistic to an area statistic, in an earlier message. However, as I said before, this has only local utility, because for it to extend to the full image would require the image to be almost a flat field, and not a natural, real image of the kind we capture with our cameras - images of people, vegetation, cats, dogs, sky, water, anything...

Sincerely,

Joofa
Logged
Erik Kaffehr
 

John R Smith

  • Sr. Member
  • Posts: 1357
  • Still crazy, after all these years
DR, DxO, DSLR, MFDB, CMOS, CCD
« Reply #91 on: July 13, 2010, 04:40:33 am »

Quote from: JeffKohn
Personally, I think there's more to photography than just the soft, filtered light of the magic hour, but that's what most Velvia photographers limited themselves to, because they didn't really have a choice with such a contrasty film. And even in those conditions, the scene contrast often required using grad filters, which IMHO look stupid when their use is apparent (which is more often than not). The 6-stop range of slide film was not a positive; it was something photographers had to make do with because they wanted the other benefits of slides.

I think all kinds of light can be "good" light, depending on the conditions and your subject. Having more DR to handle the contrast of more scenes is a good thing. You can always increase contrast in post if you want, but you can't bring back what was lost, so given a choice I'd rather have more DR than less.

You won't get any argument from me about overdone HDR, but that really has nothing to do with camera DR, that's just a matter of bad processing (and maybe poor taste).

Bottom line is that with the D3x, I don't have to bracket multiple exposures as often as with my old cameras, which is definitely a benefit.

Jeff

Thanks for the thoughtful reply. Gosh, did slide film really only have 6 stops of DR? I always seemed to do alright with it provided that I kept the sun over my shoulder, and I never bothered to bracket. But here's an interesting point, which might just be relevant to your topic -

Back then (in the '70s and '80s) I was shooting both 35mm and MF. But it always seemed to me that the MF transparencies had more exposure latitude (and hence DR) than the 35mm. The MF slides certainly seemed less fussy about metering and my percentage of keepers was higher. However, they couldn't have had more DR, in reality, because it was exactly the same emulsion in both cameras (Ektachrome). The picture editors back then always insisted on MF though.

Thinking about the main thrust of this topic, I can't personally believe that my MF digital back has huge amounts more DR than a 35mm DSLR. After all, 4 stops, let alone 6 stops, is a massive difference - you would see it straight away. It would be like using fill-in flash or having a huge reflector set up all the time. The shots from my MF back just look like photographs - the balance between shadows and highlights is pretty much what I would expect from any camera.

What I do notice though, is that these 39 MP MF files seem to have a great deal of latitude in post-processing, which is where the confusion may arise. Not in the highlight areas so much, where it is still easy to blow them and recovery be of no use, but in the shadow areas. It depends on the specific subject and file, of course, but it is possible to push the shadow areas in some cases by +2 EV or more and the result will be amazingly good, at least at 50 and 100 ISO. But this is not strictly DR, is it? DR is measured from an unedited exposure, as I understand it.

John
Logged
Hasselblad 500 C/M, SWC and CFV-39 DB

John R Smith

  • Sr. Member
  • Posts: 1357
  • Still crazy, after all these years
DR, DxO, DSLR, MFDB, CMOS, CCD
« Reply #92 on: July 13, 2010, 05:05:53 am »

Quote from: ErikKaffehr
Hi,

This example may demonstrate the need for DR:

[attachment=23101:_DSC5560.jpg]

Full image: http://echophoto.dnsalias.net//ekr/...KarerSee_01.jpg

The trees are in shadow and the mountainside has snow illuminated by bright sunlight. Initially I intended to use HDR for the image, but found that I could extract about all the information I needed from the least-exposed frame. Admittedly, it could be done better; the forest behind the lake is a bit flat.

The image was shot on APS-C (Sony Alpha 100), not a camera that really excels in DR...

Best regards
Erik

Erik

With all due respect, I think this is a silly example. There are obvious limits to what photography can successfully achieve. If one were a painter, you might set up your easel and do something with this. As a photographer, I would take one look at this subject and think "No way, forget it". If my eyes were unsure, my meter would set me straight. In this case, you don't need more DR, just more common sense

John
Logged
Hasselblad 500 C/M, SWC and CFV-39 DB

Bart_van_der_Wolf

  • Sr. Member
  • Posts: 8914
DR, DxO, DSLR, MFDB, CMOS, CCD
« Reply #93 on: July 13, 2010, 05:22:53 am »

Quote from: John R Smith
In this case, you don't need more DR, just more common sense

Or HDR ...  

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8914
DR, DxO, DSLR, MFDB, CMOS, CCD
« Reply #94 on: July 13, 2010, 05:27:56 am »

Quote from: John R Smith
What I do notice, though, is that these 39 MP MF files seem to have a great deal of latitude in post-processing, which is where the confusion may arise. Not so much in the highlight areas, where it is still easy to blow them beyond recovery, but in the shadow areas. It depends on the specific subject and file, of course, but in some cases it is possible to push the shadows by +2 EV or more and the result will be amazingly good, at least at 50 and 100 ISO. But this is not strictly DR, is it? DR is measured from an unedited exposure, as I understand it.

Hi John,

That's correct, it's tonemapping, but having a lot of capture DR helps to keep the image from falling apart in the shadows. It also helps achieve better quality at higher ISOs.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Ray

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 10365
DR, DxO, DSLR, MFDB, CMOS, CCD
« Reply #95 on: July 13, 2010, 05:51:43 am »

Quote from: John R Smith
Back then (in the '70s and '80s) I was shooting both 35mm and MF. But it always seemed to me that the MF transparencies had more exposure latitude (and hence DR) than the 35mm. The MF slides certainly seemed less fussy about metering and my percentage of keepers was higher. However, they couldn't have had more DR, in reality, because it was exactly the same emulsion in both cameras (Ektachrome). The picture editors back then always insisted on MF though.

I don't believe this is the case, John. The larger format will usually tend to deliver higher dynamic range especially when the emulsion or pixel quality is the same. When the emulsion or sensor design is not the same, as in the examples of the D3X and P65+, one having a CMOS sensor and the other a CCD, then anomalies can occur. The additional area of the P65 sensor should allow for a greater DR than the smaller sensor in the D3X, but this is apparently not the case because of differences in technology and design.

The principle here, when comparing identical compositions of course, is that any image detail on the larger format must result from greater illumination (a greater number of photons) than the same detail on the smaller format, whether such detail be in the shadows, mid-tones or highlights. It cannot be otherwise, provided the same exposure is applied in each case. If the same exposure is applied, then each unit of area in both formats, say each square mm of film, must receive the same amount of light to be correctly (or equally) exposed. The greater illumination applied to the same detail (which of course covers a larger area on the larger format emulsion) results in a cleaner and more detailed MF image when both images are compared at the same size on monitor or print.
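Ray's photon-counting argument can be put into rough numbers (a toy sketch; the photon density is an arbitrary illustrative figure and the sensor dimensions are nominal, not measured values):

```python
import math

# Same exposure means the same photon density per unit of sensor area.
photons_per_mm2 = 1.0e6          # illustrative value for a given exposure
area_ff35 = 36.0 * 24.0          # full-frame 35 mm sensor, mm^2
area_p65 = 53.9 * 40.4           # P65+-class MF sensor, mm^2

# With identical framing, the same image detail covers an area
# proportional to the sensor area, so it collects more photons on MF.
n_ff = photons_per_mm2 * area_ff35
n_mf = photons_per_mm2 * area_p65

# Shot noise is sqrt(N), so the shot-noise-limited SNR is N/sqrt(N) = sqrt(N).
snr_gain = math.sqrt(n_mf / n_ff)
print(f"MF collects {n_mf / n_ff:.2f}x the photons for the same detail; "
      f"shot-noise SNR improves by {snr_gain:.2f}x")
```

The roughly 2.5x area advantage thus buys only about a 1.6x improvement in shot-noise SNR, which is why a 4-6 stop DR gap cannot be explained by sensor size alone.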
Logged

Ray

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 10365
DR, DxO, DSLR, MFDB, CMOS, CCD
« Reply #96 on: July 13, 2010, 06:24:56 am »

Quote from: ErikKaffehr
Ray,

Thanks for the good comments. I almost expected you to point out the viewing distance issue. I would add that it works that way because we don't increase viewing distance in proportion to image size. Think of an IMAX or Omnimax theatre. One of the reasons to print big is that you can see the print at a reasonable distance and still feel immersed in the picture.

Hi Erik,
I would put it another way. I'd claim that, more frequently, we do not reduce viewing distance in proportion to the smaller image size of an A4 or A3 or A2 print that may hang on our wall. It's why I bought the 24" wide Epson 7600 a few years ago. It's my experience that, in the average house and sitting room, where there may hang a number of A4 or A3 photos, one does not appreciate the fine detail contained in such prints on most occasions when one happens to glance in the direction of one of those photos. One generally has to make a deliberate effort to get off one's chair and walk right up to the print to view it from a distance of about 1.5x its diagonal.

This is why I prefer to hang small prints in a hallway. I can view them from an appropriately close distance every time I pass them, and on occasions I may linger a few seconds to admire the texture of a rock or tree trunk in the foreground. I would never attempt to place a large panorama in a hallway unless it could withstand close scrutiny without appearing fuzzy.

I remember well the early days of the introduction of the HDTV standard. There were different groups lobbying for different resolution standards. In those early days, large HD screens were horrendously expensive and the group lobbying for the 720p standard had a good point. They would demonstrate that in order to see the difference between 720p and 1080p on a 'then' affordable but still expensive big-screen TV of 32" or so, one would have to sit no further than a metre away from the screen, which is far closer than most people would want to sit.

However, that point of view of the 720p lobby has proved to be rather short-sighted. Big HD screens are now affordable, which is why I recently bought a 12th generation 65" Panasonic plasma TV. It's better to be stuck on a 1080p standard for a few decades than a 720p standard.

Incidentally, when I sit about 2.5 metres from my 65" Panasonic I get a noticeably 'immersive' experience from a good quality 2mp image. If the image were significantly higher resolution than 2mp, then in order to appreciate that higher resolution, I would have to sit closer than 2.5 metres and would then have to turn my head from left to right to see the entire image clearly. I think the same principle applies in the cinema. Despite the screen being relatively huge, you still wouldn't want to sit closer than 1.5x the screen diagonal for an immersive experience.

Now to get back to the DR issue. Having more pixels on the same sensor does not help DR capability in any way, as I understand it. In fact, it probably hinders it slightly because the total read noise can be greater, unless there is some compensating technological development in other areas, and there usually is, of course. I imagine that if Phase One had produced a full-frame 24mp MF back instead of the 60mp P65+ FF DB, the D3X might not have had a DR advantage.
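As a toy illustration of the pixel-count point (all electron counts below are invented for illustration, not measurements of any real camera):

```python
import math

# Hypothetical pixels sharing the same sensor area: 2.5x more pixels
# means each has roughly 1/2.5 the well capacity; per-pixel read
# noise is assumed equal for both designs.
full_well_big, full_well_small, read_noise = 60000, 24000, 10

dr_big = math.log2(full_well_big / read_noise)      # fewer, larger pixels
dr_small = math.log2(full_well_small / read_noise)  # more, smaller pixels
print(f"per-pixel DR: {dr_big:.1f} vs {dr_small:.1f} stops")

# Normalised to the same output size: averaging n small pixels into one
# multiplies the signal by n, but the read noise (adding in quadrature)
# only grows by sqrt(n), so part of the per-pixel deficit is recovered.
n = 2.5
dr_small_norm = math.log2(full_well_small * n / (read_noise * math.sqrt(n)))
print(f"small pixels normalised to the larger-pixel count: {dr_small_norm:.1f} stops")
```

Even after normalising, the many-small-pixels design ends up slightly behind in this sketch, consistent with the "hinders it slightly" intuition, unless the smaller pixels also bring lower read noise.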

I personally don't have any trouble determining which of my cameras has the higher dynamic range, unless the DR of the cameras I'm testing is very similar. If anyone makes a claim that camera 'A' has 4 to 6 stops greater DR than camera 'B', but is unable or unwilling to demonstrate such differences with visual examples, then I think such claims can be taken with a grain of salt.

A 4 to 6 stop difference in DR, or even half of that difference, should be very easy to demonstrate. One might disagree over a 1/4 stop, or a 1/3rd stop or even 1/2 a stop either way, but 2 stops or more?? No way!!

I can appreciate to a certain extent the objection of the busy professional who has done his research into his need for a particular DB, and who may already own lots of fine MF lenses. If DR is not an issue for him and he has lots of good reasons for using MF equipment, why should he take the time to demonstrate clearly and precisely the DR differences between his DB and any 35mm equipment he happens to own? What's in it for him? To do a thorough job requires care and patience. If one doesn't do a thorough and meticulous job, getting the ETTR exactly right with both cameras, the FoV and lens quality favouring neither one camera nor the other, the focussing and DoF the same, and the lighting and the scene exactly the same in both shots, then one's test will be criticised and considered flawed.

On the other hand, exhortations from such professionals directed at people like us, to persuade us to test the DBs for ourselves and get first-hand experience, are also not practical, unless one is fairly sure beforehand that one is in need of that additional performance and resolution of a DB regardless of the DR issue. Assuming there's a store available that hires out the latest DB's and 35mm gear with appropriate lenses, it would be an expensive exercise that could hardly be justified in order to settle just the DR issue.

If a 4 stop increase in DR (over FF 35mm) were sufficient reason to persuade me (and many others) to go the very expensive MF route, I'm sure the MFDB sales reps would be falling over themselves to demonstrate this DR difference. I wonder why they are not? My guess is because they can't.

Making claims of a 4-6 stop DR advantage then becomes merely a sales ploy to encourage people to visit their nearest MF dealer to check out these extraordinary claims for themselves. Having taken the trouble to do this, and after getting the opportunity to handle a Phase MFDB system and perhaps realise it's not as heavy and cumbersome as one imagined, and having seen close-up examples in the showroom of the marvelous detail that a larger format and high pixel-count camera is capable of, sheer material greed may take over, (I want it. Bugger the DR) and then there's the possibility of a sale. Oops! Have I given the game away?  

Wouldn't I look foolish if I were to spend the price of a new Canon 5D2 body in order to hire a P65+ back, MF camera body with lens, plus a D3X and lens in order to confirm that a bunch of guys with PhDs at DXO Labs were actually right. I'm not that silly, ya know!  
Logged

ErikKaffehr

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 11311
    • Echophoto
DR, DxO, DSLR, MFDB, CMOS, CCD
« Reply #97 on: July 13, 2010, 07:32:30 am »

Bart,

The image I posted actually has detail in the foreground but not on the tree trunks, so I'm pretty satisfied with it. I tried to use HDR but had problems with double contours, colour fringing and tone mapping in general. The only advantage I found with HDR was less noise in the foreground.

Best regards
Erik


Quote from: BartvanderWolf
Or HDR ...  

Cheers,
Bart
Logged
Erik Kaffehr
 

bjanes

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 3387
DR, DxO, DSLR, MFDB, CMOS, CCD
« Reply #98 on: July 13, 2010, 09:23:33 am »

Quote from: ErikKaffehr
Hi,

I don't get your point.

Let's assume that we have a uniformly illuminated surface that is imaged on, say, 100x100 pixels, that is, 10,000 pixels. These 10,000 pixels are independent in exposure. So we have 10,000 exposures with an average value of 20,000 electrons and an assumed standard deviation of 141. So the signal is 20,000, the noise 141, and the SNR = 141. Let's reduce the exposure to 200: now we have noise of about 14 from photon statistics, but we also need to take readout noise into account.

If we assume a readout noise of 10 electrons and add the noise sources in quadrature, we get a noise level of about 17 electrons, so the SNR would be 200/17, that is, about 12.

Simplistic?

Erik,

A bit simplistic, but it does get the point across. Joofa wants to work with single pixels in a "real" scene. To get the information in a single flat field of 100x100 pixels, he would need 10,000 exposures and would wear out his shutter in the process. Even then, the validity of the data would be questionable, since the camera would likely have moved a bit on the tripod, and his single pixel would then represent another area in the image.

In practice, one would take duplicate exposures of a flat field and use a program such as ImagesPlus or Iris to split out the red, green, and blue channels for separate analysis, working with raw data. Each channel has a different DR and other characteristics. The green channel is most often chosen for analysis, since the eye is more sensitive to green and the Bayer array has two green pixels for each red and blue pixel.

One would then crop to a representative area near the center, say 200 x 200 pixels, and subtract one image from the other after adding an offset to prevent negative numbers. This removes the effect of nonuniform illumination and the variations in sensitivity among the individual pixels. As Emil has pointed out in his excellent treatise on noise, PRNU (pixel response non-uniformity) is a major contributor to noise, especially at higher exposures.

One would then divide the standard deviation of the subtracted channels by the square root of two, since the measured SD represents the noise of two exposures. Neglecting read noise, which is minimal at high exposure values, this gives the shot noise. To obtain the noise at various exposures, one takes multiple pairs of data while varying the shutter speed.

For Canon cameras, which do not clip read noise, RN can be determined by taking a very short exposure with the lens cap on. Nikons clip read noise, and more complicated methods are needed. As you mentioned, noise adds in quadrature.

See Roger Clark and Peter Facey for details.
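The paired-flat-field procedure described above can be sketched in a few lines. This is a simulation, not real raw data: the mean signal, read noise and PRNU figures are invented, and a real analysis would load actual raw channel crops instead of generating them.

```python
import numpy as np

rng = np.random.default_rng(42)
SHAPE = (200, 200)  # representative crop near the center

# Per-pixel sensitivity variation (PRNU) is fixed from frame to frame,
# which is exactly why subtracting paired flats removes it.
PRNU = 0.01
pixel_gain = 1.0 + PRNU * rng.normal(size=SHAPE)

def simulate_flat(mean_e, read_noise_e):
    """One flat-field frame, in electrons: shot noise + PRNU + read noise."""
    shot = rng.poisson(mean_e, SHAPE).astype(float)
    return shot * pixel_gain + rng.normal(0.0, read_noise_e, SHAPE)

def temporal_noise(frame_a, frame_b):
    """Subtract paired flats and divide the SD by sqrt(2): the fixed
    pattern (PRNU, nonuniform illumination) cancels, leaving the
    per-frame temporal noise (shot + read)."""
    return np.std(frame_a - frame_b) / np.sqrt(2.0)

mean_e, read_e = 20000.0, 10.0
a = simulate_flat(mean_e, read_e)
b = simulate_flat(mean_e, read_e)

measured = temporal_noise(a, b)
expected = np.sqrt(mean_e + read_e ** 2)  # quadrature sum; shot noise dominates
print(f"measured {measured:.1f} e-, expected {expected:.1f} e-")
```

Repeating this over a range of shutter speeds gives the noise-versus-signal curve from which DR is read off.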

This analysis is rather cumbersome, but can be carried out by a serious photographer, and I have performed such an analysis for my Nikon D3 after a few missteps. It is an interesting exercise, but the analysis has already been done by the professionals at DxO:

[attachment=23126:Phase1_65_.gif]

Regards,

Bill
« Last Edit: July 13, 2010, 09:26:32 am by bjanes »
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8914
DR, DxO, DSLR, MFDB, CMOS, CCD
« Reply #99 on: July 13, 2010, 10:30:05 am »

Quote from: bjanes
This analysis is rather cumbersome, but can be carried out by a serious photographer and I have performed such an analysis for my Nikon D3 after a few missteps. It is an interesting exercise, but the analysis has already been done by professionals at DXO:

Hi Bill,

I agree, it is tedious, but one does learn a thing or two about one's specific camera body, and about rigid testing conditions and procedures. I did a similar analysis for my Canon 1Ds3 when there was no reliable info available. My conclusion was an engineering DR of 11.3 stops; DxO recorded 11.22 on their body. The results are in close enough agreement for me to trust the DxO data.
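For reference, the "engineering DR" figure is simply full-well capacity over read noise, expressed in stops (the electron counts in this example are invented for illustration, not the 1Ds3's measured values):

```python
import math

def engineering_dr_stops(full_well_e, read_noise_e):
    # Engineering DR: maximum signal over the SNR=1 noise floor, in stops (EV).
    return math.log2(full_well_e / read_noise_e)

# Illustrative electron counts only -- not measured 1Ds3 data.
print(f"{engineering_dr_stops(52000, 20.6):.2f} stops")
```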

Cheers,
Bart
« Last Edit: July 13, 2010, 10:33:48 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==
Pages: 1 ... 3 4 [5] 6 7   Go Up