Luminous Landscape Forum

Site & Board Matters => About This Site => Topic started by: dreed on January 28, 2011, 08:44:42 pm

Title: DxO Sensor Mark
Post by: dreed on January 28, 2011, 08:44:42 pm
I find it quite interesting that there are various successor models (GH2, 30D, etc.) that have arrived with a sensor that is inferior to their predecessor's. Looking at the data, it's tempting to delay purchasing a new camera (in the pursuit of better-quality photographs) until the relevant data shows up on sites such as DxO's, to ensure that image quality doesn't drop.
Title: Re: DxO Sensor Mark
Post by: ErikKaffehr on January 29, 2011, 02:35:18 am
Hi,

There has been some discussion about GH1 vs. GH2 and the DxO data. According to Michael Reichmann the GH2 has better image quality than the GH1, or at least no worse.

DxO says that it would take about five DxO-mark points to tell image quality apart. The GH2 has 50% more pixels, which may be a benefit if you print large or have very fine detail.

I'd suggest that the measurements are quite accurate, but they may not agree with photographers' perception of real-world image quality.

Check this link:
http://www.dxomark.com/index.php/en/Camera-Sensor/Compare-sensors/(appareil1)/371|0/(appareil2)/640|0/(appareil3)/677|0/(onglet)/0/(brand)/Sony/(brand2)/Leica/(brand3)/Panasonic

I took this example because Michael Reichmann owns all three cameras. Michael ranks the Leica before the Sony. My guess is that Leica's lenses help a lot. They essentially pump more fine edge contrast (MTF) through the sensor. The higher MTF (sometimes called microcontrast) would enhance shadow detail.

Finally, the DxO data is taken before raw processing; a slight increase in denoising may hide most of the differences between the GH1 and GH2.

I'd suggest that the DxO-mark data is very interesting but it may not always match real world photographic experience.

Best regards
Erik


Title: Re: DxO Sensor Mark
Post by: wolfnowl on January 29, 2011, 04:36:39 pm
Peter:

A small error you may wish to correct: "This is partly because Canon's two full-frame models (5D Mark II and 1Ds Mark IV) are currently 2 and 3 years old."

And I didn't know the Mark IV had been officially announced!   ;D

Mike.

P.S. Fascinating article, BTW.
Title: Re: DxO Sensor Mark
Post by: dreed on January 29, 2011, 04:50:09 pm
Erik,

I think you've missed the point of what I was trying to say, which is that despite what camera manufacturers might hope, a newer camera sensor does not always appear to be better. Moreover, if it falls within the range where DxO can measure it as inferior without it being perceived as inferior, then the layman is unlikely to call it out as such. The question this raises is what sort of internal testing camera manufacturers do when evaluating the performance of a new sensor, and what their goals are in bringing new cameras to market. If it were just a problem for the Canon xxD line, I'd perhaps think less of it, but maybe it's not just a Canon problem...

I'm curious how much the brain plays into the subjective analysis of whether or not something is better or worse. If it is not possible to perceive any difference in sensor ability (due to the scores being too close), are we likely to regard the newer one as better for psychological reasons? (It is *newer*, we've just bought it, therefore it must be at least as good, if not better...)
Title: Re: DxO Sensor Mark
Post by: bradleygibson on January 29, 2011, 05:07:29 pm
My guess is that Leica's lenses help a lot. They essentially pump more fine edge contrast (MTF) through the sensor. The higher MTF (sometimes called microcontrast) would enhance shadow detail.

My understanding is that DxOMark Sensor score data is obtained without the use of lenses in front of the sensors...  (This ensures the measurements are of sensor performance, not sensor+optics performance.)  They now have DxOMark scores for sensor + lens combinations as well.
Title: Re: DxO Sensor Mark
Post by: ErikKaffehr on January 29, 2011, 05:26:26 pm
Hi,

That is a different issue. Let's assume that a subject signal varies between 0 and 50 photons and that the noise is 5 photons (an SNR of 10). A mediocre lens would perhaps transfer 10% contrast at the Nyquist limit, so the signal would vary between 0 and 5 photons (10% of 50). Because the noise is as large as the signal, we would see essentially no detail. If instead the lens transfers 40% contrast, the signal would vary between 0 and 20 photons and the detail would be clearly visible.

On the other hand, 40% at Nyquist is very high and would lead to fake detail.

The reasoning here is a bit oversimplified, but a lens with good MTF would yield better detail in the shadows even on a sensor that is relatively noisy.

The net result is that a sharp lens on a mediocre sensor may outperform a mediocre lens on a sensor with very little noise.
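To put rough numbers on this reasoning, here is a minimal sketch in Python (the photon counts and the noise level are the illustrative figures from above, not measured data):

```python
# Minimal numeric sketch of the argument above: fine detail survives when the
# lens's MTF keeps the modulated signal above the sensor's noise floor.

def detail_snr(signal_range_photons, lens_mtf, noise_photons):
    """SNR of fine detail after the lens has attenuated its contrast."""
    detail_amplitude = signal_range_photons * lens_mtf   # e.g. 50 photons * 0.10 = 5
    return detail_amplitude / noise_photons

noise = 5.0  # assumed noise floor in photons (SNR 10 on a 0-50 photon signal)
for mtf in (0.10, 0.40):
    snr = detail_snr(50.0, mtf, noise)
    verdict = "detail visible" if snr > 1.0 else "detail buried in noise"
    print(f"MTF {mtf:.0%}: detail SNR = {snr:.1f} -> {verdict}")
```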

Best regards
Erik


Title: Re: DxO Sensor Mark
Post by: deejjjaaaa on January 29, 2011, 10:37:41 pm
My understanding is that DxOMark Sensor score data is obtained without the use of lenses in front of the sensors...

they actually use some form of optics - just look at the pictures published by DxoMark itself for example

(http://www.dxomark.com/var/ezwebin_site/storage/images/insights/half-cooked-raw/7815-5-eng-US/Half-cooked-RAW.jpg)

(http://www.dxomark.com/itext/tech_testing_protocols/image002.jpg)



now do you think that there is no lens mounted and somehow the camera is tested w/ an open sensor?

now they might be using some custom-made fixture w/ a registration distance allowing the same lens to be used to test all cameras

just read the article = http://www.dxomark.com/index.php/en/Learn-more/DxOMark-database/DxOMark-testing-protocols/Testing-lab = it is full of photos showing the actual process, and you can see cameras being tested w/ some lens mounted... just as they can't test sensors without them actually being in the cameras, with firmware producing the raw files, they can't test 'em without some optics, based on the kind of targets we can see they are actually using... the only way to really test w/o optics is to illuminate the whole sensor (making sure that reflections from anything in the mirror box are eliminated or greatly reduced) with either reflected or emitted light w/ some specific parameters, one at a time, and repeat that through the set of light parameters - but this is clearly not the case based on the photos we can see
Title: Re: DxO Sensor Mark
Post by: deejjjaaaa on January 29, 2011, 10:44:50 pm
The GH2 has 50% more pixels, which may be a benefit if you print large or have very fine detail.

where did you get 50% more pixels? 12 MP (4000 x 3000) vs 16 MP (4608 x 3456) in 4:3 mode is not 50%
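For what it's worth, a quick check of those numbers (assuming the 4:3-mode dimensions quoted above):

```python
# Pixel-count ratio for the 4:3 dimensions mentioned above (illustrative check).
gh1 = 4000 * 3000   # ~12.0 MP
gh2 = 4608 * 3456   # ~15.9 MP
print(f"GH2/GH1 pixel ratio: {gh2 / gh1:.2f}")   # ~1.33, i.e. about 33% more, not 50%
```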
Title: Re: DxO Sensor Mark
Post by: bradleygibson on January 29, 2011, 11:47:50 pm
they actually use some form of optics - just look at the pictures published by DxoMark itself for example

Clearly, my understanding was wrong.

now they might be using some custom-made fixture w/ a registration distance allowing the same lens to be used to test all cameras

A possibility, but as you point out, that doesn't seem to be the case for the Sensor tests that they show the setup for...

Still, when I read (emphasis mine):

"All sensor scores reflect only the RAW sensor performance of a camera body... DxOMark does not address such other important criteria as image signal processing, mechanical robustness, ease of use, flexibility, optics quality, value for money, etc."

from their site (http://www.dxomark.com/index.php/en/Learn-more/DxOMark-scores/Sensor-scores), I am at a loss as to how to reconcile their "not addressing optics quality" with the fact that they appear to be using optics in their sensor tests.

Anyone with some insight into this?

Thank you,
Title: Re: DxO Sensor Mark
Post by: ErikKaffehr on January 30, 2011, 01:24:43 am
Hi,

It's very simple. DxO-mark (sensor) is about sensor noise and color characteristics. So all they do is project different color or grey patches onto the sensor and measure the response. It is probably practical to use a lens or some kind of optical device to project the patch onto the sensor.
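As a rough illustration of what "measure the response" could amount to in practice, here is a minimal sketch (the patch brightness, the noise model, and the SNR-from-patch-statistics approach are assumptions for illustration, not DxO's published procedure):

```python
import numpy as np

# Simulate a raw capture of one uniform grey patch: Poisson photon noise
# plus Gaussian read noise (all numbers purely illustrative).
rng = np.random.default_rng(0)
mean_signal_e, read_noise_e = 2000.0, 6.0   # electrons
patch = rng.poisson(mean_signal_e, size=(200, 200)) + rng.normal(0.0, read_noise_e, (200, 200))

snr = patch.mean() / patch.std()
print(f"patch SNR ~ {snr:.1f} ({20 * np.log10(snr):.1f} dB)")
# Repeating this over patches of different brightness traces out the SNR curve
# from which noise and dynamic-range figures can be derived.
```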

The lens testing is entirely a different matter. A special test target is photographed and analyzed by software to determine resolution, MTF, vignetting, distortion and lateral chromatic aberration.

Best regards
Erik


Title: Re: DxO Sensor Mark
Post by: bradleygibson on January 30, 2011, 02:34:21 am
Yes, of course the lens + sensor scores require optics, but the OP mentioned DxO Sensor Mark measurements, which I was under the impression were performed without optics.

The link you provided seems to show sensor measurements - for example, dynamic range and SNR calculations - being performed with optics (which apparently contradicts the quote from DxO I posted above).
Title: Re: DxO Sensor Mark
Post by: Peter van den Hamer on January 30, 2011, 03:55:46 am
A small error you may wish to correct: "This is partly because Canon's two full-frame models (5D Mark II and 1Ds Mark IV) are currently 2 and 3 years old." [..]

Mike.

P.S. Fascinating article, BTW.
Oops / thanks / fixed / thanks  :)
Title: Re: DxO Sensor Mark
Post by: Peter van den Hamer on January 30, 2011, 04:40:11 am
Erik,

[..] a newer camera sensor does not always appear to be better. Moreover, if it falls within the range where DxO can measure it as inferior without it being perceived as inferior, then the layman is unlikely to call it out as such. The question this raises is what sort of internal testing camera manufacturers do when evaluating the performance of a new sensor, and what their goals are in bringing new cameras to market. If it were just a problem for the Canon xxD line, I'd perhaps think less of it, but maybe it's not just a Canon problem... [..]

You ask about models that are (presumably accidentally) a step back compared to their predecessor. You could widen this to "models that are not state-of-the-art in terms of noise" (thus including the competition: recent models with similar sensor sizes). As far as I can tell, things are getting quite professional in this area.
I suspect the industry is getting more mature about this kind of benchmarking than, say, in the Canon 30D days (5 years ago): back then a good price for a solid camera with competitive resolution was important. Nowadays attention has partly shifted from MPixels to image quality. I believe that independent, repeatable, and more or less scientific benchmarks can accelerate this trend: they shift the focus of manufacturers and users a bit by giving a clear yardstick.

But the Achilles heel of "hard" benchmark numbers is always that they don't cover everything you may care about. You still need at least 3 yardsticks - just for image quality alone: noise data (e.g. DxOMark Sensor), resolution (sensor MPixels), and the impact of lens quality (the DxOMark Score - terrible name - or DPReview/Photozone.de/LuLa/etc, or your own quick-and-dirty tests). Peter
Title: Re: DxO Sensor Mark
Post by: Peter van den Hamer on January 30, 2011, 05:29:12 am
Yes, of course the lens + sensor scores require optics, but the OP mentioned DxO Sensor Mark measurements, which I was under the impression were performed without optics.

My understanding is also (like ErikKaffehr and deejjjaaaa) that sensor noise measurements are done by imaging back-lit neutral density filters using lenses (DxOMark-testing-protocols (http://www.dxomark.com/index.php/en/Learn-more/DxOMark-database/DxOMark-testing-protocols)). All you want to do is illuminate a large part of the sensor with the same intensity. That won't work by pointing a bare camera body at a wall: you will get more vignetting via the mount and mirror box than you would with a lens. So you either put a lens on the camera or make your own collimating optics.

But DxOMark Sensor requires a different set of tests than those used for Sensor-and-Lens ("DxOMark Score") testing; all of them, however, require optics. The lens sharpness has negligible impact: you could arguably defocus the lens slightly and get the same results as long as you stay clear of the edges of the test patches. A pinhole would also (kind of) work ;-)

Engineers and scientists will point out that numerous questions remain about test details for any precision measurement: e.g. light source homogeneity, light source stability, finite test patch size, vignetting, dust on the source, dust on the optics. I can assure you (I worked for years in labs) that precision measurements are a major headache. Some of these issues are nowadays covered by international standards for which the experts jointly develop measurement protocols. DxOMark is active in some of these committees (source: LinkedIn and private communications). And DxO says that outside engineers regularly get to see the setup and discuss the procedures used. This is normal in engineering: if you challenge my measurement results, I either need to exhaustively document the measurement details so you can review them, and/or you send in experts to see if they can find a flaw in the measurements. You can bet that a major manufacturer will contact DxOMark whenever their products get lower scores than hoped for.

[fixed typo on 11-2-11, a historic day for other reasons]
Title: Re: DxO Sensor Mark
Post by: qwz on January 30, 2011, 08:02:28 am
DxO's noise and dynamic range may be fairly correct (but I think the approach of Alex Tutubalin - the developer of LibRaw, for example - is more accurate, because it analyzes these aspects on a per-channel basis and takes into account even per-column black-level subtraction, which differs between sensors and is usually not published by vendors)

According to this
http://www.dxomark.com/index.php/en/Learn-more/DxOMark-database/DxOMark-testing-protocols/Color-sensitivity
DxO color sensitivity is suitable ONLY for measuring pre-raw (inside the sensor and imaging processor) color noise reduction algorithms, and it says nothing about the real world as opposed to a Gretag target.

Consequently, digital backs or even some DSLRs with CCD sensors (or some CMOS like the Sony 900/850) get a really bad rating that is not useful for evaluating their image quality ('cause low-light photojournalist work is not what digital backs are designed for).




Title: Re: DxO Sensor Mark
Post by: barryfitzgerald on January 30, 2011, 12:23:00 pm
I'm sure tech heads love all the numbers, but the reality is that many of us care not about what a test says but about what happens in field use. And looking at some of their results, I have to say their DR numbers simply do not make any sense, nor do they reflect real-world use or results.

This is the problem when you go on numbers alone. DxO is a bit of fun, no doubt of some use to some people, but it's completely inadequate as a tool to influence a buying choice. It's a bit like the computer benchmarking sites: some very nice detailed tests, OK as a rough guide, but do you really care if you squeeze an extra few seconds out of the Photoshop benchmark? Probably not. I'd suggest using a camera... and looking at its real output before crunching numbers.
Title: Re: DxO Sensor Mark
Post by: Peter van den Hamer on January 30, 2011, 03:28:34 pm
DxO color sensitivity is suitable ONLY for measuring pre-raw (inside the sensor and imaging processor) color noise reduction algorithms, and it says nothing about the real world as opposed to a Gretag target.

Consequently, digital backs or even some DSLRs with CCD sensors (or some CMOS like the Sony 900/850) get a really bad rating that is not useful for evaluating their image quality ('cause low-light photojournalist work is not what digital backs are designed for).

I don't quite get your point. My article doesn't provide a reason why medium-format cameras (or digital backs) don't score too well. I don't have that answer, so I am certainly interested in understanding this.

Let's break down the discussion into elements to see what precisely you don't agree with:
I am not sure about the last bullet point (not sure DxO does this, not sure this is what they "should" do).

Peter
Title: Re: DxO Sensor Mark
Post by: Peter van den Hamer on January 30, 2011, 04:00:13 pm
looking at some of their results, I have to say their DR numbers simply do not make any sense, nor do they reflect real-world use or results.
[..] I'd suggest using a camera... and looking at its real output before crunching numbers.

I believe DxOMark uses a definition of Dynamic Range that results in higher absolute numbers than most other definitions (e.g. clarkvision.com). Their definition is summarized in the article.

Comparing dynamic range data that has been measured in different ways is indeed not very helpful. So I would just look at relative rankings (per data source) and approximate DR differences (per data source). Comparing DxOMark DR differences to DR differences measured/estimated elsewhere might work reasonably well.
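For readers who want a concrete handle on why absolute DR figures differ between definitions, here is a minimal sketch (the two criteria and the sensor numbers below are illustrative assumptions, not DxO's or clarkvision's exact procedures):

```python
import math

full_well_e = 60000.0   # saturation capacity in electrons (illustrative)
read_noise_e = 5.0      # noise floor in electrons (illustrative)

# "Engineering" DR: saturation over the noise floor, i.e. the range down to SNR = 1.
dr_snr1 = math.log2(full_well_e / read_noise_e)

# A stricter, more "photographic" criterion: demand SNR >= 10 at the low end instead.
dr_snr10 = math.log2(full_well_e / (10.0 * read_noise_e))

print(f"DR down to SNR=1 : {dr_snr1:.1f} stops")   # ~13.6 stops
print(f"DR down to SNR=10: {dr_snr10:.1f} stops")  # ~10.2 stops for the very same sensor
```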

Hope this helps, Peter
Title: Re: DxO Sensor Mark
Post by: ErikKaffehr on January 30, 2011, 06:20:23 pm
Hi,

Regarding DxO I like the measurements but dislike the score. That applies to both DxO-mark sensor and DxO-mark lens.

Best regards
Erik

Title: Underlying 'properties'
Post by: Sekoya on January 30, 2011, 06:23:36 pm
Peter,
the DxO results have widely been dissected to extract three basic underlying sensor properties: quantum efficiency, full well capacity, and read noise. I can understand that you might have wanted to limit the scope of your article and not discuss them there. But maybe you comment here whether you agree with the common approach taken to extract these values.
I find it particularly interesting to see progress over time, sensor size, and sensor designer for these three properties. As I understand them, quantum efficiency (which somehow includes the fill factor) helps with low-light noise but does not really affect dynamic range; full well capacity (naturally scaled for sensel size) affects dynamic range but not low-light noise; and read noise affects both dynamic range and low-light performance.
What I am struggling a bit with is the relationship between read noise and noise in the amplification and the A/D converter. It is said that for cameras which do not have decreasing DR when going from minimum to moderate ISO, the DR is limited by the noise in the A/D converter and not the read noise at the sensel. It is also said that the latest Sony sensors have such a great DR performance (and that somewhat includes the D3x sensor) because of the column ADC which have very low noise levels.
If you have any insight on these issues, I would be very glad to hear it.

Sekoya
Title: Re: DxO Sensor Mark
Post by: douglasf13 on January 30, 2011, 06:46:15 pm
Hi, Peter. Thanks for the article.

I wanted to mention that the Canon S90, which is mentioned in footnote [27], actually uses a Sony Exmor R sensor, so Sony incidentally has the best sensor score in compacts as well.
Title: Re: DxO Sensor Mark
Post by: deejjjaaaa on January 30, 2011, 07:36:00 pm
but DxOMark is benchmarking sensors
they are benchmarking raw files (post-firmware, not pre-firmware), not sensors exactly...
Title: Re: DxO Sensor Mark
Post by: charleski on January 30, 2011, 07:43:10 pm
There has been some discussion about GH1 vs. GH2 and the DxO data. According to Michael Reichmann the GH2 has better image quality than the GH1, or at least no worse.

DxO says that it would take about five DxO-mark points to tell image quality apart.

I've learnt to take the DxOMark data with a hefty pinch of salt. There are some review sites (eg DPReview) that provide RAW files taken under reasonably well-controlled conditions from the cameras they cover. Download some of them and see if they reflect the differences in noise indicated by the score.

You'd think that a score difference of almost 3 times the threshold they give would be fairly obvious. You'd expect that a camera that scored higher would not appear to have greater noise. Unfortunately, neither has been my experience. DxO's measurements certainly do seem impressive, but the results fail to correlate with what I see, and sometimes the failure is dramatic.

It's clearly a worthy endeavour on their part, but somewhere along the line something has gone wrong. I suspect that the problem really lies in the analysis and that the current models used for this aren't really up to the job. The DxOMark score should certainly not be used to assess a possible purchase (even on just the basis of sensor quality) without substantial cross-referencing with other comparative reviews.
Title: Re: DxO Sensor Mark
Post by: deejjjaaaa on January 30, 2011, 07:49:33 pm
Unfortunately, neither has been my experience.

so post a couple of raw files and describe your procedure to illustrate... huh?
Title: Re: Underlying 'properties'
Post by: ErikKaffehr on January 31, 2011, 12:53:28 am
Hi,

The new Sony sensors have AD converters on chip, and it is my understanding that they have an AD converter on each column - something like 6000+ converters on a full-frame sensor (D3X, Alpha 900, 850).

My guess is that the converters are ramp type. Having a lot of parallel converters gives much longer measurement time for each individual converter.

That may also explain why the D3X is so much better than the Alphas. The D3X has a 14-bit pipeline while the Sonys have only 12 bits. Nikon may use a longer integration time and achieve 14 bits with the same technology that Sony uses. The Nikon D3X has a 14-bit mode which is much slower than its 12-bit mode. That may be consistent with this theory.

Nikon D3 had external AD converters.

Best regards
Erik


Title: Re: Underlying 'properties'
Post by: Peter van den Hamer on January 31, 2011, 03:55:56 am
the DxO results have widely been dissected to extract three basic underlying sensor properties: quantum efficiency, full well capacity, and read noise. I can understand that you might have wanted to limit the scope of your article and not discuss them there. But maybe you comment here whether you agree with the common approach taken to extract these values.

I am fine with this. I have some material on my website that looks at sensor noise bottom-up rather than top-down. I used the bottom-up info (from HarvestImaging.com aka Prof. Albert Theuwissen) as a check for the top-down data (DxOMark Sensor). See http://peter.vdhamer.com/2010/12/25/havest-imaging-ptc-serie/ (http://peter.vdhamer.com/2010/12/25/havest-imaging-ptc-serie/). The next major refinement to 'your' 3-param model might be temporal noise versus fixed-pattern noise. The PTC model as described by HavestImaging has about 10 parameters (depending on how you count, see my tables) and has example values for all elements. The reason they have more params is that they try to distinguish between row-noise, column-noise and pixel-noise (especially for fixed-pattern noise).

What I am struggling a bit with is the relationship between read noise and noise in the amplification and the A/D converter. It is said that for cameras which do not have decreasing DR when going from minimum to moderate ISO, the DR is limited by the noise in the A/D converter and not the read noise at the sensel. It is also said that the latest Sony sensors have such a great DR performance (and that somewhat includes the D3x sensor) because of the column ADC which have very low noise levels.

At this level of detail, I would cluster all the noise added to any sensel (regardless of the light level) as what you call read noise or my "noise floor". There is a pipeline of transformations/processing, and from the outside it is hard to distinguish which stage contributes how much (temporal) noise. So I would include amplifier noise ("LNA noise"), quantisation noise, rounding errors, A/D noise, etc. in read noise. But this kind of discussion about definitions and distinguishability requires a model. The Harvest Imaging model might help (I summarize it on my website), although it is a bit more detailed than what you might be looking for.
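A minimal sketch of that "cluster everything into one noise floor" idea (the per-stage values are made-up illustrations; independent noise sources add in quadrature):

```python
import math

# Assumed temporal noise contributions of the readout pipeline, in electrons:
amplifier_noise = 3.0      # "LNA noise"
adc_noise = 2.0            # A/D converter noise
quantization_noise = 1.2   # ~1 LSB / sqrt(12), expressed in electrons

# From the outside you only see their quadrature sum, the effective read noise.
noise_floor = math.sqrt(amplifier_noise**2 + adc_noise**2 + quantization_noise**2)
print(f"effective read noise (noise floor): {noise_floor:.1f} e-")
```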

If you want to comment on this bottom-up stuff, you can use the referenced posting on my site or even (if you get very detailed, this might require a background in Engineering ;-) comment on the original posting series on HarvestImaging.com. Luminous Landscape is more about the end results and how to use them. Half of my DxOMark Sensor "essay" is already pretty far out for Luminous Landscape.
Title: Re: DxO Sensor Mark
Post by: Peter van den Hamer on January 31, 2011, 04:29:15 am
The new Sony sensors have AD converters on chip, and it is my understanding that they have an AD converter on each column - something like 6000+ converters on a full-frame sensor (D3X, Alpha 900, 850).

My guess is that the converters are ramp type. Having a lot of parallel converters gives much longer measurement time for each individual converter.

That may also explain why the D3X is so much better than the Alphas. The D3X has a 14-bit pipeline while the Sonys have only 12 bits. Nikon may use a longer integration time and achieve 14 bits with the same technology that Sony uses. The Nikon D3X has a 14-bit mode which is much slower than its 12-bit mode. That may be consistent with this theory.

It sounds pretty straightforward that at least 14 bits are needed now that the best cameras supposedly reach 14 stops of Dynamic Range. Off-chip ADCs sound flimsy to me, as they mean transporting pretty high-frequency analog signals off-chip: 25 Msamples in 0.1 s translates to a few GHz of analog bandwidth to get the resolution. I don't know whether column-level ADC is really the new magic ingredient. It does give a lot more time for conversion. But on my own Canon 5D2 body, it seems that fixed-pattern noise is the main source of low-light noise. So my suspicion was that sensors were able to handle 14b ADC at 250 Msamples/s (maybe by having 4 or 8 parallel ADCs on-chip and 4 or 8 digital channels) and that recent innovations might be about efficient ways to subtract fixed-pattern row and column noise. But that is just a guess: DxOMark doesn't distinguish between temporal noise and FPN. And I haven't looked up what the state of the art is in high-speed high-res ADCs.

See also http://harvestimaging.com/blog/?p=604&cpage=1#comment-11175 (http://harvestimaging.com/blog/?p=604&cpage=1#comment-11175) and my posting on peter.vdhamer.com about that tutorial by Albert Theuwissen.
Title: Re: DxO Sensor Mark
Post by: qwz on January 31, 2011, 11:42:23 am
Peter van den Hamer
Quote
I don't quite get your point. My article doesn't provide a reason why medium-format cameras (or digital backs) don't score too well.
Yes, but IF a scientific theory has too many exceptions, maybe we have something wrong with the theory AND/OR with the experimental data and the methods we use to collect it.

Nevertheless, I much appreciate your detailed explanation in the article on LuLa and in this forum thread.

Quote
The GretagMacbeth color chart is a workable choice for such a "handful of colors". Agree?
Disagree. I don't clearly understand the usefulness of this "handful of colors", because it is suited only to photographers who shoot a GretagMacbeth target, and because the gamut of all printed colors (even with the best printing technologies we have now) is much, much tighter than the real world. Also, we have flat patches, and a test-chart shot says nothing about subtle chromatic distinctions between different wavelengths of light.

The origin of my skepticism is practical experience - I have seen many files from cameras rated badly by DxO on colour accuracy that have better colors - especially subtle tones of colour - than files from cameras with a better DxO rating in this respect.

And as I see it, for practical purposes DxO's rating means anything only for shooting in a black-and-white photojournalist style.
Or for selling on photo stock sites, where the technicians are mad for silky-smooth textures and pop-poster colors (but you can always make things smooth and poppy - simply by killing grain, detail and subtle tones - and you cannot do the reverse in post-processing).

(it reminds me of the still-debated topics about Fuji vs. Kodak slide film quality - because less grain and better MTF don't automatically make a better image, and higher saturation doesn't always mean better colour tones - the varieties of subtle chroma variation are what give an image volume and authenticity)

I almost agree with your other points.
Title: Re: DxO Sensor Mark
Post by: qwz on January 31, 2011, 11:49:54 am
ErikKaffehr
Sony's chips for the A700 and A900 (the D300(s) and D3x in their Nikon realisation) have the possibility of multi-sampling readout to achieve 14- or even 16-bit output (8 fps for 12-bit, 2.5 fps for 14-bit and about 0.7 fps for 16-bit)
(similar to multisampling in scanners)

Sony doesn't use this but Nikon does - because the D300 and D3x cameras slow down significantly in 14-bit mode.

(I'd prefer it if Sony had this too, but Sony decided not to).
 
Title: Re: DxO Sensor Mark
Post by: Peter van den Hamer on January 31, 2011, 12:38:02 pm
The origin of my skepticism is practical experience - I have seen many files from cameras rated badly by DxO on colour accuracy that have better colors - especially subtle tones of colour - than files from cameras with a better DxO rating in this respect.

Qwz: Can you first clearly explain what you believe DxOMark Sensor is trying to measure with "Color Sensitivity" (or Color Depth or Portrait Score)? Never mind HOW they do it. WHAT are they trying to measure here?

If that is clear, you can meaningfully argue that that is not what you or others want to know. Or you can argue that it is measured in the wrong way. Peter

Title: Re: Underlying 'properties'
Post by: Sekoya on January 31, 2011, 03:44:33 pm
I am fine with this. I have some material on my website that looks at sensor noise bottom-up rather than top-down. I used the bottom-up info (from HarvestImaging.com aka Prof. Albert Theuwissen) as a check for the top-down data (DxOMark Sensor). See http://peter.vdhamer.com/2010/12/25/havest-imaging-ptc-serie/ (http://peter.vdhamer.com/2010/12/25/havest-imaging-ptc-serie/). The next major refinement to 'your' 3-param model might be temporal noise versus fixed-pattern noise. The PTC model as described by HavestImaging has about 10 parameters (depending on how you count, see my tables) and has example values for all elements. The reason they have more params is that they try to distinguish between row-noise, column-noise and pixel-noise (especially for fixed-pattern noise).
Thanks, I'll have a look at it.
Title: Re: DxO Sensor Mark
Post by: Sekoya on January 31, 2011, 03:52:05 pm
The origin of my skepticism is practical experience - I have seen many files from cameras rated badly by DxO on colour accuracy that have better colors - especially subtle tones of colour - than files from cameras with a better DxO rating in this respect.
DxO is not measuring colour accuracy; they are measuring how much colour variation you get on a flat field of exactly the same colour. A sensor (+ the raw conversion) can make a brown out of a red, but as long as uniform red is rendered as a uniform brown the sensor is not adding noise in the form of colour variations.
Colour accuracy can really only be tested by combining the sensor and the raw converter. It is the raw converter which (tries to) ensure(s) that red stays red. And Capture One, the raw converter by and for Phase One, is known to be a rather accurate converter (supposedly they use more colour samples for the calibration than other raw converters).
Title: Re: DxO Sensor Mark
Post by: deejjjaaaa on January 31, 2011, 04:14:07 pm
DxO is not measuring the colour accuracy
DxO also makes a raw converter
Title: Re: DxO Sensor Mark
Post by: PierreVandevenne on January 31, 2011, 05:27:48 pm
Interesting article indeed. But your summary http://peter.vdhamer.com/2010/12/25/havest-imaging-ptc-serie/ (and obviously its source material) is really great! Thanks.

Title: Re: DxO Sensor Mark
Post by: Peter van den Hamer on January 31, 2011, 05:48:44 pm
DxO is not measuring colour accuracy; they are measuring how much colour variation you get on a flat field of exactly the same colour. A sensor (+ the raw conversion) can make a brown out of a red, but as long as uniform red is rendered as a uniform brown the sensor is not adding noise in the form of colour variations.
Colour accuracy can really only be tested by combining the sensor and the raw converter. [..]
DxO also makes a raw converter

True and true. Unfortunately raw converters are nowadays complex pieces of software (e.g. Adobe Camera Raw) integrated into even fancier programs (Lightroom, DxO Optics Pro). It is, in my opinion, best to avoid all that in order to decode the raw file in a well-defined way.
I tried guessing (earlier in this thread) what a bare-bones raw converter would do - especially if it should have minimal impact on noise, sharpness, contrast, and color:
" It would be fair to use a familiar demosaicing algorithm, no additional noise reduction, and the manufacturer's color matrix to convert from the raw color space to a standard (e.g. sRGB) color space. And then to analyze the measured noise values. "

That would just be a linear (3x3 matrix) transformation on the raw file's RGB values to handle the transmission spectra of the color filters. Serious color management software could try a lot harder (say X-Rite's Color Checker Passport, or Lightroom/ACR's built-in default camera profiles) based on measured color response and non-linear corrections. These are similar to what ICC profiles for scanners/monitors/printers do. Hints about the color matrix story can be found in the DxOMark "Insight" article where they compare the color sensitivity of the Nikon D5000 to the Canon EOS 500D.
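As a minimal sketch of what such a linear 3x3 transformation looks like (the matrix and pixel values below are made up for illustration, not any camera's actual calibration data):

```python
import numpy as np

# Hypothetical camera-RGB to linear-sRGB matrix; rows sum to 1 so white stays white.
# Large off-diagonal coefficients are what amplify chroma noise in the output space.
cam_to_srgb = np.array([
    [ 1.70, -0.55, -0.15],
    [-0.20,  1.45, -0.25],
    [ 0.05, -0.60,  1.55],
])

# A few demosaiced, white-balanced camera-RGB pixels (linear values, 0..1).
camera_rgb = np.array([
    [0.40, 0.35, 0.30],
    [0.10, 0.50, 0.20],
])

linear_srgb = (camera_rgb @ cam_to_srgb.T).clip(0.0, 1.0)   # one matrix multiply per pixel
print(linear_srgb)
```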
Title: Re: DxO Sensor Mark
Post by: Peter van den Hamer on January 31, 2011, 06:36:12 pm
FYI: I just updated the diagrams.

The differences are minor.
In the process I messed up Fig 3 (timeline) a bit because my script automatically rescaled the axis due to a new model. Will fix this later.

Peter
Title: Re: DxO Sensor Mark
Post by: Peter van den Hamer on February 01, 2011, 04:27:03 am
Interesting article indeed. But your summary http://peter.vdhamer.com/2010/12/25/havest-imaging-ptc-serie/ (and obviously its source material) is really great! Thanks.

Thanks Pierre. Most of the credit indeed goes to Albert Theuwissen for the source material. I just deserve an aluminum medal for reading through all of it.

I changed the URL to http://peter.vdhamer.com/2010/12/25/harvest-imaging-ptc-series/ (http://peter.vdhamer.com/2010/12/25/harvest-imaging-ptc-series/). But the old URL still works (due to some automagic WordPress redirection). Note that in January 2011 Albert started a follow-up to his series. I don't know yet what I will do with that: extend the original posting, start a new posting, or just leave things as they are. Will ask what Albert prefers - my posting does get some traffic (despite the fact that my own DxOMark posting doesn't link to it yet; one was written before the other).
Title: Re: DxO Sensor Mark
Post by: ejmartin on February 01, 2011, 05:07:02 pm
Can you first clearly explain what you believe DxOMark Sensor is trying to measure with "Color Sensitivity" (or Color Depth or Portrait Score)? Never mind HOW they do it. WHAT are they trying to measure here?


It is the number of distinguishable colors in the raw data output:

http://dxomark.com/index.php/en/Learn-more/DxOMark-database/Measurements/Color-sensitivity

basically the number of ellipsoids the size of a noise std dev deltaR*deltaG*deltaB that fit within the 'gamut' of the camera (the overall range of R,G,B it can record).

So it is a measure of color richness.   As a practical matter, it should be combined with information from the metamerism index (the degree to which the subspace spanned by the CFA spectral responses overlaps with that of the CIE standard observer) as well as the information that DxO measures on the map from color primaries of the camera to sRGB primaries.  For instance if there are large coefficients in the latter one will have larger chroma noise when mapped to standard output color spaces; see

http://dxomark.com/index.php/en/Our-publications/DxOMark-Insights/Canon-500D-T1i-vs.-Nikon-D5000/Color-blindness-sensor-quality

Basically large coefficients in the color matrix lead to amplification of chroma noise. 
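A minimal sketch of the counting idea (purely illustrative numbers, and a gross simplification: the real measurement works from measured noise covariances, not a single sphere-like ellipsoid):

```python
import math

# Treat the camera's recordable color volume as a cube of side `signal_range`
# in linear RGB, and the noise ellipsoid as having one-sigma semi-axes per channel.
signal_range = 4000.0                           # usable code values per channel (assumed)
sigma_r, sigma_g, sigma_b = 30.0, 25.0, 35.0    # per-channel noise std devs (assumed)

gamut_volume = signal_range ** 3
ellipsoid_volume = (4.0 / 3.0) * math.pi * sigma_r * sigma_g * sigma_b

distinguishable = gamut_volume / ellipsoid_volume
print(f"~{distinguishable:.3g} distinguishable colors "
      f"= {math.log2(distinguishable):.1f} bits of 'color sensitivity'")
```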
Title: Re: DxO Sensor Mark
Post by: Peter van den Hamer on February 02, 2011, 08:58:37 am
[Color-sensitivity] is basically the number of ellipsoids the size of a noise std dev deltaR*deltaG*deltaB that fit within the 'gamut' of the camera (the overall range of R,G,B it can record).

Thanks. Actually the question was intended to get user <qwz> onto the same page. He was confusing color profile accuracy with DxO's color sensitivity because both happen to involve GretagMacbeth-like color charts.

As you sound more knowledgeable on this than I am, I have a real question (it is mentioned towards the end of the article): Why would DxO measure color sensitivity at low ISO rather than at high ISO? Wouldn't chroma noise normally be imperceptible at low ISO and thus irrelevant? Obviously the results at high ISO will tend to scale with the results at low ISO, but DxO explicitly states that they measure "maximum" color sensitivity.
Title: Re: DxO Sensor Mark
Post by: ejmartin on February 02, 2011, 10:08:30 am
Why would DxO measure color sensitivity at low ISO rather than at high ISO? Wouldn't chroma noise normally be imperceptible at low ISO and thus irrelevant? Obviously the results at high ISO will tend to scale with the results at low ISO, but DxO explicitly states that they measure "maximum" color sensitivity.

I think it is more intended to measure the number of distinguishable colors the camera can record, and so not per se a measure of chroma noise.  More like a chroma dynamic range, if you will, but not a range because it's a volume measure -- how many physically distinguishable bins of color there are in the 3D domain of (R,G,B) values.  So one would want to measure that at the lowest ISO where the number of bins is the largest.

I don't think there is really a useful quantitative measure of chroma noise for raw data.  One doesn't really have chroma data until after demosaic and transform to a color space, and that depends on a number of other factors (demosaic algorithm, transform method and input profile used, for example). For that matter, the noise in the raw data is not really luminance noise either.  It's just noise in the raw data.
Title: Re: DxO Sensor Mark
Post by: Peter van den Hamer on February 02, 2011, 05:39:24 pm
I don't think there is really a useful quantitative measure of chroma noise for raw data.  One doesn't really have chroma data until after demosaic and transform to a color space, and that depends on a number of other factors (demosaic algorithm, transform method and input profile used, for example).

Why do you think that the color sensitivity metric is measured before demosaicing and transform to a standard color space? The article you quoted on color sensitivity and color filter response curves suggested to me that the numbers are computed in a standard color space like sRGB. Why else would an ill-conditioned 3x3 color transformation matrix decrease the color sensitivity score (as in the Canon 500D) if the number of discernable colors is estimated in the color space of the raw file?

Peter
Title: Re: DxO Sensor Mark
Post by: ejmartin on February 02, 2011, 10:24:39 pm
Why do you think that the color sensitivity metric is measured before demosaicing and transform to a standard color space? The article you quoted on color sensitivity and color filter response curves suggested to me that the numbers are computed in a standard color space like sRGB. Why else would an ill-conditioned 3x3 color transformation matrix decrease the color sensitivity score (as in the Canon 500D) if the number of discernable colors is estimated in the color space of the raw file?

Peter

Looking at the DxO documentation, it's not at all clear what they are measuring, but you are right -- the score could only depend on the matrix transform if they were measuring color sensitivity within the sRGB gamut.  However, given the way they do all their other measurements, I suspect the result is calculated from the raw data rather than via demosaic etc.  From the error ellipsoid in the raw data, one can transform it to an error ellipsoid in a standard color space and determine the number of such ellipsoids that fit inside sRGB gamut.
Title: Re: DxOMark Sensor & Color Sensitity
Post by: Peter van den Hamer on February 03, 2011, 05:11:50 am
Looking at the DxO documentation, it's not at all clear what they are measuring, but you are right -- the score could only depend on the matrix transform if they were measuring color sensitivity within the sRGB gamut.  However, given the way they do all their other measurements, I suspect the result is calculated from the raw data rather than via demosaic etc.  From the error ellipsoid in the raw data, one can transform it to an error ellipsoid in a standard color space and determine the number of such ellipsoids that fit inside sRGB gamut.

So we agree on the need to do color space conversion and above all the need to avoid 'etc' (e.g. sharpening, noise reduction and whatever a real world raw converter might do).

Hence my attempt to explain the metric by linking it to the term "chroma noise". "Color sensitivity" or, worse, "portrait use case" don't give much of a hint.

The demosaicing indeed sounds unnecessary because there is no spatial information in the signal. But they need to estimate per-pixel noise levels starting with Bayer matrix data. A quick and dirty approach would be to demosaic. But I guess you are right (given their "science style") that they measure noise in each color plane and then compensate for the differences in resolution of the various color planes. Thanks.
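A minimal sketch of that per-color-plane approach (an RGGB Bayer layout is assumed, and the mosaic here is synthetic rather than a real raw file):

```python
import numpy as np

rng = np.random.default_rng(1)
h, w = 64, 64
mosaic = np.empty((h, w))

# Synthetic raw mosaic of a uniform patch: per-channel means plus noise (RGGB assumed).
mosaic[0::2, 0::2] = rng.normal(1200.0, 30.0, (h // 2, w // 2))   # R sites
mosaic[0::2, 1::2] = rng.normal(1800.0, 35.0, (h // 2, w // 2))   # G1 sites
mosaic[1::2, 0::2] = rng.normal(1800.0, 35.0, (h // 2, w // 2))   # G2 sites
mosaic[1::2, 1::2] = rng.normal(900.0, 25.0, (h // 2, w // 2))    # B sites

# Measure noise in each color plane directly -- no demosaicing involved.
planes = {"R": mosaic[0::2, 0::2],
          "G": np.concatenate([mosaic[0::2, 1::2], mosaic[1::2, 0::2]]),
          "B": mosaic[1::2, 1::2]}
for name, plane in planes.items():
    print(f"{name}: mean {plane.mean():.0f}, noise {plane.std():.1f}")
```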
Title: Re: DxO Sensor Mark
Post by: ejmartin on February 03, 2011, 09:49:48 am
The problem with using demosaic is that it adds a whole host of poorly controlled variables.  Demosaic error will be a substantial contributor to chroma noise, since it is a mis-estimation of missing colors.  Just look at the variety of algorithms that have been coded into dcraw/libraw.

BTW, there is a second possibility for such a measure -- convert to Lab rather than sRGB using a matrix transform.  Then instead of determining the number of distinguishable colors within the sRGB gamut, one could determine the number of colors within the 'camera gamut', i.e. the image of the XYZ parallelepiped bounded by the camera primaries.  The latter would measure the total number of colors the camera can reproduce, rather than just the number within the sRGB gamut.  Typical 'camera gamuts' defined this way are closer to ProPhoto.
Title: Re: DxO Sensor Mark
Post by: Ernst Dinkla on February 04, 2011, 10:42:26 am
Peter,

A nice explication of the DxO numbers and a good defence here in the forum. As far as I can follow the subject :-) You deserve more than an aluminum medal I would say.

On the good color accuracy of Capture One mentioned in the comments: is a sensor without an anti-aliasing screen not the best base to start from, and more likely the cause of its reputation? In that sense the K5 sensor should be quite capable too - a camera I recommended to an artist on a tight budget who wanted to document his paintings, before I discovered your article here. Whether or not the DxO categories (landscape, sports, etc.) have any value, for the gamut volume/color distinction the term "reproduction photography" would have been more adequate than "portrait".


met vriendelijke groeten, Ernst Dinkla

New: Spectral plots of +230 inkjet papers:
http://www.pigment-print.com/spectralplots/spectrumviz_1.htm

Title: Re: DxO Sensor Mark
Post by: ErikKaffehr on February 05, 2011, 03:15:37 am
Hi,

I agree with all your points. Regarding color accuracy, much depends on color processing; it's about the sensor, raw processing, color profiles and so on. I'd really suggest that using a Macbeth Color Checker is a good idea. In my Windows days I used a program called Picture Window Pro that could use a Color Checker to do accurate color matching.

http://dl-c.com/content/view/47/74/


Best regards
Erik



Title: Re: reproduction photography
Post by: Peter van den Hamer on February 05, 2011, 07:38:39 am
On the good color accuracy of Capture One mentioned in the comments: is a sensor without an anti-aliasing screen not the best base to start from, and more likely the cause of its reputation?

Nice to hear from you again.

Would removal/omission of an anti-aliasing filter (Phase One backs) improve color accuracy? In theory, it shouldn't for larger areas: it just blurs the image slightly to remove details which the sensor couldn't resolve. A blurred color test patch should have the same color as a sharp test patch. But aliasing with a low-res Bayer matrix sensor can give weird colors on fine striped details under special conditions.

From what I read, it sounds like the color accuracy of a Phase One back and the Capture One raw converter "just" reflects a lot of attention to accurate profiling of the camera. It probably helps if the entire workflow is supported by one vendor. Maybe they calibrate individual backs in the factory. Undoubtedly some of the legend is also just good marketing, given that user "error" can easily screw up color accuracy ;-)

In that sense the K5 sensor should be quite capable too - a camera I recommended to an artist on a tight budget who wanted to document his paintings, before I discovered your article here. Whether or not the DxO categories (landscape, sports, etc.) have any value, for the gamut volume/color distinction the term "reproduction photography" would have been more adequate than "portrait".

Just to avoid any misunderstandings: DxOMark Sensor does not directly measure color accuracy, but it does measure color noise.
Color accuracy is probably more about doing things right (e.g. illuminant, homogeneous lighting, Raw vs JPG, printer profiles, screen profiles) than about having the best equipment. That said, the K5 (or D7000 or A580) is currently state of the art for its price. I can imagine that X-Rite's Color Checker Passport would help for reproductions, although a graphic artist may only want to take a snapshot with flash and use the JPG straight from the camera. Until you convince them to calibrate their monitor, the rest can wait.
Title: Re: DxO Sensor Mark
Post by: bjanes on February 05, 2011, 11:37:49 am
It is the number of distinguishable colors in the raw data output:

http://dxomark.com/index.php/en/Learn-more/DxOMark-database/Measurements/Color-sensitivity

basically the number of ellipsoids the size of a noise std dev deltaR*deltaG*deltaB that fit within the 'gamut' of the camera (the overall range of R,G,B it can record).

DxOMark does not fully document many of its methods, but they do state that the data are input into the DxO Analyzer (http://info.dxo.com/demokit_analyzer/index.html), and one can glean a lot of information from the documentation for that product. For example, color sensitivity is plotted as ellipses on a CIE Lab chart. As Emil has stated, the color sensitivity of the camera depends on how many of these ellipses (actually ellipsoids in a 3-dimensional plot) fit into the camera space.

(http://bjanes.smugmug.com/Photography/DXO/NoisecovarianceColor/1178153474_2XU6k-O.png)

How much sensitivity is needed for human perception depends on the color sensitivity of the human visual system, which can be shown by MacAdam ellipses (http://en.wikipedia.org/wiki/MacAdam_ellipse). The plot here is in the CIE xyY space at a given luminance. The CIE xyY space is not perceptually uniform, and the greens are exaggerated. Again, an ellipsoid-fitting calculation could be done to determine the number of colors.

(http://bjanes.smugmug.com/Photography/DXO/MacadamEllilpses/1178153417_wKf5b-L.png)

So it is a measure of color richness.   As a practical matter, it should be combined with information from the metamerism index (the degree to which the subspace spanned by the CFA spectral responses overlaps with that of the CIE standard observer) as well as the information that DxO measures on the map from color primaries of the camera to sRGB primaries.  For instance if there are large coefficients in the latter one will have larger chroma noise when mapped to standard output color spaces; see

http://dxomark.com/index.php/en/Our-publications/DxOMark-Insights/Canon-500D-T1i-vs.-Nikon-D5000/Color-blindness-sensor-quality

Basically large coefficients in the color matrix lead to amplification of chroma noise. 

Color accuracy is demonstrated by a CIE Lab plot at a given luminance, similar to what is done in Imatest:

(http://bjanes.smugmug.com/Photography/DXO/ColorFidelity/1178155125_D5gRQ-O.png)

Actually, the Phase One P65+ has the same problem in the red channel as the Canon 500D analysis to which Emil refers: it has nearly as much response to green as red.

(http://bjanes.smugmug.com/Photography/DXO/P65PlusColor/1178167776_hSFgx-O.png)

The D3x red channel has a better response and its matrix coefficients are smaller. However, the greater sensor area of the P65+ results in less chrominance noise, and its print color sensitivity is slightly better than that of the D3x, while the D3x has better per-pixel (screen) sensitivity. Whether or not such differences can be perceived in prints is open to question.

Regards,

Bill

(http://bjanes.smugmug.com/Photography/DXO/D3xColor/1178167839_gefYc-O.png)

Title: Re: DxO Sensor Mark -> 130 cameras
Post by: Peter van den Hamer on February 10, 2011, 06:57:43 pm
I did some minor maintenance on the diagrams in the DxOMark Sensor article (http://www.luminous-landscape.com/essays/dxomark_sensor_for_benchmarking_cameras.shtml).

Note that both Canon cameras and both 80 MPixel medium-format models have not been tested yet by DxOMark, so you will only encounter them in Figure 3 (http://www.luminous-landscape.com/articleImages/PvdH1/DxOMarkSensor_Fig3_134.png).

Peter
Title: Re: DxO Sensor Mark -> 130 cameras
Post by: Bart_van_der_Wolf on February 11, 2011, 06:36:00 am
I did some minor maintenance on the diagrams in the DxOMark Sensor article (http://www.luminous-landscape.com/essays/dxomark_sensor_for_benchmarking_cameras.shtml).

Hi Peter,

Thanks for that. There is however something that 'annoys' me, since it may mislead some casual readers.

Your comparison between coarser or denser sampling suggests that the signal to noise statistics are basically identical. You also state that "This gets us back to “smaller pixels give higher noise levels per-pixel”. But per-sensor-pixel noise is the wrong metric for prints (or, for that matter, any other way to view an image in its entirety)".

The "any other way to view an image in its entirety", only applies to identical area measurements, same size (downsampled) output (and disregards the effect of non-linear gamma conversion of noise). It does not hold when the additional sensor density is required and used for producing larger output (a major reason why I would consider buyng a system with higher MP count such as a MF back). Then the reduced per pixel DR due to smaller charge capacity will hurt image quality. Sensels will either saturate, or shadows will drown in noise, when the scene offers common luminance ranges. Downsampling saturated or read-noise dominated sensels will not produce the same statistical S/N ratio as a larger sensel would.

That's why the DxO DR data are given for both downsampled (identical output/"print" size), and per sensel (native size) scenarios. Those who need large format output should seriously look at the per sensel data rather than the downsampled "Print" metrics.
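For readers following this exchange, a small numeric sketch of the sqrt(N) effect both sides refer to (synthetic numbers, not DxO data): averaging sensels reduces noise by roughly the square root of the number combined, but it cannot restore highlights that already clipped at the sensel level, nor does it help when the full resolution is actually needed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Per-sensel photon noise plus read noise for a dim patch (illustrative numbers).
signal_e, read_noise_e = 40.0, 6.0
pixels = rng.poisson(signal_e, size=(100000, 4)) + rng.normal(0.0, read_noise_e, (100000, 4))

per_sensel_snr = pixels.mean() / pixels.std()
downsampled = pixels.mean(axis=1)            # average 4 sensels into one output pixel
downsampled_snr = downsampled.mean() / downsampled.std()

print(f"per-sensel SNR  : {per_sensel_snr:.1f}")
print(f"4:1 downsampled : {downsampled_snr:.1f}   (~2x better, i.e. sqrt(4))")
```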

Cheers,
Bart
Title: Re: DxO Sensor Mark - resolution normalization
Post by: Peter van den Hamer on February 11, 2011, 12:42:14 pm
"Thanks for that. There is however something that 'annoys' me, since it may mislead some casual readers."

It is indeed tricky material. It is easy to compare things in the wrong way and then reach very wrong conclusions. Especially when the math is not entirely intuitive.

You seem to disagree with a key conclusion, so it may help to walk through the intermediate steps explained in the article. So let's see what we agree/disagree on, step by step:

1) If you need a bigger print, you may need more MPixels than for a smaller print. I think we agree there. Some people want razor-sharp large prints - fine (technically I don't care if it is a real need or not). Then a 40, 60 or 80 MPixel camera could help. A 10 MPixel camera will simply lose some information when the scene contains enough high-frequency information (Nyquist).

2) DxOMark Sensor scores are basically only about noise and dynamic range. Resolution may be relevant, but it is an independent measurement. DxO says: if you need/desire a particular number of MPixels, just filter out any cameras that don't have that. For any camera that has enough resolution, you then compare the noise as follows...

3) To compare noise and dynamic range across cameras, you have to convert them to the same resolution. Say you need 20 MPixels, but want to compare a 40 MPixel camera to a 160 MPixel camera: noise levels or dynamic range per pixel/sensel simply give a misleading answer. Per-sensel measurements would certainly tell you that the 160 MPixel camera has higher per-sensel noise. In reality, the 160 MPixel camera may have better, worse or equal noise compared to the 40 MPixel camera at the same output resolution. This "same resolution" could be 20 MPixels or 40 MPixels, etc. Your "smaller pixels will hold less charge" is correct, but the conclusion that they therefore have a worse dynamic range does not hold once you scale from per-pixel to per-image (see the minimal sketch after this list).

4) From your text, I think you accept (3) and the scaling formulas explained in the article when they are applied to "downsampling". Although I deliberately avoided formulas, the math for this is in my article and in a more technical white paper by DxO itself.

5) "Upsampling" (uprezzing in Michael-speak) is not relevant if you only consider camera's that have enough resolution. So in practice, you could avoid having to worry about it. One could, if somebody really wants this, extend the story to cover upsampling as well as downsampling, but the story becomes really tricky. You can upres without increasing per-pixel noise simply by duplicating pixels or by linear interpolation. But then you have suspicious artefacts in the image. The trick to make it look natural is to add just the right amount of noise (see stories about advanced sharpening techniques) to mask this. Actually the fact that DxOMark has a few 4 and 6 MPixel cameras that are scaled to 8 MPixel show that they use their scaling formula for both downscaling and upscaling. My gut feeling is that this is technicall/scientifically fair, but to be sure you need to read up on pretty theoretical information theory.

To get back to a more practical level:
Those who need large format output should seriously look at the per sensel data rather than the downsampled "Print" metrics.

I really don't recommend this unless all the cameras you are comparing happen to have the same resolution. You have to compensate for resolution differences to compare noise levels. If you want to see resolution itself, check resolution numbers or preferably real resolution measurements that include the optics (lens, CFA, AA filter). 

That's why the DxO DR data are given for both downsampled (identical output/"print" size), and per sensel (native size) scenarios.

Interesting question. Why provide per-pixel measurements if Peter (and I think also DxO) doesn't recommend using them for comparison purposes? As far as I know, they don't even give you an easy way to compare per-pixel ("screen") data. I consider the per-pixel data as raw data. It is useful for manufacturers or others (like me) who want to check the computations.

Peter

PS: I ignored your "non-linear gamma correction" point as I don't think it impacts the discussion. You can do the gamma correction after generating the appropriate output resolution. There should only be minor differences if you do the corrections in the "wrong" order.
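
If anyone wants to check that claim, a quick numerical sanity check (toy numbers: a uniform mid-gray patch with 2% Gaussian noise and simple 2x2 downsampling):

import numpy as np

rng = np.random.default_rng(0)
patch = 0.5 + 0.02 * rng.standard_normal((1000, 1000))  # linear data around mid-gray

def bin2x2(a):
    # average 2x2 blocks into one pixel
    return a.reshape(a.shape[0] // 2, 2, a.shape[1] // 2, 2).mean(axis=(1, 3))

def gamma(x):
    return np.clip(x, 0.0, None) ** (1 / 2.2)

print(gamma(bin2x2(patch)).std())  # resize first, then gamma
print(bin2x2(gamma(patch)).std())  # gamma first, then resize; nearly the same number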
Title: Re: DxO Sensor Mark - resolution normalization
Post by: joofa on February 11, 2011, 01:22:16 pm

3) To compare noise and dynamic range across cameras, you have to convert them to the same resolution. Say you need 20 MPixels, but want to compare a 40 MPixel camera to a 160 MPixel camera: noise levels or dynamic range per pixel/sensel simply give a misleading answer. Per-sensel measurements would certainly tell you that the 160 MPixel camera has higher per-sensel noise. In reality, the 160 MPixel camera may have better, worse or equal noise compared to the 40 MPixel camera at the same output resolution. This "same resolution" could be 20 MPixels or 40 MPixels, etc. Your "smaller pixels will hold less charge" is correct, but the conclusion that they therefore have a worse dynamic range does not hold once you scale from per-pixel to per-image.


In talking about "per-image" statistics what is the notion of "noise" when the "signal", and how that signal is affected with which methodology is chosen to do resampling, is not even considered in your analysis above? Can this type of noise be treated in isolation to the signal?

Some related readings:

http://forums.dpreview.com/forums/read.asp?forum=1022&message=37680938
http://forums.dpreview.com/forums/read.asp?forum=1012&message=37572900


Joofa
Title: Re: DxO Sensor Mark - distinguishing noise from signal
Post by: Peter van den Hamer on February 11, 2011, 05:35:42 pm
In talking about "per-image" statistics what is the notion of "noise" when the "signal", and how that signal is affected with which methodology is chosen to do resampling, is not even considered in your analysis above? Can this type of noise be treated in isolation to the signal?

I looked at both postings, but not the entire threads. So I am trying to guess what you mean, and where you might be going with this.

Q: Can noise levels be measured using regular photos, e.g. of a cat? It would be hard to distinguish signal from noise, wouldn't it?
A: That's not how the DxOMark measurements were done.
DxOMark's "Protocol" documentation says that they measure noise using RAW images of neutral-density filters (=patches) that are backlit using a large diffuse light source. You can measure noise by looking at spatial (repeatable) variation: fixed-pattern noise. And by looking at temporal fluctuations (temporal noise) when you take lots of identical images of the identical source. As far as I know (I asked them in an E-mail) their published numbers are FPN and temporal noise added up. This means that in theory one image suffices.

Q: But could it be done with a real photo, e.g. of a statue of a cat?
A: I wouldn't. And DxOMark Sensor doesn't.
You would need to take multiple images to be able to distinguish noise (variation) from signal (average). But this would measure less noise than what DxO defines as noise (because you would miss FPN like dark-current non-uniformity and photo-response non-uniformity). And a detailed scene would make the setup unnecessarily sensitive to vibrations and drift: you would see fake noise at sharp edges.

Q: When noise measured at one resolution is scaled to a reference resolution, is this sensitive to the scaling algorithm?
A: No. In DxOMark's procedure they measure noise of a 20 MPixel sensor at 20 MPix (MSensel) resolution. Then the resulting signal-to-noise ratio is corrected using a simple theoretical model. So there is no rescaling algorithm involved. In my article, I provide one or two examples of this that I checked by hand.

Q: Would you get the exact same results if you took the test image, rescaled it and then measured noise? In other words is the "simple theoretical model" accurate?
A: The model used for scaling corresponds to what a simple binning algorithm would do (e.g. replacing 2x2 small pixels by one fat one), assumes that you have a competent implementation (e.g. do the measurements and calculations with enough precision), and assumes Poisson noise with no correlation between pixels. It should thus be pretty accurate for the photon shot noise and dark-current noise. The scaling may not apply to the FPN, but FPN scaling cannot be predicted anyway, and it should be smallish. So the model is pretty accurate, and, importantly, it doesn't need to be fully accurate. It is just meant to provide a handicap to compensate for resolution differences: it doesn't attempt to accurately simulate actual devices.
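
A small simulation of that binning model, for anyone who wants to see the sqrt(n) behaviour appear (pure Poisson shot noise, toy signal level of my choosing):

import numpy as np

rng = np.random.default_rng(1)
small = rng.poisson(400, size=(2000, 2000)).astype(float)  # 400 electrons per small sensel

snr_per_sensel = small.mean() / small.std()
binned = small.reshape(1000, 2, 1000, 2).sum(axis=(1, 3))  # replace 2x2 small pixels by one fat one
snr_binned = binned.mean() / binned.std()

print(snr_per_sensel, snr_binned, snr_binned / snr_per_sensel)  # the ratio comes out close to 2 = sqrt(4)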

Q: Do the numbers based on images of test patches have relevance to a real scene? Like Schroedinger's cat or the statue of a cat or somebody's cat?
A: Yes. The overall behavior of a sensor is reasonably well understood. Just like you can characterize the noise in an audio amplifier, rather than having to measure noise specifically when playing Beethoven sonatas or even Neil Young.

Q: But what if the test patches do not generate homogeneous light patches on the sensor? How to deal with offset (=blackpoint) in the processing?
A: Measurement setups are indeed never perfect. These are serious issues. I mentioned those kinds of problems in an earlier posting.
Engineers and scientists will point out that numerous questions remain about test details for any precision measurement: e.g. light source homogeneity, light source stability, finite test patch size, vignetting, dust on the source, dust on the optics. I can assure you (I worked for years in labs) that precision measurements are a major headache. Some of these issues are nowadays covered by international standards, where the experts jointly develop measurement protocols. DxOMark is active in some of these committees (source: LinkedIn and private communications). And DxO says that outside engineers regularly get to see the setup and discuss the procedures used. This is normal in engineering: if you challenge my measurement results, either I exhaustively document the measurement details and you review them, or you send in experts to see if they can find a flaw in the measurements. You can bet that a major manufacturer will contact DxOMark whenever their products get lower scores than hoped for.
You can do a rough check yourself by examining the slopes of various graphs against theory. But the data seems good enough for comparing sensors. And checking for ever more subtle pitfalls in measurements is best left to the manufacturers who hope to see performance increases (that are increasingly hard to measure) in their latest designs.

By the way, http://peter.vdhamer.com/2010/12/25/harvest-imaging-ptc-series/ is a posting about how to measure noise in sensors. I just summarized the material. The source is Albert Theuwissen, an expert on sensor design and modeling.
Title: Re: DxO Sensor Mark
Post by: joofa on February 11, 2011, 05:55:12 pm
Peter,

The point is that one can live in a world of theory, photon shot noise, pixel-level DR measurement, etc. The real world does not necessarily operate like this all of the time. At the end of the day, the usual problem is that we have a single image, and all our notions of signal, noise and SNR, i.e. of "image-level" statistics, must come from the image samples. If the image has FPN buried in it then so be it. Can't do anything about that. What has happened has happened.

The sensor people have a very different goal in life. They can even spend their life with a single pixel as all that matters to them is the notion of DR, noise etc. on that pixel. But that is not necessarily the goal of photography and any image processing applied to it.

Either we admit that we have little theory to talk meaningfully about image-level statistics, or we can try to interpret the image-level statistics in a different light: one which, as a starting point and initial conditions, takes the usual "sensor-level" stuff such as shot noise, read noise and DR, and builds a theory out of it that explains things in practice.

Sincerely,

Joofa
Title: Re: DxO Sensor Mark
Post by: Peter van den Hamer on February 11, 2011, 07:54:36 pm
At the end of the day, the usual problem is that we have a single image, and all our notions of signal, noise and SNR, i.e. of "image-level" statistics, must come from the image samples. If the image has FPN buried in it then so be it. Can't do anything about that. What has happened has happened.

The sensor people have a very different goal in life. They can even spend their life with a single pixel as all that matters to them is the notion of DR, noise etc. on that pixel. But that is not necessarily the goal of photography and any image processing applied to it.

Joofa,

There are indeed people who look at noise and DR top down (users, people who review cameras) and bottom up (experts in the underlying technology). I looked into both because the bottom-up expertise can help find problems in the top-down experience. And vice versa.

DxOMark seems pretty close to your ideal of system-level testing: they test unmodified production cameras by literally taking pictures of gray charts under varying ISO settings. This creates raw files written to memory cards. The patches on the gray charts happen to be round and glass rather than rectangular and printed on paper. But that is because they need to sometimes measure at higher precision than was needed for yesterday's cameras.

Another thing that may bug people is that they report resolution and noise results quite separately from each other: they are 2 different benchmarks (one does ONLY noise, the other does MAINLY resolution). It certainly deviates from the more subjective approach of taking an image of a tree and concluding "the 80 MPixel back is a lot better than my 60 MPixel back".

The final point that could be strange is that they show all their data as numbers. You don't get to see the actual patches like on other review sites. I can imagine that this doesn't appeal to some types of users. For quick-and-dirty choices, DxOMark's top-level score should be more than enough. But they certainly don't provide test pictures taken in the field like, say, Photozone.de does to supplement its graphs. DxOMark's strategy is apparently to link to other reviews that specialize in that kind of thing.

Peter
Title: Re: DxO Sensor Mark - Greeks letters
Post by: Peter van den Hamer on February 12, 2011, 12:33:16 pm
I fixed some Greek letters that had dropped out during the text transfer. If you find any other errors, please let me know.
Title: Re: DxO Sensor Mark
Post by: ErikKaffehr on February 12, 2011, 03:09:53 pm
Hi,

The problem with sample images is that very few are usable. Very few images have in-focus detail in the corners. And even if the images are OK, it may be impossible to compare two different lenses if the subjects or conditions are not identical.

Best regards
Erik

Joofa,

There are indeed people who look at noise and DR top down (users, people who review cameras) and bottom up (experts in the underlying technology). I looked into both because the bottom-up expertise can help find problems in the top-down experience. And vice versa.

DxOMark seems pretty close to your ideal of system-level testing: they test unmodified production cameras by literally taking pictures of gray charts under varying ISO settings. This creates raw files written to memory cards. The patches on the gray charts happen to be round and glass rather than rectangular and printed on paper. But that is because they need to sometimes measure at higher precision than was needed for yesterday's cameras.

Another thing that may bug people is that they report resolution and noise results quite separately from each other: they are 2 different benchmarks (one does ONLY noise, the other does MAINLY resolution). It certainly deviates from the more subjective approach of taking an image of a tree and concluding "the 80 MPixel back is a lot better than my 60 MPixel back".

The final point that could be strange is that they show all their data as numbers. You don't get to see the actual patches like on other review sites. I can imagine that this doesn't appeal to some types of users. For quick-and-dirty choices, DxOMark's top-level score should be more than enough. But they certainly don't provide test pictures taken in the field like, say, Photozone.de does to supplement its graphs. DxOMark's strategy is apparently to link to other reviews that specialize in that kind of thing.

Peter
Title: Re: DxO Sensor Mark - figures
Post by: Peter van den Hamer on February 12, 2011, 05:50:44 pm
FYI: I fixed a typo in Figures 2 and 3 (Powershot G7 -> G9).
I also added 11 new cameras to Fig3. These are important cameras that haven't been tested (yet).
The fact that they are listed is not an indication that all will be tested. We can guess that several (e.g. the Canon 600D) will be tested, while some may not be (the Hasselblads? the Leica S2).

[Update: DxO wrote in a forum that they would like to get hold of (rent/borrow) a Leica S2]

Panasonic / Lumix DMC GF2 / 12.1 MPixel
Canon / EOS 1100D / 12.2
Fujifilm / FinePix X100 / 12.3
Olympus / SP 610 UZ / 14.0
Samsung / NX 11 / 14.6
Canon / EOS 600D / 17.9
Leica / S2 / 37.5
Hasselblad / H4D-50 / 50.1
Hasselblad / H4D-60 / 60.0
Phase One / IQ180 / 80.0
Leaf / Aptus-II 12 / 80.0

Title: Re: DxO Sensor Mark
Post by: joofa on February 12, 2011, 07:10:04 pm
Peter,

I understand what you are saying and appreciate the effort regarding measuring "sensor-level" statistics. But my point was that such numbers don't necessarily describe "image-level" statistics. The way I see it we should make a distinction between the two.

Joofa
Title: Re: DxO Sensor Mark
Post by: Peter van den Hamer on February 13, 2011, 05:34:38 am
I understand what you are saying and appreciate the effort regarding measuring "sensor-level" statistics. But my point was that such numbers don't necessarily describe "image-level" statistics. The way I see it we should make a distinction between the two.

If that means taking an arbitrary photo and measuring the noise, you will get stuck unless the subject is well defined (patches, test chart): as you seem to suspect, you cannot distinguish between signal and noise if you can't accurately predict the signal. So it sounds like a dead end for automated/objective testing. And taking a photo of a well-defined subject (patches, test chart) is what DxO is doing.

What may bother you is that a human (as opposed to software) can sometimes compare two images and tell you which camera is better. The problem is that humans compare the signal+noise to their expectation of the signal. They know whether or not a cat's fur looks grainy.

Peter
Title: Re: DxO Sensor Mark
Post by: Bart_van_der_Wolf on February 13, 2011, 08:53:13 am
If that means taking an arbitrary photo and measuring the noise, you will get stuck unless the subject is well defined (patches, test chart): as you seem to suspect, you cannot distinguish between signal and noise if you can't accurately predict the signal. So it sounds like a dead end for automated/objective testing. And taking a photo of a well-defined subject (patches, test chart) is what DxO is doing.

One can go a long way by subtracting two images, which is of course easier with stationary subjects. Simple subtraction of two images with equal exposure times leaves only the random noise (the standard deviation then needs to be divided by Sqrt(2) when 2 images are used) and eliminates the signal and the pattern noise. This is the better way of determining random noise levels in test patches. I don't know if DxO uses that to reduce the fixed pattern noise e.g. caused by amplifier variations.
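
In code, the pair-subtraction trick is essentially one line (a sketch, assuming two equal-exposure flat-field frames of the same patch):

import numpy as np

def temporal_noise_from_pair(frame_a, frame_b):
    diff = np.asarray(frame_a, dtype=np.float64) - np.asarray(frame_b, dtype=np.float64)
    # the signal and the fixed pattern cancel in the difference; only random noise remains
    return diff.std(ddof=1) / np.sqrt(2)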

Cheers,
Bart
Title: Re: DxO Sensor Mark
Post by: Peter van den Hamer on February 13, 2011, 02:07:36 pm
This is the better way of determining random noise levels in test patches. I don't know if DxO uses that to reduce the fixed pattern noise e.g. caused by amplifier variations.

DxO said that their noise figures include fixed pattern noise. Your approach would, as you indicate, not see the fixed pattern noise. Fixed pattern noise should be reported, as it can be a substantial part of the noise at high ISO (at least in the 5D2-generation cameras; not sure about the K-5 generation).

What I expect DxO does is fit a smoothed polynomial describing the spatial intensity of the light coming from the patch, and then subtract this background level from each pixel to determine the noise. This compensates for low-frequency gradients (non-uniformity due to lighting, filters, vignetting) which are not perceived as noise by observers. In fact, humans hardly see such low-frequency gradients at all. And this technique would still measure the high-frequency (temporal and fixed) variations which are perceived as noise.
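
A sketch of that kind of flattening step, as I imagine it could be implemented (this is my guess, not DxO's code):

import numpy as np

def noise_after_flattening(patch, order=2):
    # fit a low-order 2D polynomial to a flat-field patch and measure the residual scatter
    patch = np.asarray(patch, dtype=np.float64)
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x, y = xx.ravel() / w, yy.ravel() / h
    cols = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, patch.ravel(), rcond=None)
    residual = patch.ravel() - A @ coeffs   # low-frequency gradient removed
    return residual.std(ddof=A.shape[1])    # high-frequency variation left over: the "noise"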

The statement by DxO that FPN is included in their noise figures is consistent with their protocol which mentions that it is important that the test targets are cleaned (dusted?) regularly. Dust on the test target would be measured as noise. It is also consistent with their avoidance of printed targets which tend to have some degree of noise.

Peter