Great article. A few minor things to correct:

Agree, really interesting article; appreciate the time spent researching and writing it.
Another way to look at the relevance of resolution: nowadays, unless you use expensive lenses on a relatively cheap camera, cameras tend to have enough resolution to handle what the lenses can project onto the sensor. And for most uses, 12-18 MPixels is more than enough anyway. So a properly designed noise benchmark can be used to predict image quality as long as you keep an eye on whether you have enough resolution for your needs.
Peter- Many many thanks for a really superb article. Best thing I've read on the best Photo Site in the Universe. (So good that I am replying after many years of just lurking on LuLa.....).
The one thing my fuzzy little brain has trouble with is the rather huge Elephant in the Room with all of this (which your article, correctly, didn't cover, being a discussion of DxO marks specifically): diffraction softening. Surely, in everyday practical terms, it is the biggest limitation to image quality, along with the resolving power of the lenses.
Cramming more and smaller pitch sensels onto chips means that we will be limited to a very restricted range of apertures with these new high MP count dslrs such as the D800.
As a vague rule of thumb I've always worked on Pixel Pitch in Microns = smallest aperture worth using, e.g. 8 microns gives f/8 (depends on how fussy you are with circles of confusion and all that, of course).
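A quick numeric sketch of that rule of thumb, using the usual Airy-disk approximation. The 1.34-pixel tolerance below is tuned so the formula reproduces the rule; it is a matter of taste, not a law of physics:

```python
# Airy first-minimum diameter: d ~ 2.44 * lambda * N (lambda in microns).
LAMBDA_UM = 0.55  # mid-spectrum green light

def airy_diameter_um(f_number: float) -> float:
    """Diameter of the Airy disk (first minimum), in microns."""
    return 2.44 * LAMBDA_UM * f_number

def smallest_useful_aperture(pixel_pitch_um: float,
                             disk_in_pixels: float = 1.34) -> float:
    """f-number at which the Airy disk spans `disk_in_pixels` pixel pitches."""
    return disk_in_pixels * pixel_pitch_um / (2.44 * LAMBDA_UM)

for pitch in (8.0, 6.4, 4.88):  # roughly 5D, 5D2 and D800 pitches
    print(f"{pitch:.2f} um pitch -> ~f/{smallest_useful_aperture(pitch):.1f}")
```

With this tolerance, an 8 micron pitch indeed gives about f/8, and a D800-class 4.88 micron pitch lands near f/5, matching the worry about a restricted aperture range.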
The whole issue of how many MP we can cram onto a 35mm sensor, once discussed regularly, seems to have been sidelined as SNRs have improved, but surely we are close to some sort of Nyquist limit already. As Scotty from Star Trek said, "ye canny change the laws o' physics, Jim".
I speak as a product photographer who needs both depth of field and the highest resolutions possible and who, being Scottish, would much prefer spending £3K on a DSLR to £30K on a new back to achieve both decent DoF and res.
I know very little of these laws o physics but I'd really appreciate your thoughts..
Yours aye,
Alan
The recorded resolution is always a product of both lens resolution and sensor resolution. Increase the resolution of either one, and the recorded resolution will also increase.
In effect, when one upgrades to a sensor with more megapixels, one automatically upgrades all one's lenses as a free bonus.
What the upgrade from a D7000 to a D800 does in effect, is not only upgrade the resolution of one's lenses, but also effectively converts all one's prime lenses into telephoto lenses in relation to those standards of the lower-pixel-count D7000, and extends the range of all one's zoom lenses in relation to the standards of the D7000. Such an upgrade is almost priceless.
Actually, I realized that I seldom read web articles similar to those Saturday's delights that the established 19th-century periodicals used to bring me.
I have written scientific articles in the past (not on image sensors, mind you). But the 19th century is even before my time. :P
The D3200 was a perfect excuse, and with the price difference from the D7000 I could actually ditch my old 50mm and 105 micro for their new successors.
"DxOMark's Dynamic Range plot for these cameras shows that their Dynamic Range drops by almost 1 EV each time the ISO is doubled. This resembles an ideal amplifier that amplifies the sensor's signal and noise without adding noise of its own. That is impressive."

When DR drops by 1 EV for every doubling of ISO, that is hardly impressive -- it is exactly what you would get by not changing ISO and simply underexposing more and more, then multiplying the raw image by factors of two as needed. What is impressive about the amplifiers in cameras with very high DR is their extremely low intrinsic noise, compared to the sensor full well capacity. The ability to effectively adjust ISO, in a way which improves image quality compared to underexposing, would show up as a DR curve which drops less than 1 EV per ISO doubling.
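The distinction drawn here can be sketched with a toy sensor model. The numbers below (full-well capacity, read noise split into a part before and a part after the gain stage) are purely illustrative, not measurements of any real camera:

```python
import math

FWC = 60_000.0   # assumed full-well capacity, electrons
N_PRE = 3.0      # read noise injected before the analog gain, electrons
N_POST = 12.0    # read noise injected after the gain (e.g. off-chip ADC),
                 # referred to the sensor input at gain = 1, electrons

def dynamic_range_ev(iso: float, analog_gain: bool) -> float:
    gain = iso / 100.0
    capacity = FWC / gain  # electrons recordable before raw clipping
    if analog_gain:
        # Raising analog gain shrinks the post-gain noise referred to input.
        read_noise = math.hypot(N_PRE, N_POST / gain)
    else:
        # "Digital ISO": underexpose and multiply later; noise is unchanged.
        read_noise = math.hypot(N_PRE, N_POST)
    return math.log2(capacity / read_noise)

for iso in (100, 200, 400, 800, 1600):
    print(iso,
          round(dynamic_range_ev(iso, analog_gain=True), 2),
          round(dynamic_range_ev(iso, analog_gain=False), 2))
```

The digital column drops exactly 1 EV per ISO doubling; the analog column drops less than 1 EV at low ISO (where the post-gain noise matters), which is exactly the signature of effective ISO control described above.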
Would it be possible to plot DxO score versus total sensor area (pixel count x pixel size)? That might show a pretty high correlation.
Thank you for an excellent post. One area that was not covered is the difference between pixel binning in the sensor (hardware binning) and binning post capture by downsampling the image (software binning). At very low levels of illumination where read noise predominates over photon noise, software binning can not make a small pixel sensor perform as well as a larger pixel sensor with the same total sensor area.
"Note that although this scaling story holds for photon shot noise and dark current shot noise, other noise sources don’t necessarily scale the same way. In particular, some very high-end CCDs can use a special analog trick (“charge binning”) to sum the pixels, thus reducing the amount of times that a readout is required. This would reduce temporal noise by a further sqrt(N) where N is the number of pixels that are binned. Apart from the fact that only exotic sensors have this capability (Phase One’s Pixel+ technology), DxOMark’s data suggest that this extra improvement doesn’t play a significant role."
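The sqrt(N) advantage of charge binning over software binning in the read-noise-limited regime can be checked with a small Monte-Carlo sketch (all numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 4                # pixels binned into one superpixel
SIGNAL_E = 5.0       # mean photoelectrons per small pixel (very low light)
READ_NOISE_E = 4.0   # read noise per readout, electrons
TRIALS = 200_000

photons = rng.poisson(SIGNAL_E, size=(TRIALS, N)).astype(float)

# Software binning: every pixel is read out (own read noise), then summed.
soft = (photons + rng.normal(0.0, READ_NOISE_E, size=(TRIALS, N))).sum(axis=1)

# Charge binning: charge summed in the analog domain, one readout in total.
hard = photons.sum(axis=1) + rng.normal(0.0, READ_NOISE_E, size=TRIALS)

soft_snr = soft.mean() / soft.std()
hard_snr = hard.mean() / hard.std()
print(f"software binning SNR: {soft_snr:.2f}")
print(f"charge binning SNR:   {hard_snr:.2f}")
```

Analytically the software-binned noise is sqrt(N*signal + N*r^2) versus sqrt(N*signal + r^2) for charge binning, so the advantage is largest when read noise dominates and shrinks once photon shot noise takes over.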
The label Portrait did not cover the content well; we all agree. But I considered it a good measurement for art reproduction jobs with ample (full spectrum) light allowing optimal settings for all components, including a low ISO setting. These are the jobs where you would expect color calibration, profiling from camera to print, and larger print sizes, the last revealing chroma noise when present, which could influence a 1:1 reproduction of colors (with the original still there to check against). The dynamic range of a sensor will not be challenged when reproducing reflective art originals, so it is of less importance. Maybe "Portrait" could be replaced with "Still Life" or "Reproduction".
The Print "filter" on the data may not reflect daily practice for camera users who work with large format printers. I have not checked it again, but I got the impression some years ago that the print filter could be too conservative over time. The filter brings sensor qualities much closer to one another than what large prints can reveal.
Would it be possible to plot DxO score versus total sensor charge capacity (pixel count times pixel full well capacity)? That might show a pretty high correlation.
I agree that the Color Sensitivity benchmark should be very suitable for art reproduction work. But that is a niche for most of us. You might call it "studio". In the text I questioned whether people can actually see chroma noise under these conditions. I have no proof other than that people can hardly see the luminance noise at medium gray at 100 ISO, and that chroma noise at the pixel level should be even harder to see. Anybody feel like generating some test images (of gray patches) using MatLab to simulate what the latest cameras can achieve?
The print-mode data is just a way to normalize pixel-level noise to a common resolution. I agree that if you print large enough, you can do some serious pixel peeping - although that is easier and cheaper to do by clicking on "100%" screen viewing. I guess DxOMark considered pixel peeping more common on screens - even by non-photographers. Only some photographers (the nerdier ones?) inspect a print with their nose touching the print ;)
You probably already know this, but (to be on the safe side) this particular DxOMark benchmark data does not cover the contribution of sensor resolution to print quality. So if you pixel peep, this benchmark tells you what you will see when you are admiring the creamy richness of noise-free bokeh gradients :P
One very small quibble:

When DR drops by 1 EV for every doubling of ISO, that is hardly impressive -- it is exactly what you would get by not changing ISO and simply underexposing more and more, then multiplying the raw image by factors of two as needed.
The ability to effectively adjust ISO, in a way which improves image quality compared to underexposing, would show up as a DR curve which drops less than 1 EV per ISO doubling.
Ray,
"Product of" would obviously be wrong, but I guess you meant "function of".
But that claim is like saying "for a given lens, increasing sensor resolution will not reduce overall image sharpness" - which is kind of hard to disagree with :)
So, given that you recommend upgrading to a high res sensor to utilize the full capabilities of lenses (which I kind of did myself: 6 MPix APS-C 10D to 21 MPix FF 5D2), let's check how much this helps using your own example: a Canon 50mm/1.4 lens, used on full-frame cameras, and measuring the max/max/max resolution using DxOMark data.
This gives 55 line_pairs/mm on a 12.7 MPix Canon 5D. Assuming the lens could keep up with the increased resolution of the 5D2 (there is no DxO lens data for 5D3 yet), it would give 71 lp/mm (= 55 lp/mm * sqrt(21.1/12.7)).
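The scaling arithmetic here is just the square root of the megapixel ratio; a one-liner to check it:

```python
import math

def scaled_lp_mm(lp_mm: float, mpix_old: float, mpix_new: float) -> float:
    """Resolution if it scaled perfectly with the linear pixel count."""
    return lp_mm * math.sqrt(mpix_new / mpix_old)

print(round(scaled_lp_mm(55.0, 12.7, 21.1), 1))  # -> 70.9 lp/mm
```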
Firstly, the highest resolution for any lens on any camera measured by DxOMark so far is only 67 lp/mm. So we can't expect 71 lp/mm from a lowly Canon 50/1.4 (a successor is expected). Instead, the measured value for the 50mm/1.4 lens on the 5D2 is 63 lp/mm. This is a 15% increase compared to the three-year-older 5D design.
This confirms your (somewhat irrefutable) claim that higher-resolution sensors contribute to higher-resolution images. But it also shows that lenses don't really keep up with sensor resolution increases:
- decreasing the pixel pitch by 22% (= increasing the MPixel count by 66%) only results in a 15% linear resolution increase.
- some of that 15% increase is probably due to unrelated camera improvements. Compare the pricey 85mm/1.4D on the Nikon D300s versus the D700: both have a 12 MPixel sensor resolution, but show an 8% lp/mm overall resolution difference (AA filter? crosstalk? processing?). So a modern 12 MPixel full-frame camera would presumably give better results than the old Canon 5D design, and the contribution of sensor resolution to the 15% measured overall resolution improvement might be below 10%. We could test this as soon as DxOMark tests the lens on the 5D3 (newer, roughly the same resolution as the 5D2).
- the 5D had huge pixels for its time. The 8 MPixel 350D was already out. If you scale the full-frame 5D down to 1.6x APS-C, you get a mere 5 MPixel camera. So the 5D had unusually low resolution for its time. A simplistic calculation for the Canon 5D says that it cannot outresolve 59 lp/mm: (59 lp/mm × 2 lines/pair × 24 mm) × (59 lp/mm × 2 lines/pair × 36 mm) ≈ 12 MPixels.
- the single figure resolution numbers provided by DxOMark are at each lens' best aperture, at the best zoom setting, and in the middle of the image. This max/max/max measurement obviously flatters the true capabilities of the lens.
- we are doing the exercise at full frame. Many people have smaller sensors. An APS-C sensor has 1.5 or 1.6 times smaller pixels than a full-frame sensor with the same resolution. This means that "increasing resolution to get the most out of your lenses" will probably give even less benefit for APS-C or smaller cameras. Arguably you need more than a 12 MPixel medium format camera, but these are not for sale, and users are less likely to crop there.
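The simplistic Nyquist bound used for the 5D in the list above can be generalized to any sensor size and pixel count:

```python
import math

def nyquist_lp_mm(mpix: float, width_mm: float, height_mm: float) -> float:
    """Upper bound on recordable line pairs per mm for a given pixel count."""
    # Pixels along the width, assuming square pixels filling the frame.
    pixels_along_width = math.sqrt(mpix * 1e6 * width_mm / height_mm)
    pixels_per_mm = pixels_along_width / width_mm
    return pixels_per_mm / 2.0  # two pixel rows per line pair

print(round(nyquist_lp_mm(12.0, 36.0, 24.0), 1))  # 5D-class full frame, ~59
print(round(nyquist_lp_mm(21.1, 36.0, 24.0), 1))  # 5D2-class full frame
```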
The story that, if you migrate from APS-C to full frame, you should increase resolution if you plan to crop back to APS-C makes a lot of sense, assuming you owned an FX lens on a DX camera. Incidentally, the 36.3 MPixel D800(E) and the 16 MPixel D7000 both have a pixel pitch of 4.8-ish micrometer. The numbers are so alike that it may not be a coincidence (some product manager said "scale the D7000 sensor to full frame!"). The blue scaling line in Figure 6 shows that the D800 even performs pretty comparably to 2 (actually 2.25) D7000 sensors tiled side by side.
Peter
Great article Peter.
I have to disagree with Ray. You can't just increase sensor resolution forever and expect the lenses to handle it. The camera system will only perform as well as its weakest part. It so happens that lenses are still way ahead of FF sensors. All you have to do is look at the pixel size of the cheap compacts; of course the lenses on our SLR systems are as good as or better than the compacts'. Ray is correct for now, yes. But the wording assumed lenses can handle anything, which is not right.
A Bayer system would hit the wall around 2x the red wavelength: 720 nm, or 0.7 microns, x 2 = 1.4 microns. Most DSLR sensors are 5 to 6 microns for FF or 4 to 5 microns for APS-C. There is still free resolution for now.
Changing the ISO is just a term for underexposing and compensating in the camera (via analog and sometimes digital scaling). Your "exactly what you would get" only applies to an ideal case. Approximating the ideal case is IMO pretty impressive.
Digital scaling, either in camera or in post-processing (multiplying raw image integer values on a computer long after the exposure), is exact and requires no impressive electronics. The impressive electronics for ISO control is analog scaling which multiplies signal and shot noise more than readout noise, improving the slope of the curve at low ISO (where readout noise is significant).
I think you've got things back to front here when you write: "You can't just increase sensor resolution forever, expecting the lenses to handle it."
Lenses do not handle sensor resolution. It's the other way round. A lens will always project an image of a certain quality and resolution, depending on the aperture used, regardless of the quality of the sensor.
However, something is always lost as the sensor records the projected image from the lens, even when it's a low quality lens and high pixel-count sensor.
If the lens used is a high quality lens, then of course more is lost in the recording process than would be the case using a low quality lens with the same sensor.
However, setting aside semantics, you are correct that one cannot expect unlimited increases in sensor resolution to continue to provide increased resolution from the same lens. As I mentioned in my reply to Peter, there's a situation of diminishing returns that applies so that at some point any increase in the resolution of the recorded image due to increases in sensor resolution will be so small that it will be unnoticeable in practice.
You can test this for yourself if you've not discarded all your earlier, low resolution cameras, and you don't even have to acquire a low-resolution lens for the purpose. Any 35mm format lens can be considered low resolution from F11 to F32, if it stops down that far.
"The impressive electronics for ISO control is analog scaling which multiplies signal and shot noise more than readout noise, improving the slope of the curve at low ISO (where readout noise is significant)."

That is only impressive as a solution to the problem of the part of the read noise that enters the signal after that analog amplification is applied, such as noise that enters during transportation of the analog signal across the edge of the sensor and then off the sensor to an off-board ADC. More impressive still is avoiding that part of the noise entirely by doing the ADC earlier, at the edge of the sensor, reducing sensor noise to mostly just the dark noise from within the photosites themselves. Some hallmarks of this avoidance of read noise from "downstream" sources are that the sensor noise is lower, and scales in the same way as signal and photon shot noise, so that SNR scales exactly with Exposure Index(*). These desirable characteristics are shown by many of the newer sensors used by Sony, Nikon, Olympus and Panasonic.
(*) A little rant on misuse of jargon: can we please stop using "ISO" as the name for multiple related but different measurements?

I completely agree: as many sensitivity measurements coexist in the same norm, it would be nice to know if one is looking at a saturation-based, SNR-based, or gray-level-based one, or a REI (which is a cool acronym for a completely arbitrary number just made to sound good in tech specs ;D ).
"[...] where the disadvantages of using "ISO saturation speed" as a sensitivity measure are mentioned."

The disadvantages are the same for all output-based methods: they take into account a rendered image and are biased by rendering choices: tone curve, highlight reconstruction, and so on.
"... REI (which is a cool acronym for a completely arbitrary number just made to sound good in tech specs ;D )."

The REI serves one purpose, probably of little interest to us though: getting out-of-the-camera JPEGs that look reasonable when using fancy multi-zone automated metering systems in casual point-and-shoot photography. My guess is that DSLRs and other system cameras use SOS, not REI.
The disadvantages are the same for all output-based methods: they take into account a rendered image and are biased by rendering choices: tone curve, highlight reconstruction, and so on.
A saturation-based method makes much more sense to my eyes if it uses saturation of the raw file, as I understand DxOMark does (http://www.dxomark.com/index.php/About/In-depth-measurements/Measurements/ISO-sensitivity).
I'd even be tempted to think that it is the only unambiguous method for determining sensitivity ...
I like to print LARGE! Really LARGE. Things like 72" x 48" are awesome, and it is what I do. For me resolution matters...mostly in how many or few frames I need to include in my mosaic panoramic image to get somewhere between 240 DPI and 300 DPI where I like to print. The normalization of resolution is necessary for comparison when folks won't use all that excess resolution. For this reason, the screen measurements are slightly more helpful to me.
Here are some graph updates to cover new DxOMark data for:
- Sony Alpha 99: 89 (;D)
- Pentax K5 IIs: 82
- Canon 6D: 82
- Nikon 1 V2: 50 (hmmm)
You may need to log in to see the figure.
On the other hand, when comparing noise level and DR at elevated exposure index, only SOS makes sense, because that compares at the same shutter speed when using the same aperture ratio in the same lighting. Pushing the DR and SNR 18% curves to the left, and thus down, because RAW files have more than 1/2 stop of highlight headroom (which DxO does) makes no sense when comparing low-light performance, so those graphs are best read by pushing the dots at ISO 200, 400, etc. back into alignment. Fortunately it seems that the "full SNR" curves at DxO are labelled with the camera's "ISO speed" settings of 200, 400, etc., so those can be compared without adjustment.
For the purpose of testing ISO, DxO does not use a lens; otherwise the results would be all over the place. This is why too much nit-picking attention directed at small differences in ISO serves little purpose for the practical photographer, who has to use a lens.
So I guess that DxO uses a lens in some form or other (commercial lens, collimator) to measure ISO sensitivity. This reduces the amount of stray light that bounces around the room and even bounces around inside the camera.
Testing protocol for ISO sensitivity
The purpose of (saturation-based) ISO sensitivity measurement is to measure the exposure necessary to reach a given sensor's saturation point.
To measure the camera sensor’s ISO sensitivity, we set up the camera body alone (without a lens) on a stand to receive light from a controlled source.
The source is positioned far enough away from the camera sensor to ensure good light uniformity on the sensor plane. We then precisely measure the illuminance received by the sensor with a certified lux-meter.
DxO says "To measure the camera sensor’s ISO sensitivity, we set up the camera body alone (without a lens) on a stand to receive light from a controlled source."
Unfortunately if you cannot separate lens from body, there doesn't seem to be a way to distinguish between the efficiency of the sensor and the light losses within the lens.
Hi Peter,
I'd wondered about that and also assumed that there'd be no way to distinguish between the efficiency of the sensor and the transmission efficiency of the lens if one couldn't separate lens from body.
However, what seems remarkable is that certain models of P&S cameras with fixed lenses seem to test exactly spot on with regard to ISO sensitivity according to DXO standards, which seems a bit of a coincidence.
Hi Ray,
Perhaps it is because with a fixed lens, the ISO is already compensated for known lens transmission losses. We've also seen such under-the-hood gain adjustment behavior (http://www.luminous-landscape.com/forum/index.php?topic=47800.msg398677#msg398677) for DSLRs when the Aperture value gets wider than f/2, which is when a small compensating amplification boost of the signal is detectable.
Cheers,
Bart
Hi Bart,
If this is the case, then one wonders why the manufacturers of more serious cameras with detachable lenses, knowing that all their lenses have T/stop values which are numerically greater to some degree than the F/stop value, do not at least attempt to compensate for such transmission loss by understating the nominated ISO values instead of overstating them.
Here is a response that I wrote to a similar remark.

Yes, thank you.
If you do a text search on the article, please use "color".
"The ability to differentiate colors is called "Color sensitivity" in the article (see Figure 8). The full DxOMark measurements include much more detailed measurements of how the camera handles color. This includes the color gamut which the sensor can handle, the metamerism index, and the spectral behavior of the color filters."

It doesn't explain much, except simple counting of terms. Of course I want to believe that DXO uses the most complicated and scientific bunch of methods, but how exactly does it affect this strange so-called bits rating?
Maybe it would be more honest to call the article on such a reputable resource as LuLa "DxOMark Camera Sensor Noise Explained", and tell readers that the third line of this rating is meaningless?
"DxOMark Camera Sensor ratings essentially measure image noise and dynamic range."

and
Another reason why the term “Sensor” in the name of the benchmark can be a bit confusing is that the benchmark only covers the noise performance of the camera sensor.
tell readers that the third line of this rating is meaningless?
Very detailed explanation, thanks.
But article says practically nothing about colour.
DXO gives some points with decimals (strangely called 'bits' - but the term bit means binary digit!) to colour, but it appears to be a simple measure of the noise of the three colour channels, nothing more.
But in practical terms only the blue channel is relevant, because it is the least sensitive in all modern cameras.
Moreover, DXO calls these strange points meaningful for portraits! But no one shoots portraits while inspecting the RAW data channel by channel.
DXO simply fails to say something useful about colour. And I wonder why. Because people buy megapixels and noise levels?
Turning back to colour, it is more interesting and much more critical to know, firstly, the spectral transmission of the lens (because DXO is the only company testing both lenses and sensors), and secondly, the actual spectral sensitivity of the sensels (and the colour gamut of cameras).
It differs much between vendors and models (but not between samples of this expensive stuff :-).
Software correction fundamentally cannot help with this problem (unless we convert the photographer into a painter ;-) because there are many (infinitely many, actually) different stimuli in the real world that appear on a trichromatic camera's sensor as the same R-G-B values.
Of course, improving tonal resolution can help, but not much (and modern cameras already use at least 12-bit processing). This happens because the spectral sensitivity of the colour filter array on the sensels is not ideal, and each channel has some small (or big) differences from the spectral sensitivities of the human eye (and of other sensors too).
Do the DXO points say something about it? Nope. Pity.
Do we have many other ways to measure the noise of digital cameras with enough practical accuracy? Yes, we have them on many websites (and we can shoot samples ourselves or ask people to make samples and share RAW files).
And, finally, what is so "exciting" in DXO points - a simple website database interface giving us an illusion of knowledge? Not much.
...metamerism index in the DxO test, which describes how well the sensor can match the 16 color fields of the X-rite color checker.
"Not necessarily, BJL. Not all lenses used at the same aperture let pass the same amount of light."

An issue that is irrelevant when using TTL light-metering. The only worry that I would have about the overstatement of a camera's sensitivity settings is if, with comparable lenses and the same aperture ratios and apertures and lighting conditions, one camera uses a significantly longer exposure time than others when the cameras are at the same "ISO" sensitivity setting. The DxO measurements of raw level placement tell me nothing about that problem.
Meanwhile, I am happy to agree with the CIPA and ISO that the measurements based on the Standard Output Sensitivity as defined by ISO standard 12232, and as used by most or all camera makers, are valid and useful, especially for those of us who use TTL light metering most or all the time.
The underlying physics is that a sensor can distinguish exactly the same colors as the average human eye, if and only if the spectral responses of the sensor can be obtained by a linear combination of the eye cone responses. These conditions are called Luther-Ives conditions, and in practice, these never occur. There are objects that a sensor sees as having certain colors, while the eye sees the same objects differently, and the reverse is also true.
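A sketch of how one could test the Luther-Ives condition numerically: fit the sensor's spectral responses as a linear combination of the cone responses and look at the residual. The Gaussian curves below are crude stand-ins for real measured spectral data, purely for illustration:

```python
import numpy as np

wl = np.arange(400, 701, 5, dtype=float)  # wavelengths, nm

def band(center_nm: float, width_nm: float) -> np.ndarray:
    """A Gaussian spectral response, as a stand-in for measured curves."""
    return np.exp(-0.5 * ((wl - center_nm) / width_nm) ** 2)

cones = np.stack([band(565, 45), band(540, 40), band(445, 25)])   # L, M, S
sensor = np.stack([band(600, 35), band(530, 40), band(460, 30)])  # R, G, B

# Least squares: find the 3x3 matrix M minimizing ||sensor - M @ cones||.
M, *_ = np.linalg.lstsq(cones.T, sensor.T, rcond=None)
residual = sensor - (M.T @ cones)
rel_error = np.linalg.norm(residual) / np.linalg.norm(sensor)
print(f"relative fitting error: {rel_error:.3f}")  # 0 would mean Luther-Ives holds
```

With real data, a nonzero residual quantifies how far the sensor is from the Luther-Ives condition, and hence how much metameric failure to expect.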
I guess the easy way to visualize the above [being metameric failures in color representation] is as in the graphic below. (http://djjoofa.com/data/images/metamerism_illustration.jpg)
One should not assume that SOS is used with his camera. The Nikon D800E uses REI (recommended exposure index). My D800E places the metered tone at 14% saturation, 2.82 stops below clipping. This is close to the 12.7% saturation of the old saturation standard and is in line with the DXO-measured ISO of 73 for the nominal camera setting of 100. This value, 2.82 stops below saturation, allows about 0.34 EV of headroom for the highlights. The saturation standard gives a saturation of 12.7%, allowing 0.5 EV of headroom for the highlights.
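Bill's placement arithmetic can be made explicit (assuming, as is conventional, that metering places mid-gray at 18% of saturation):

```python
import math

def stops_below_clipping(saturation_fraction: float) -> float:
    """Stops between the metered tone and raw clipping."""
    return math.log2(1.0 / saturation_fraction)

def highlight_headroom_ev(saturation_fraction: float,
                          metered: float = 0.18) -> float:
    """Extra headroom relative to a classic 18% mid-gray placement."""
    return stops_below_clipping(saturation_fraction) - stops_below_clipping(metered)

for s in (0.14, 0.127):
    print(f"{s:.3f}: {stops_below_clipping(s):.2f} stops below clipping, "
          f"{highlight_headroom_ev(s):.2f} EV highlight headroom")
```

This reproduces the figures quoted: 14% placement sits about 2.8 stops below clipping with roughly a third of a stop of headroom, and 12.7% gives about half a stop.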
I am struggling with whether this loss of information is equivalent to a linear projection.
Question: when people calibrate colors, they are essentially (non-linearly) un-distorting a 3-dimensional color space. Why is that needed?
In other words, if I had a true Luther-Ives conformant sensor, would I still need non-linear color calibration software (e.g. buried deep down in LR)?
The SOS standard allows for some rounding, to accommodate the conventional 1/3 stop increments, and the gap between 12.7% and 14% is within that tolerance. But I agree that in principle it is allowed by CIPA rules for a CIPA member like Nikon to use REI instead of SOS.
How are you measuring this 14%? SOS and REI are defined in terms of output in JPEG or such, not raw files.
P.S. Here are the definitions again, in the original CIPA standards that became ISO standards: http://www.cipa.jp/english/hyoujunka/kikaku/pdf/DC-004_EN.pdf
Since I use raw files for my work, neither the REI nor the SOS method is applicable, so I use the saturation standard (as does DXO). Their method is shown here (http://www.dxomark.com/index.php/About/In-depth-measurements/Measurements/ISO-sensitivity).

Thanks for the details Bill.
Update: formula for SMI enclosed.
Hi,
This is what DxO-marks says about SMI: http://www.dxomark.com/index.php/About/In-depth-measurements/Measurements/Color-sensitivity
Best regards
Erik
ErikKaffehr
You want something like this?
Yes, Erik, thank you. It's exactly what I worried about.
But it is a two-digit index of conformity over only 24 color patches.
(The test target is printed and has a much narrower gamut.)
What is the light source used in the test - the one the ISO standard prescribes, or the one actually used? Or is the test done over a range of color temperatures or light sources? I recall something like Canon sensors being more color-accurate near 4000K light than at 5000K, which probably does not give a good SMI quote. DxO did not test Sigma sensors, but it would be interesting to see what their SMI result is.
The Luther-Ives condition cannot be met by RGB-filtered sensors, only by another eye. Would multi-spectral imaging meet that condition?
There has been the Sony RGBE sensor: RD1
http://en.wikipedia.org/wiki/RGBE_filter
For art reproduction multi-spectral imaging is already done.
The HP G4010 desktop scanner uses another approach by scanning twice with different light sources and stacking the samples with adequate algorithms.
Women who have tetrachromacy set another condition too :-)
The Portrait score as a reference for art reproduction, as I thought of it, combined with SMI not counting in the "Portrait" scoring, seems more or less contradictory. Better to keep it as the Portrait score if the main ingredient is color noise.
--
With kind regards, Ernst
http://www.pigment-print.com/spectralplots/spectrumviz_1.htm
December 2012, 500+ inkjet media white spectral plots.
hjulenissen
Do you have a reference for this?
Yes, but MaxMax.com uses JPEG - and this method is completely wrong.
And about DXO's new results.
A99 - 1555 Low-Light ISO
RX1 - 2534 Low-Light ISO (pretty similar to the same sensor in the D600)
As we know, the A99's mirror takes away around 1/4 of the light.
How exactly did DXO measure such a big difference?
"And about DXO's new results."

Because DxO is not measuring the signal in photoelectrons counted by the sensor: it is looking at raw output levels produced at the various ISO exposure index settings on the cameras, after analog gain and analog-to-digital conversion. Thus, a camera maker's decision to apply a greater or lesser amount of analog gain at a given ISO setting will give a higher or lower DxO sensitivity "score", even from the same sensor tested with the same exposure H in lux-seconds (e.g. same intensity of sensor illumination L in lux and same exposure time t in seconds; H = L*t, ISO-defined exposure index EI = 10/H).
By the way, my analysis of the DxO results for some different cameras with the same sensor confirm that the horizontal scales on its SNR and DR curves do not compare sensors at equal exposure in lux-seconds, so are misleading in comparing performance in low light conditions. You get far closer to comparison at equal exposure (i. e. at equal exposure index) if you move each dot on those curves horizontally back to the values 100, 200, 400 etc. indicated by the cameras' ISO sensitivity settings. That is, compare at the ISO exposure index values used by the camera, not the DxO raw file level saturation measurements.
"The lower raw placement of the E-M5 does not at all hurt its IQ at equal exposure level."

Now imagine that you do raw+JPEG shooting. With the approach where the camera undersaturates, you either get a good raw (and a too-bright JPEG) or a good JPEG (and an undersaturated raw). That is not a problem for JPEG shooters (if they like the in-camera JPEGs from a particular camera), or for raw shooters, as they saturate as they want (at the expense of a usable preview).
"now imagine that you do raw+jpg shooting... with the approach when camera undersaturates you either have a good raw (and too bright JPG) or good jpg (and undersaturated raw)."

I do not need to imagine, as I do use RAW+JPEG with the E-M5, and I do not see any problem with either the raw or the JPEG files. The in-camera JPEGs have appropriate levels (not too bright or too dark), and there is no detectable problem with the raw files: the default conversions from various raw converters give JPEGs similar to the in-camera ones. And as I have tried to explain and illustrate, the lower raw level placement does no measurable harm to noise levels on the E-M5, which is near the extreme of low raw level placement.
But perhaps by "undersaturated raw" you mean a raw file
"open in rawdigger and see the raw histogram... but I believe that you will call that "non detectable" :-)"

I only care about differences that are detectable in the final displayed prints, possibly measured by SNR when the sensors receive equal exposure. I do not care about a mere shift left or right of the numerical levels in the raw histogram, since my end goal is to look at pictures, not histograms.
"you mean converters like ACR/LR that apply hidden exposure correction (convert to DNG and see baseline exposure tag value)"

Again: I do not care about how the converter gets there, I only care about how the results look. I think that the BaselineExposure tag is there exactly to tell the raw converter where the midtones have been placed, because (despite some misunderstanding of the intent of the descriptive ISO standards) there is no prescriptive industry standard for where levels _should_ be placed in raw files, and camera makers are free to make their own choices. So this tag probably helps raw converters produce a good default conversion.
"that is not the issue... the effect of difference in gain itself provided that you deliver the same amount of light to the sensor is not that big (provided that you are not clipping by applying a bigger gain for example, etc)"

Agreed: that has been my main point all along!
"I understand that you have no issue to leave a stop or more on the table even if you can have more light to the sensor, it is a personal call."

You completely misunderstand. Please read the end of my previous post again about the difference between "ETTR" (which is what you are talking about when you mention "more light to the sensor") and the subsequent degree of amplification of that raw signal reflected in the ADC output:
By the way, almost any raw conversion involves an "exposure correction", since raw files almost always place midtones half a stop or more below 18% of saturation, so a default raw conversion (flat tone curve, no contrast adjustment) must scale levels up by half a stop or more.
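The half-stop scaling mentioned above follows directly from where the midtone sits relative to 18% of raw saturation. A minimal sketch (the `correction_stops` helper and the placement figure are illustrative assumptions, not measurements of any particular camera):

```python
import math

def correction_stops(midtone_fraction, target=0.18):
    """Stops of scaling a default conversion needs to move the raw
    midtone placement up to the target (18% of saturation)."""
    return math.log2(target / midtone_fraction)

# A midtone placed half a stop below 18% needs about +0.5 stop of scaling:
placement = 0.18 / 2 ** 0.5
needed = correction_stops(placement)   # 0.5 stops
```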
My recurring question is when and why different levels of scaling up are better or worse. And you seem to agree that it does not matter much...

"...that has been my main point all along!"

Indeed, that I agree with: if you receive the same amount of light (aperture and exposure time) and you do not incur any clipping in the raw, or any ill effects (alleged non-linearities, blooming, whatever) near the well saturation point, then changing just the gain may or may not be beneficial depending on your sensor, and any benefits may not be significant. For Canon-like sensors there may be some gains in the shadows (while the gain is applied to the signal, not to digital data); for Sony-like sensors there may be no practical benefit; for Panasonic-like sensors (they are closer to Sony than to Canon), you decide. And for cameras where the gain is neither analog nor digital but merely a tag value in the raw file, it is not even a question. But I was trying to turn the tables 8) myself towards other aspects of the situation...
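The Canon-like vs Sony-like distinction above can be illustrated with a toy input-referred noise model: shot noise, read noise ahead of the gain stage, and downstream/ADC noise that analog gain divides when referred back to the sensor. All figures below are assumed for illustration, not measured values for any real camera:

```python
import math

def snr(signal_e, gain, pre_noise_e, adc_noise_dn):
    """Toy SNR model: shot noise + pre-gain read noise + downstream
    noise referred back to the input by dividing by the analog gain."""
    noise = math.sqrt(signal_e + pre_noise_e**2 + (adc_noise_dn / gain) ** 2)
    return signal_e / noise

shadow = 20.0  # electrons: a deep-shadow signal

# "Canon-like" (assumed): large downstream noise, so analog gain helps shadows.
canon_low  = snr(shadow, gain=1.0, pre_noise_e=2.0, adc_noise_dn=12.0)
canon_high = snr(shadow, gain=8.0, pre_noise_e=2.0, adc_noise_dn=12.0)

# "Sony-like" (assumed): on-chip ADC, tiny downstream noise, gain hardly matters.
sony_low   = snr(shadow, gain=1.0, pre_noise_e=2.0, adc_noise_dn=0.5)
sony_high  = snr(shadow, gain=8.0, pre_noise_e=2.0, adc_noise_dn=0.5)
```

Under these assumptions, raising the analog gain improves the Canon-like shadow SNR noticeably while leaving the Sony-like case essentially unchanged, matching the comment's point.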
As far as I can tell, the concept of "camera equivalence" by Falk Lumo is identical to my "equivalent image" that was published on luminous-landscape in 2007.
Shifting gears here:
How can it be that an APS-C camera from Nikon (the D5200) outscores the full-frame 5D III, 84 to 81? Is this score a true reflection of IQ reality? Does this undersell the Canon camera, or is Canon THAT FAR behind?
I am really beginning to pine for sites that just show you the reference images and let you draw your own conclusions. This putting a number to it seems to inflate pretty minor differences and obscure the things that might matter to you...those things you see with your own eyes.
"but the APS sensor places more demands on the lens which is not taken into account in the DXO testing."

But then you should use lenses designed for the smaller image circle; smaller optical elements (smaller element diameters) should have fewer manufacturing issues, even compared with FF lenses of which only the center of the frame is used, shouldn't they?
Hello. This will be my first answer here (you may have read answers from me at dpreview, Canon Rumors, etc.).
The APS-C sensors from Toshiba/Nikon and Sony/Nikon, with row-wise ADC at the sensor edge, have both a high QE and very low electron-referred readout noise, which Canon cannot compete with today because of their older sensor technology: a long analog signal path and late amplification stages.
This means that Canon's DR will be inferior at base ISO compared to Nikon's sensors. Regarding picture quality, and especially above ISO 550, the Canon 24x36mm sensor will be better due to its larger size, and the demands on the lenses (contrast, resolution) are smaller with a 24x36mm sensor than with the pixel density of an APS-C sensor. What we can hope for is that Canon reduces its high readout noise and banding and increases QE.
(My sentence structure and word choice may be a little Swedish/English.)
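The base-ISO DR gap described above can be illustrated with the usual engineering definition, DR = log2(full well / read noise). The full-well and read-noise figures below are illustrative assumptions, not measurements of any actual Canon or Nikon sensor:

```python
import math

def dynamic_range_stops(full_well_e, read_noise_e):
    """Engineering dynamic range in stops, both inputs in electrons."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical "long analog path" design vs "on-chip ADC" design:
older_design = dynamic_range_stops(full_well_e=60000, read_noise_e=30)  # ~11 stops
onchip_adc   = dynamic_range_stops(full_well_e=60000, read_noise_e=3)   # ~14.3 stops
```

With these assumed numbers, cutting readout noise tenfold at the same full well buys roughly 3.3 stops of base-ISO DR, which is the kind of gap the comment attributes to the sensor architectures.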