Having read the recent postings in this thread, I still can't grasp Jonathan's concept here.
Let me start from the beginning with the foundational equation:
(Image quality) = (pixel quality) * (pixel quantity).
Let's start with pixel quality, since that is the hardest to define. Pixel quality can range from 0 to 1. Quality 0 is visually meaningless image data such as pure noise, gross blur, etc., with a signal-to-noise ratio of 0. No matter how many quality-0 pixels you have, you cannot discern a meaningful image from them.
Quality 1 is perfect; each pixel is as good as it can possibly be with regard to detail, resolution, etc. If you take an MFDB capture made at base ISO with optimal exposure, focus, lighting, and post-processing, and size that image down to an 800-pixel TIFF, the resulting pixels will have a quality level very close to 1, perhaps 0.997 or something like that.
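To see why downsizing buys you pixel quality: averaging many noisy source pixels into one output pixel averages the noise away. Here's a rough Python/NumPy sketch of that effect (my own toy example with made-up numbers, not part of the test itself):

```python
import numpy as np

# A flat gray patch with additive Gaussian noise, standing in for a noisy capture.
rng = np.random.default_rng(0)
full = 0.5 + rng.normal(0.0, 0.10, size=(1600, 1600))

# Downsize 8x in each dimension by averaging 8x8 blocks (64 source pixels per output pixel).
small = full.reshape(200, 8, 200, 8).mean(axis=(1, 3))

print(round(full.std(), 4))   # ~0.10   noise per pixel before downsizing
print(round(small.std(), 4))  # ~0.0125 noise per pixel after: roughly 8x (sqrt(64)) lower
```

Real resampling filters aren't simple block averages, but the principle is the same: each output pixel is built from many source pixels, so its quality climbs toward 1.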
If you look at the images in
this ZIP file, you can see a demonstration of this. The first image (0) has a pixel quality very close to 1; it was downsized from an image 8x larger in linear dimensions, so each of its pixels is derived from 64 source-image pixels. The number of pixels is the only thing limiting the quality of this image.
Image 2, in contrast, has 4x the pixel count of image 0, but its overall resolution and detail are practically identical to image 0's. The noise level is such that pixel quality is ~0.25, or 1/4. Since the pixel quality of this image is 1/4 that of image 0, it needs 4x as many pixels to match the overall image quality.
Image 4 continues the progression. It has 16x the pixel count of image 0, but double the noise level of image 2, so it has the same overall image quality as both 0 and 2. So it has a pixel quality of ~0.0625, or 1/16.
And then there's image 8. It has 64x the pixel count of image 0, and twice the noise of image 4, so again it has the same overall image quality. So pixel quality is ~0.015625, or 1/64.
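If you want to check the arithmetic against the foundational equation, it's a one-liner per image (relative pixel counts and my approximate quality figures from above):

```python
# Relative pixel quantity and approximate pixel quality for each image in the ZIP.
images = {
    "image 0": (1,  1.0),
    "image 2": (4,  1 / 4),
    "image 4": (16, 1 / 16),
    "image 8": (64, 1 / 64),
}

for name, (pixel_quantity, pixel_quality) in images.items():
    image_quality = pixel_quality * pixel_quantity
    print(name, image_quality)   # all four come out to 1.0: same overall image quality
```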
Different individuals' tastes come into play at this point, but once pixel quality gets down to about 0.25 or so, the noise (or whatever other artifacts are causing the quality loss) starts being noticeable in decent-sized prints ("decent-sized" meaning you send the full-resolution file to the printer and the resulting print comes out below 300-360 PPI). So if noise/grain is not part of your creative vision, and you want to be able to make decent-sized prints of your work, then you need to maintain a pixel quality of ~0.25 or greater.
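If it helps to see where that 300-360 PPI line falls, here's a trivial helper (the 6000x4000 file is just a hypothetical example, not any particular camera):

```python
def print_size_at_ppi(width_px, height_px, ppi=300):
    """Print dimensions in inches at which the full-resolution file lands exactly at `ppi`.
    Prints this size or larger are "decent-sized" in the sense used above."""
    return width_px / ppi, height_px / ppi

# A hypothetical 6000 x 4000 pixel (24 MP) file:
print(print_size_at_ppi(6000, 4000, 300))   # (20.0, 13.33...)    -> about 20 x 13 inches
print(print_size_at_ppi(6000, 4000, 360))   # (16.66..., 11.11...) -> about 17 x 11 inches
```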
If the above conditions apply to you, then any DR measurement methodology that nets a pixel quality below 0.25 is invalid for you. Measuring DR using the B&W text as your reference, or shooting the chart full-frame (or worse, both!), measures DR at a pixel quality far below 1/64 (0.015625), and the noise levels will be intolerable in real-world images. If you were to shoot a portrait for a paying client with the noise level present in the underexposed shot you posted at the beginning of this thread, do you think the client would be pleased? Would you even be able to recognize the client's face? I doubt it, on both counts. So does that test correlate meaningfully to real-world shooting conditions? No. In contrast, if you were to reshoot the test as I specified, composed with the center square 100 pixels wide, and pick the resulting frame that most closely matches image 2 in the ZIP file, the noise level in the overall image would be acceptable to most people, even when printed "decent-sized".
I recognize that not everyone has the same noise tolerance, which is why I put various sizes of text in the chart. Each successively larger line of text corresponds to an additional 1/2 stop of noise, so if you like lots of noise in your images, use the third-smallest line of text instead of the smallest as your legibility threshold, and you'll get a DR reading that is 1 stop greater than if you used the smallest line. If that methodology works for you and your style, great; at least there is a common baseline for comparison with someone whose noise tolerance is more stringent than yours, and both of you have numbers that accurately reflect the results you get shooting in real-world situations.
Using the legibility of the lines of text in the quadrants as a guide (center square 100 pixels wide), the approximate corresponding pixel quality values start at 1/4 for the smallest line of text, and pixel quality is divided by 2 for each successively larger line. When conducting the test, have other elements in the composition so you can evaluate what pixel quality level is the minimum you or your clients will accept. If you decide that your minimum acceptable pixel quality is 1/8 instead of 1/4, based on pixel-peeping at 100% or printing the test images "decent-sized", there is nothing wrong with that. As long as you include that threshold in your result, I can easily correlate it to my own work and tastes and have a good idea of whether your camera is suitable for my purposes. At least there is a standardized methodology involved, so the results achieved by different testers with different preferences can be meaningfully compared.
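For reference, the mapping I'm describing tabulates like this (line 0 being the smallest line of text; the half-stop-per-line figures are the same ones given above):

```python
# Smallest legible line of text -> pixel quality 1/4; each larger line halves
# pixel quality and adds half a stop to the DR reading vs. the smallest-line baseline.
for step in range(4):
    pixel_quality = 0.25 / (2 ** step)
    dr_offset_stops = 0.5 * step
    print(f"line {step}: pixel quality {pixel_quality:.4f}, DR +{dr_offset_stops:.1f} stop(s)")
```

So a tester who uses the third-smallest line (step 2) reads 1 stop more DR than one who uses the smallest line, at a pixel quality of 1/16; as long as that choice is stated along with the result, the numbers remain comparable.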