This is a weak point of your way of measuring DR, as it is of all customary measurements: the result depends on the light source.
IMO the pixels need to be viewed on their own, without demosaicing and without white balancing, so that the result is independent of the light source.
For example, take the shot of the Stouffer wedge, a standard tool for measuring DR. The wedge itself is color neutral, but without white balancing it looks greenish in that shot, due to the spectral composition of the light and the spectral responses of the color filters.
Looking closer, it turns out that the greens clip at least 1/2 stop before the blues, and even further ahead of the reds.
At the other end of the wedge, the blue and red pixels make the result noisy on steps where the green pixels are "clean".
Consequently, the measured DR would probably differ by more than one stop under different lighting, or with a magenta filter.
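That per-channel clipping behavior is easy to check on the undemosaiced data. Here is a minimal sketch on a synthetic mosaic (the RGGB layout, the 4095 white level, and the "greens ~1/2 stop hotter" factor are all assumptions for illustration, not taken from the actual shot):

```python
import numpy as np

def clipped_fraction(mosaic, white_level, offsets=None):
    """Fraction of saturated pixels per Bayer channel, on raw
    (undemosaiced, un-white-balanced) data. Assumes RGGB by default."""
    if offsets is None:
        offsets = {"R": (0, 0), "G1": (0, 1), "G2": (1, 0), "B": (1, 1)}
    return {name: float((mosaic[dy::2, dx::2] >= white_level).mean())
            for name, (dy, dx) in offsets.items()}

# Synthetic mosaic: greens pushed ~1/2 stop hotter than red/blue,
# as in the greenish wedge shot, so they clip first at white level 4095.
rng = np.random.default_rng(0)
base = rng.uniform(2000.0, 4500.0, size=(200, 200))
mosaic = base.copy()
mosaic[0::2, 1::2] *= 1.4   # G1 sites, roughly +0.5 stop
mosaic[1::2, 0::2] *= 1.4   # G2 sites
mosaic = np.minimum(mosaic, 4095.0)
print(clipped_fraction(mosaic, 4095.0))
```

The greens report a clearly larger clipped fraction than red and blue, which is exactly what a demosaiced, white-balanced view hides.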
I have to agree with everything you said, although at the same time I think this kind of subjective test is useful: it adds something a more objective DR measurement lacks, namely practical results for the photographer.
It's too bad that we cannot have the same scene, with the same highlight exposure, for all those camera models. That would make for much more objective comparisons.
BTW, I am planning to write a program to do exactly what you propose: it would be fed a pile of RAW shots of a uniform card at different exposures, open them without demosaicing (native 12-bit, 14-bit or whatever range), and calculate the signal-to-noise ratio for each linear f-stop and each RGB channel.
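The per-channel SNR part could look like the sketch below, run here on a synthetic mosaic instead of a real file (in practice the undemosaiced data could be read with something like rawpy's `raw_image_visible`); the RGGB layout and the noise figures are assumptions:

```python
import numpy as np

def channel_snr(mosaic, offsets=None):
    """Per-channel mean, noise (std) and SNR in dB of an undemosaiced
    Bayer mosaic over a uniform patch. Assumes an RGGB layout."""
    if offsets is None:
        offsets = {"R": (0, 0), "G1": (0, 1), "G2": (1, 0), "B": (1, 1)}
    stats = {}
    for name, (dy, dx) in offsets.items():
        plane = mosaic[dy::2, dx::2].astype(np.float64)
        mean, noise = plane.mean(), plane.std(ddof=1)
        stats[name] = (mean, noise, 20.0 * np.log10(mean / noise))
    return stats

# Synthetic "uniform card" exposure: mean 1000 ADU, noise sigma = 10.
rng = np.random.default_rng(0)
mosaic = rng.normal(1000.0, 10.0, size=(200, 200))
for ch, (mean, noise, snr_db) in channel_snr(mosaic).items():
    print(f"{ch}: mean={mean:7.1f}  noise={noise:5.2f}  SNR={snr_db:4.1f} dB")
```

Repeating this over the whole exposure series gives SNR per f-stop per channel, which is the curve the program would plot.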
But I have a problem here: I have found that even shooting with my 300mm at a uniformly lit white surface, with focus set to the closest distance so that the image is smoothly blurred (so that any deviation from the mean level is due to noise alone), I still get some micro-vignetting. So I have trouble calculating what the true signal value SHOULD be from the mean of the entire image.
What would you do to properly calculate the signal value in the SNR equation?
I have thought of 2 options:
1. Restrict the calculation to a small patch centered in the image (say, 100x100 pixels).
2. For every pixel location (x,y), calculate the mean signal value over a surrounding circle of a certain radius, and take that as the true signal value for that particular pixel.
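Both options can be sketched quickly. This is a rough illustration, not a finished implementation: option 2 uses a square window instead of a circle for simplicity, and the vignetting model and all names are hypothetical:

```python
import numpy as np

def central_patch_signal(plane, size=100):
    """Option 1: mean over a size x size patch at the frame centre,
    where vignetting is smallest."""
    h, w = plane.shape
    y0, x0 = (h - size) // 2, (w - size) // 2
    return plane[y0:y0 + size, x0:x0 + size].mean()

def local_mean_signal(plane, radius=25):
    """Option 2: per-pixel local mean over a (2*radius+1)^2 square
    window (a circle would behave similarly), via an integral image.
    Edges are handled by replicating border pixels."""
    r = radius
    padded = np.pad(plane.astype(np.float64), r, mode="edge")
    ii = np.zeros((padded.shape[0] + 1, padded.shape[1] + 1))
    ii[1:, 1:] = padded.cumsum(axis=0).cumsum(axis=1)
    k = 2 * r + 1
    h, w = plane.shape
    window_sums = (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
                   - ii[k:k + h, :w] + ii[:h, :w])
    return window_sums / (k * k)

# Synthetic flat field with "micro-vignetting": a gentle horizontal
# gradient (900 -> 1100) plus Gaussian noise of sigma = 10.
rng = np.random.default_rng(1)
h, w = 200, 200
signal = np.linspace(900.0, 1100.0, w)[None, :] * np.ones((h, 1))
plane = signal + rng.normal(0.0, 10.0, size=(h, w))

# The global std is dominated by the vignetting; the residual against
# the local mean recovers something close to the true noise level.
print(f"global std:            {plane.std():.1f}")
print(f"local-residual std:    {(plane - local_mean_signal(plane)).std():.1f}")
```

Option 2 is more work but lets the whole frame contribute to the noise estimate; the radius just has to stay small relative to the vignetting gradient.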
What do you think? Panopeeper and anyone else, of course.