Well, maybe my idea isn't possible, but I don't understand what dynamic range has to do with it. Limited dynamic range is already an issue now, yet cameras still determine an average exposure based on the assumption that the scene reflects 18% gray. It seems to me it would work the same way if the camera were able to determine whether a scene actually averaged 7% or 40% gray. Also, when you say it's the responsibility of the photographer and not the camera in any case, what do you mean?
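To make concrete what I mean by the 18% assumption, here's a rough sketch of the arithmetic (just an illustration of the math, not how any actual camera firmware works): the correction in stops is log2(true average reflectance / 0.18).

```python
import math

MID_GRAY = 0.18  # reflectance a standard reflected-light meter assumes

def metering_error_stops(true_avg_reflectance: float) -> float:
    """Stops of compensation needed when the scene's true average
    reflectance differs from the assumed 18% gray.
    Negative = dial exposure down (the meter would overexpose a dark scene),
    positive = dial exposure up (the meter would underexpose a bright scene)."""
    return math.log2(true_avg_reflectance / MID_GRAY)

for r in (0.07, 0.18, 0.40):
    print(f"{r:.0%} scene: compensate {metering_error_stops(r):+.2f} stops")
```

So a 7% scene needs roughly -1.4 stops of compensation and a 40% scene roughly +1.2; my question is why the camera couldn't apply that correction itself if it could estimate the true average.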
Think for a moment about what you're saying, and how ignorant it sounds. You have a 10-stop scene you're trying to photograph. Your camera can only capture 7 stops, maybe 8 if you're really good at noise processing. What do you keep, and what do you throw away? The answer is a compositional and artistic judgment call that no computer can make for you; it has to be decided case by case. Here's an example where I decided to let the shadows clip to black and retain highlight detail:
And here's an example of the exact opposite:
In the first photo, the most important element is the subject of the portrait; if the background is OOF and clips to black, that actually strengthens the image by focusing the viewer's attention on the subject. In the second case, the primary subject sits in the darker tones, and the best option is to let the sky blow out and retain tonal range and detail in the carved faces.

Computers are incapable of making meaningful value judgments in situations like these; any automatic metering program is going to screw up one or both of those shots. It's YOUR responsibility as the photographer to decide what's most compositionally important, and to choose where you need to keep detail and tonal range and where to let things clip to black or white.
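To put rough numbers on that trade-off (purely illustrative, assuming the 10-stop scene and 7-stop sensor from above; real sensors don't clip this cleanly):

```python
SCENE_STOPS = 10.0   # assumed scene dynamic range
SENSOR_STOPS = 7.0   # assumed sensor dynamic range

def clipping(highlights_sacrificed: float) -> tuple[float, float]:
    """Place the 7-stop capture window within the 10-stop scene.
    `highlights_sacrificed` = stops of bright tones allowed to blow out;
    whatever the window can't reach at the dark end clips to black."""
    shadows_lost = max(0.0, SCENE_STOPS - SENSOR_STOPS - highlights_sacrificed)
    return highlights_sacrificed, shadows_lost

for h in (0.0, 1.5, 3.0):
    hl, sh = clipping(h)
    print(f"blow out {hl:.1f} stops of highlights -> lose {sh:.1f} stops of shadow")
```

The missing three stops have to come out of one end or the other (protect the highlights and the portrait background goes black; protect the carvings and the sky blows out), and no formula can tell you which end matters for a given image.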
If you're not capable of making those sorts of decisions about your images, then you should forget about photography and take up needlepoint or something.