This is interesting. Flash forward 20 years: a non-photographer with their latest 45 MP pocket camera snaps a series of photos at an event that becomes significant for some reason but was not considered significant ahead of time, so no serious photographers showed up. Thanks to the latest software wizardry from Adobe and others, some of these photos get rotated, cropped, color processed, etc., so that they look pretty good to the critical eye of the time, allowing for the increased noise and other artifacts due to the extra processing.
The description entangles "importance of the event" with "quality of the photograph." A "recognizable" unique image of a critical event will have great value regardless of overall pictorial quality, e.g., Zapruder film of Kennedy assassination, the grainy images of Armstrong's first step on the moon, and so on. But I don't think that either "rarity" or "contingent importance" (e.g., having captured an important event) was intended as one of the features that would distinguish an aesthetically "great" B&W photo.
So this is actually a question rather than a statement -- how do you see the final result (i.e. what I can extract from an original capture using software tools) as compared to photos that didn't require that processing?
Don't get me wrong -- for a given image that can be well planned with good equipment, the planning and equipment make an obvious difference. But what if you wanted to assign a percentage of importance to the various aspects of a series of photos, like a judge in a photo contest? Don't you think it would be better to separate the entrants with some pre-qualification, so you're not comparing a noisy beginner photo that happens to look good due to smart post-processing against a professional photo that clearly belongs in a different category?
Wow, you go a long way in a few sentences...
The first couple of sentences really are answered by J Payne: "With my eyes." Or maybe with my entire history, experience, and training, depending on your pet theory of art criticism. An alternate statement that means the same thing is: "The image stands on its own." It does not matter whether it was processed a lot, a whole lot, or an ungodly amount (all digital images are processed heavily before there is *anything* to see; it seems rather arbitrary to decide that cropping/rotating/compositing/post-print shredding is "too much") -- all that matters is the final image.
The latter question about judging raises a separate issue -- a "level field of competition" -- apart from artistic/aesthetic merit. The reason for dividing entrants into various classes is a social conceit: it gives beginners, advanced amateurs, and the like individual attention and reward, either to encourage more entrants ("let's get everybody involved -- you might win in your age/experience/hair color group") or to give criticism appropriate to each experience level. We praise little kids' pics a bit more to keep the kids encouraged, we pick the tiniest nits in pros' pics to push them to do even better work, and so on at all levels in between.
You consider comparing "beginner photo that happens to look good... to a professional photo" -- but that's an irrelevant factor in judging the aesthetic quality of the image!
Judging "the best photographs" should be done entirely without reference to the photographer, camera, processing software, or technique. Somebody's very first photograph may be (accidentally?) brilliant -- "best of show" -- by some phenomenal bit of luck. (Yes, there was the valid observation earlier that many people will alter their opinion of a photo when they hear it's famous or by somebody famous -- but that's a much different discussion.)
All of which now comes back to the original question: If we are not concerned with the contingent event, nor with the biography of the photographer, nor with the camera/processing/printing details, how do we make that "best in show" judgment? That is, what makes a good/great photograph?