I didn't express myself clearly. I meant to suggest editing an image in a too-dark room, editing the same image in an adequately lit room, and then examining the two side by side in a well-lit room to see if there is a discernible difference.
There's certainly color science to back up that concern. It's well accepted in setting up video viewing environments that the gamma of the display should be increased as the viewing environment gets darker. What this actually does is apply a viewing-conditions compensation for the difference between the video encoding gamma (about 2.2, for material captured in bright environments) and the display gamma (roughly 2.2 for bright environments, 2.3 for dim, and 2.4 or even more for very dark environments).
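A minimal sketch of that idea, using the representative gamma values above (the surround names and exact numbers here are my own illustrative assumptions, not a standard): the display exponent rises as the surround darkens, so the end-to-end "system gamma" from encoded value to displayed light rises with it.

```python
# Illustrative sketch: surround-dependent display gamma as a
# viewing-conditions compensation. Numbers are representative, not exact.

ENCODING_GAMMA = 1 / 2.2  # encode assumes a ~2.2 decode

# Display decode exponent chosen to suit the surround (assumed values)
DISPLAY_GAMMA = {
    "bright": 2.2,  # matches the encoding: net system gamma ~1.0
    "dim": 2.3,     # typical living-room video viewing
    "dark": 2.4,    # very dark surround
}

def system_gamma(surround):
    """Net end-to-end exponent from scene light to displayed light."""
    return ENCODING_GAMMA * DISPLAY_GAMMA[surround]

def displayed_luminance(scene_luminance, surround):
    """Encode with the video gamma, then decode with the
    surround-dependent display gamma. Values normalized to [0, 1]."""
    encoded = scene_luminance ** ENCODING_GAMMA
    return encoded ** DISPLAY_GAMMA[surround]

for s in ("bright", "dim", "dark"):
    print(f"{s}: system gamma {system_gamma(s):.3f}, "
          f"0.18 midtone displays as {displayed_luminance(0.18, s):.4f}")
```

The point of the sketch is just the direction of the effect: in a darker surround the same encoded midtone is rendered darker (higher contrast), which is the compensation the video setup guidelines call for.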
Poynton covers this in some detail.
The converse, of course, is when the display is effectively an input device because it's being used to judge image edits. In that situation the difference between the viewing environment the image encoding expects and the actual display viewing environment can cause a shift. It depends a lot on how the display is set up, though. If the display calibration and profiling ignore the viewing environment (for instance, if the display is assumed to have a pure sRGB response while the actual viewing environment is darker than the one sRGB assumes), then adjusting the images to be too dark is a danger. If the profiling takes the viewing environment into account, though, this danger can be avoided.
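A rough sketch of that failure mode, under assumed numbers (a 2.2 sRGB-like display response versus the roughly 2.4 a very dark surround calls for): an editor in the dark room adjusts midtones until they look right there, effectively baking the dark-surround compensation into the file, which then reads as too dark under a normal surround.

```python
# Hypothetical sketch of the risk described above. The gamma values are
# assumptions for illustration, not measured behavior of any display.

SRGB_LIKE_GAMMA = 2.2       # response the profile assumes
DARK_SURROUND_GAMMA = 2.4   # response the dark room really calls for

def dark_room_edit(pixel):
    """The editor compensates for the too-flat appearance in the dark
    room, pushing midtones down by roughly the exponent ratio.
    Values normalized to [0, 1]."""
    return pixel ** (DARK_SURROUND_GAMMA / SRGB_LIKE_GAMMA)

mid = 0.18
print(f"A {mid} midtone gets edited down to about {dark_room_edit(mid):.3f}")
```

Viewed back in the brighter surround the encoding assumes, that baked-in darkening has no compensation to cancel it, which is exactly the shift profiling against the real viewing environment is meant to avoid.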