Is it any harder than Apple's approach to Retina displays? Legacy APIs are hard-wired to a virtual low-res display, and the OS silently upscales content to the physical resolution. A new API is offered to those interested in accessing the full capabilities of the monitor.
I'd say it's harder. Altering scaling is one thing; colour, I think, is more complicated, as there's no single answer like "scale everything up by a factor of 2".
Adobe & Co would have to make some adjustments (best case: flip a bit declaring that they know what they're doing, then recompile). All legacy applications would be sandboxed in the sRGB assumption (which is pretty much the standard outside of pro/enthusiast photography anyway).
Yes, but how would you know what was a legacy application? If an application developer is concerned enough to set a colour legacy bit, they probably already do colour management.
And "All legacy applications would be sandboxed in the sRGB assumption" probably wouldn't work for several reasons. The most important: you can't assume that "legacy" applications don't do colour management. Some will be using WCS (Windows Color System); Windows could probably detect that and not do any further mapping. But some applications will do colour management internally without using WCS, and there's probably no way Windows can tell that. Applying a "sandbox sRGB assumption" would be completely wrong in that case.
The issue is: for "legacy" applications (i.e. all applications up to now), there's usually no way Windows can know the colour space of graphic information, or even whether it's a photo.
There might be some problems with applications that access display hardware at a really low level (below what the OS is willing to mess with). So applications that are color-unaware and write directly to GPU buffers might be rendered (erroneously) at full native display gamut.
And that's another problem: games and video players in particular may bypass Windows - but not necessarily for all display material. So you'd have information where Windows is trying to second-guess the colour, and mapping it, side by side with information that it can't map. For example, what should look like a continuous red bar might appear as two different shades of red: part re-mapped by Windows and part not.
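To put rough numbers on that red bar: here's a minimal sketch of what a colour-managed path would do with sRGB pure red on an Adobe-RGB-like wide-gamut panel, using the published sRGB and Adobe RGB (1998) matrices rounded to a few decimals. This is purely illustrative arithmetic, not any actual Windows API.

```python
# Convert an sRGB colour to Adobe RGB (1998): decode the sRGB transfer
# curve, go through CIE XYZ (D65), then encode with Adobe RGB's gamma.
SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]
XYZ_TO_ADOBE = [
    [ 2.04159, -0.56501, -0.34473],
    [-0.96924,  1.87597,  0.04156],
    [ 0.01344, -0.11836,  1.01517],
]

def srgb_decode(c):
    # Piecewise sRGB transfer function (IEC 61966-2-1)
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def srgb_to_adobe_rgb(rgb):
    linear = [srgb_decode(c) for c in rgb]
    xyz = mat_vec(SRGB_TO_XYZ, linear)
    adobe_linear = mat_vec(XYZ_TO_ADOBE, xyz)
    # Adobe RGB (1998) uses a pure power curve, gamma = 563/256
    return [max(c, 0.0) ** (1 / 2.19921875) for c in adobe_linear]

r, g, b = srgb_to_adobe_rgb([1.0, 0.0, 0.0])
# sRGB pure red lands around (0.86, 0, 0) in Adobe RGB
```

So a colour-managed application would draw that part of the bar at roughly (219, 0, 0) on an 8-bit Adobe-RGB-native panel, while an unmanaged application writing raw (255, 0, 0) gets the panel's much more saturated native red - two visibly different shades of the "same" colour.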
There is a question of what the OS ought to do if there is a color-unaware video player showing e.g. YouTube content in one window, a color-aware photo-editing application in another window, and the display is reporting two calibrated presets: 1) sRGB, 2) wide-gamut. Should it inject an sRGB-to-wide-gamut conversion for the video window, use the accurate sRGB mode of the display, or what? The easiest way out of such issues may be to only change behaviour for fullscreen applications.
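If the OS went down that road, the conservative "fullscreen only" policy could be as simple as the sketch below. Everything here is hypothetical: the `Window` type, the `color_aware` flag and the preset names are made up, and of course the lack of a reliable `color_aware` signal on real Windows is the whole problem being discussed.

```python
from dataclasses import dataclass

@dataclass
class Window:
    fullscreen: bool
    color_aware: bool  # hypothetical flag; real Windows can't reliably know this

def pick_preset(windows):
    """Hypothetical policy: only switch the display out of its sRGB preset
    when a single fullscreen, colour-aware application owns the screen;
    any mixed-window situation stays in sRGB emulation."""
    if len(windows) == 1 and windows[0].fullscreen and windows[0].color_aware:
        return "wide-gamut"
    return "sRGB"
```

With a mixed desktop (video player plus photo editor) this always answers "sRGB", which sidesteps the side-by-side mismatch at the cost of never giving windowed applications the full gamut.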
I myself would probably be happy if display presets could be selected from the OS (using USB, EDID, whatever), and the OS let me choose which preset to apply depending on which application had focus. Then I could ensure that native wide gamut was always used when I was using Lightroom, and sRGB emulation otherwise. My family members would probably rejoice.
The problem is that Windows just doesn't know what's colour managed and what isn't, and doesn't know the colour space of information being written to the screen.
By the way, EDID colour space values read from a monitor are often wholly incorrect. Some monitors return the values of the sRGB primaries in the EDID - even for wide-gamut monitors. Edited to add:
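For what it's worth, the chromaticity coordinates are easy to read out of a raw EDID base block (bytes 25-34 per the VESA EDID 1.4 layout), which is how you'd spot a wide-gamut panel reporting sRGB primaries. A sketch; the ten hex bytes used in the example are the stock sRGB chromaticity values many monitors ship, and the zero-padded 128-byte block is a stand-in for a real EDID dump.

```python
def decode_chromaticity(edid):
    """Decode the chromaticity block of an EDID base block.

    Bytes 25-26 hold the low 2 bits of each 10-bit coordinate
    (red/green lows, then blue/white lows); bytes 27-34 hold the
    high 8 bits, in the order Rx Ry Gx Gy Bx By Wx Wy.
    Each coordinate is the 10-bit value divided by 1024.
    """
    lo_rg, lo_bw = edid[25], edid[26]
    highs = edid[27:35]
    lows = [
        (lo_rg >> 6) & 3, (lo_rg >> 4) & 3, (lo_rg >> 2) & 3, lo_rg & 3,
        (lo_bw >> 6) & 3, (lo_bw >> 4) & 3, (lo_bw >> 2) & 3, lo_bw & 3,
    ]
    names = ["rx", "ry", "gx", "gy", "bx", "by", "wx", "wy"]
    return {n: ((h << 2) | l) / 1024 for n, h, l in zip(names, highs, lows)}

# Fake 128-byte EDID carrying typical sRGB chromaticity bytes
edid = bytearray(128)
edid[25:35] = bytes.fromhex("ee91a3544c99260f5054")
prim = decode_chromaticity(edid)
# prim["rx"], prim["ry"] come out near the sRGB red primary (0.640, 0.330)
```

If a monitor you know to be wide-gamut decodes to primaries sitting right on the sRGB values, the EDID data is junk and the OS would be building its colour mapping on it.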
The problem is simpler for browsers, as untagged elements (without an embedded profile) are nearly always sRGB. Anything that isn't sRGB will almost invariably have an embedded profile. Windows, however, usually has no idea of the colour space of stuff written to the monitor.