Apps optimized for Retina displays work the way I described. If the "looks like" resolution chosen in the monitor preferences on a 4K display is 1920x1080, the app can display pixel data at 3840x2160, which maps perfectly to the physical screen. If a better-looking (in terms of font size) scaling is chosen at 2560x1440, the graphics are rendered into a virtual screen space at 5120x2880 (which happens to be 5K) and then scaled down to the physical screen at 3840x2160. That downscale is what creates the blur. This is how Lightroom, Photoshop, and all the graphics apps I use work, and that is why the 4K screen is not ideal, as I mentioned. It seems that Lightroom or Photoshop could write directly to screen pixels, at least as mentioned under advanced optimization techniques, but this is not done. You can see this when taking a screenshot, which dumps the virtual screen rather than the scaled one. I have written about this before, and nobody from Adobe has commented on it or on the issues with how OS X scaling for high-resolution displays is implemented.
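To put numbers on this, here is a minimal Swift sketch of that arithmetic, assuming a single display. NSScreen.backingScaleFactor and CGDisplayCopyDisplayMode are the standard AppKit/CoreGraphics calls; the figures in the comments are just the example numbers from this thread, the code itself simply reports whatever the current screen is doing.

```swift
import AppKit

// Minimal sketch of the scaling arithmetic described above.
if let screen = NSScreen.main,
   let screenNumber = screen.deviceDescription[NSDeviceDescriptionKey("NSScreenNumber")] as? NSNumber,
   let mode = CGDisplayCopyDisplayMode(CGDirectDisplayID(truncating: screenNumber)) {

    let points = screen.frame.size              // the "looks like" resolution, e.g. 2560x1440
    let scale = screen.backingScaleFactor       // 2.0 on a HiDPI configuration
    let backingWidth = points.width * scale     // virtual screen space, e.g. 5120
    let backingHeight = points.height * scale   // e.g. 2880

    let panelWidth = CGFloat(mode.pixelWidth)   // physical panel pixels, e.g. 3840
    let panelHeight = CGFloat(mode.pixelHeight) // e.g. 2160

    // Anything other than 1.0 here means the composited frame is resampled on its
    // way to the panel, which is the softness described above.
    let downscale = backingWidth / panelWidth
    print("Rendered at \(Int(backingWidth))x\(Int(backingHeight)), "
          + "panel is \(Int(panelWidth))x\(Int(panelHeight)), "
          + "downscale \(downscale)")
}
```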
What is the basis for this statement: "If a better-looking (in terms of font size) scaling is chosen at 2560x1440"?
Not trying to be argumentative here, just trying to understand what you are saying.
If I had a 4K display and Apple (only) does 2x2 scaling, I would set my "desktop" to 1920x1080 "pixels" ("points", I believe, in Apple's terminology). I would assume that a font of height 8 pt would then cover 8/1080 of my display height, rendered as 16 physical pixels.
If I had a 5K display, I would set my desktop to 2560x1440. I would assume that a font of height 8 pt would then cover 8/1440 of my display height, rendered as 16 physical pixels.
Effectively, for a given display size (e.g. 27"), a 5K display would give me smaller text.
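As a rough sanity check of those fractions, a small Swift sketch (the desktop heights are the illustrative numbers above, assuming a 2x backing scale in both cases):

```swift
import Foundation

// An 8 pt glyph occupies 8 points of desktop height, so its share of the screen
// shrinks as the point height of the chosen "looks like" desktop grows.
let fontHeightPoints = 8.0
let desktops = [("4K panel, 1920x1080-point desktop", 1080.0),
                ("5K panel, 2560x1440-point desktop", 1440.0)]

for (name, pointHeight) in desktops {
    let percent = fontHeightPoints / pointHeight * 100
    print("\(name): 8 pt text spans \(String(format: "%.2f", percent))% of the screen height")
}
// Prints roughly 0.74% for the 1080-point desktop and 0.56% for the 1440-point one,
// i.e. visibly smaller text on the 5K setup at the same physical size.
```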
I would assume that when a high-end image editor shows an image, it would be mapped directly to physical screen pixels.
I don't see any other sensible way this could fly?
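For what it's worth, a hedged sketch of what an AppKit view can actually ask for: NSView's convertToBacking(_:) reports backing-store pixels, but on a scaled mode (e.g. a "looks like" 2560x1440 desktop on a 4K panel) those backing pixels are still resampled down to the panel, so a true 1:1 mapping only exists when the chosen resolution is exactly half the panel resolution. The class name here is purely illustrative.

```swift
import AppKit

// Sketch: querying the backing-pixel geometry a view would need for 1:1 drawing.
final class PixelAwareView: NSView {
    override func draw(_ dirtyRect: NSRect) {
        // Points -> backing-store pixels (bounds size x backingScaleFactor).
        let backingRect = convertToBacking(bounds)
        // On a "looks like" 2560x1440 desktop on a 4K panel this reports the
        // 5120x2880 virtual space, not the 3840x2160 panel pixels, which is
        // exactly the resampling step discussed above.
        Swift.print("\(bounds.size) points -> \(backingRect.size) backing pixels")
    }
}
```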
-h