Bernard,
I don't know how long you have tried using an EVF, but I doubt that most users start out actually liking them. At first, the EVF is just something you try to accept, ignoring all the differences and faults as you use your new CSC.
It's not until you pick up your DSLR again - and maybe this has to happen several times over the course of several months - that you slowly realize that somehow you have not only got used to the EVF, but now actually prefer it. Now using an OVF feels outdated. How did that happen?
Actually, you might have the same surprise at now preferring the CSC altogether, even with compromises in some functionality. The DSLR now feels oddly outdated, too. 😀😀
This is absolutely my experience too!
It is just the same as when I first encountered a touch screen on the back of a camera. I thought it was nonsense, always going to be touching the thing with my nose, changing settings. Now, after years of iThingies and the GH4, I find myself poking at the Sony A7RII screen and being surprised when it doesn't autofocus on the spot I just touched.
My point, way up in the thread, is that an EVF allows possibilities that an optical viewfinder doesn't. In fact it allows possibilities that an optical viewfinder CAN'T... like controlling the brightness of the view. Or showing zebras for exposure, live and superimposed on the image. Or using the actual sensor to autofocus and showing you the resulting image, thus removing the need for focus-shift adjustment on a lens-by-lens basis. Or zooming in live to a magnified view for manual focus.
The view through an optical viewfinder is clearer, more detailed, and has less lag. But that's about all it has going for it, because the system simply cannot provide a lot of those other features. The limits of the EVF, by contrast, are more amenable to technological development: resolutions are getting much better year on year, and the lag imposed by the readout and display chain is amenable to faster processing.
Human visual response time is really not that big a deal: it's usually reckoned at around 190 milliseconds. Most people perceive 24 frames per second as continuous motion so long as the camera isn't panning too fast (~42 milliseconds per frame). 50/60 fps is already available on EVFs, and 120 fps on commercial video monitors. So we know how to process the images fast enough.
The latency (the time delay between the stream of images hitting the sensor and the display of the processed images in the EVF) needs to be reduced, but millisecond-scale latencies are entirely in the realm of the possible. That's routinely achieved in scientific detectors, for example.
It's not like we need nanosecond response times or frame rates in the tens of thousands.
One order of magnitude change in each variable would get us to the point where the EVF vastly out-performs even the most discerning of humans. Actually a factor of two would probably do it.
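To put rough numbers on that, here's a quick back-of-envelope sketch (plain Python; the only inputs are the frame rates and the ~190 ms response figure mentioned above, and the "factor of two" / "order of magnitude" scalings are just the ones I'm hand-waving about):

```python
# Frame period at the EVF/monitor refresh rates mentioned above,
# compared against a ~190 ms human visual response time.
HUMAN_RESPONSE_MS = 190  # the figure cited above

for fps in (24, 50, 60, 120):
    period_ms = 1000 / fps
    print(f"{fps:>3} fps -> {period_ms:5.1f} ms per frame")

# Scaling a current 60 fps EVF by a factor of two and by an
# order of magnitude, as discussed:
for factor in (2, 10):
    print(f"60 fps x {factor:>2} -> {1000 / (60 * factor):.1f} ms per frame")
```

Even at 24 fps the frame period (~42 ms) is well inside the ~190 ms response window; at 120 fps (~8 ms) or a hypothetical 600 fps (~1.7 ms) it's not remotely close.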
Leica are already claiming the SL viewfinder has lower latency than human vision. Although since they also claim the SL is "the world's first camera conceived for professional photography to feature an electronic viewfinder", I smell marketing BS.... (Thanks Leica, but I'm pretty sure Olympus and Sony et al intend their cameras for professional photography too). Nonetheless, we're close enough for people to be claiming it already.
The problems are engineering ones: shrinking and merging these technologies into something that will fit into a camera, run cool enough not to melt the sensor, the readout chips, or the human face looking through the EVF, and cost a sensible amount. The tech is coming.
But most of all, it is hard to go back to the analogue OVF when you're used to all the facilities an EVF provides, even a current-generation EVF.
Cheers, Hywel