Thank you for your replies. Yes, I discovered that I have to focus on the 'middle' of the subject field I want in focus and then obsess over focus, iterating focus points. It seems this 'manual' PC-E process is a relic from the past, almost like trying to frame and focus a subject on a view camera. But because everything is shrunken down, it's almost impossible with human hands to adjust the tilt to be just right. The tilt knob is small and the resistance makes it hard to make delicate adjustments, unlike the focus barrel, which is grasped by the whole hand, allowing greater leverage. I would have thought they could have an augmented reality interface to automate this process: a grid superimposed on the viewfinder that you can manipulate through the joystick. Amazingly, Capture One has a 'focus mask' that can tell what is and what is not in focus and lights it up in green. Even something like that would be GREAT to have in the viewfinder. Additionally, with a floating lens element that can be adjusted along any axis, you could eliminate the need for rotating the lens: you could specify the focal plane's angle and orientation in 3-space and let the autofocus servo motors do the rest.
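For anyone curious how a focus mask could work, here is a minimal sketch of the general idea (this is my guess at the technique, not Capture One's actual algorithm): treat sharpness as local contrast, estimate it with a discrete Laplacian, and flag pixels above a threshold. The function name and threshold are my own inventions for illustration.

```python
import numpy as np

def focus_mask(image, threshold=0.15):
    """Boolean mask of 'in focus' pixels for a 2D grayscale array.

    Assumption: sharp regions have high local contrast, which we
    approximate with a 4-neighbor discrete Laplacian. This is a
    hypothetical sketch, not any vendor's actual implementation.
    """
    img = image.astype(float)
    # |Laplacian|: difference between each interior pixel and its 4 neighbors.
    lap = np.abs(4 * img[1:-1, 1:-1]
                 - img[:-2, 1:-1] - img[2:, 1:-1]
                 - img[1:-1, :-2] - img[1:-1, 2:])
    mask = np.zeros(img.shape, dtype=bool)
    peak = lap.max()
    if peak > 0:
        # Mark pixels whose local contrast exceeds a fraction of the peak.
        mask[1:-1, 1:-1] = lap > threshold * peak
    return mask
```

On a test image with a hard edge, the mask lights up along the edge and stays dark in flat (defocused-looking) areas, which is roughly the green-overlay behavior you see in the software.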
I am not liking the PC-E as much as I thought I would. The need to iterate the 'focus and inspect' process over multiple areas (often the same points) in the image starts to feel like drudgery. It feels like the camera is making demands on me beyond what is necessary, especially when depth of field is already so good at f/8 on a wide angle, and now that AI sharpening can computationally enhance focus in an already focus-stacked photo.
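The f/8 wide-angle point checks out with the standard hyperfocal formula. As a sketch (assuming a full-frame circle of confusion of 0.03 mm; adjust for your sensor and sharpness standards):

```python
def hyperfocal_mm(focal_mm, aperture, coc_mm=0.03):
    # Standard hyperfocal distance: H = f^2 / (N * c) + f
    # coc_mm = 0.03 is a common full-frame assumption, not a universal value.
    return focal_mm ** 2 / (aperture * coc_mm) + focal_mm

# A 24mm lens at f/8: focus at ~2.4 m and everything from
# roughly half that distance (~1.2 m) to infinity is acceptably sharp.
h = hyperfocal_mm(24, 8)  # -> 2424.0 mm
```

With that much in focus from simple straight-on shooting, the tilt mechanism only earns its keep in fairly specific near-far compositions.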
But like anything else, I imagine that when you get the hang of it, a lot of the 'cognitive load' of the process becomes automatic. It's just that real-life conditions often do not provide the luxury of the time needed to do all this for one image (changing weather, changing light, crowds, traffic, etc.).