Not entirely correct. The difference lies in quantisation.
With a tilt-shift lens, the stretching is carried out before the light hits the sensor. And it's not really distorting the light in any case, since it's a retrofocus lens - the light actually hits the sensor at angles similar to what comes out of a 50mm lens, not at extreme oblique angles. Therefore, when you shift to each end and stitch on a 36MP sensor, you end up with the full 60MP of detail.
That includes vignetting and edge-of-image-circle deterioration. I wouldn't call that 'full detail', but it does sample the light (with all its flaws) as it hits the sensor. The angle of incidence may not be extreme, due to the retrofocus optical design, but the incoming rays (entrance pupil) do cover a wide angle. Just look at the examples I posted: the extreme corners ('only' 106-107 degrees diagonal coverage) are 'smeared', stretched, and relatively underexposed at an ETTR exposure setting.
With a rotational method, you capture the image first, quantising it into pixels, prior to any stretching. When you stretch it in post-processing, you lose a lot of these pixels; the wider the virtual lens, the more you lose.
As explained, the pixels start at a much higher quality before resampling, especially since a proportionally longer focal length (in portrait orientation) can be used for the same or a wider FOV.
For the equivalent angle of view of the TS-E 24L (equivalent horizontal angle of view to a single shot from a 14mm lens on full frame) this stretching is considerable. For the equivalent of the TS-E 17L (equivalent horizontal angle of view to a 10mm lens on full-frame when stitched) it's even greater.
Only if the 'wrong' focal length is used in a direct comparison.
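The equivalence figures quoted above are easy to verify; here's a quick sketch (assuming a 36mm-wide full-frame sensor and the full ±12mm of shift, so a 60mm-wide captured strip):

```python
import math

def hfov(focal_mm, half_width_mm):
    """Horizontal angle of view (degrees) for a rectilinear lens."""
    return 2 * math.degrees(math.atan(half_width_mm / focal_mm))

# Full-frame width is 36 mm; shifting +/-12 mm extends the captured
# strip to 36 + 2*12 = 60 mm, i.e. a 30 mm half-width.
print(f"TS-E 24L, fully shifted: {hfov(24, 30):.1f} deg")   # ~102.7
print(f"14mm, single shot:       {hfov(14, 18):.1f} deg")   # ~104.3
print(f"TS-E 17L, fully shifted: {hfov(17, 30):.1f} deg")   # ~120.9
print(f"10mm, single shot:       {hfov(10, 18):.1f} deg")   # ~121.9
```

So the shifted 24mm does indeed cover roughly what a single 14mm frame does horizontally, and the shifted 17mm roughly what a 10mm frame does.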
Here is an example, at too short a focal length for a proper comparison (it should be 35mm or longer, not 24mm), of the un-shifted, centre-of-image-circle image quality, with only basic Raw conversion and minor Capture sharpening, without CA correction (click on the image for a full-resolution version, full size = 9,826,078 bytes!):
The resampling to a corner tile of a pano stitch will only stretch the pixels for the geometrical projection of such a wide angle on a flat plane. Alternative projection methods, requiring even less geometrical 'distortion', can also be used if the image content allows.
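To put a number on that stretch: a rectilinear projection maps an off-axis angle θ to x = f·tan(θ), so the local radial magnification relative to the image centre grows as 1/cos²(θ). A quick sketch (the sample angles are just for illustration; ~53° is half of the ~106° diagonal coverage mentioned above):

```python
import math

def radial_stretch(theta_deg):
    """Local radial magnification of a rectilinear projection
    x = f*tan(theta), relative to the centre: dx/dtheta is
    proportional to 1/cos^2(theta)."""
    return 1.0 / math.cos(math.radians(theta_deg)) ** 2

for theta in (0, 30, 45, 53, 60):
    print(f"{theta:2d} deg off-axis: pixels stretched {radial_stretch(theta):.2f}x")
```

At ~53° off-axis the radial stretch is already about 2.8x, which is why the corner tiles of a very wide rectilinear stitch lose so much effective resolution, and why less 'distorting' projections can be attractive when the content allows.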
The only way around this with a rotational panorama is to shoot using a significantly longer lens and start with a higher-resolution image. Then, once you stretch it, you still retain the same effective resolution (pixels per degree) in the corners.
That's what I already explained earlier. It would be useful for the discussion to illustrate it with an example; I'm a bit busy right now, otherwise I'd have done it already. Maybe you have something concrete to discuss (although we don't disagree on the principle, perhaps only on the practical consequences)?
Cheers,
Bart