I understand how sensor shift works. What I don't understand is how "super sampling", say, a 1980s-era lens will extract higher resolution, actual detail, from the lens. To me the maximum resolution is still limited by the optics.
On a side note, I don't always care about sharpness. I've used some very cheap lenses for floral photography and they render beautiful soft gradations.
So, with sensor shift, can you really end up with a much sharper image from, say, an old Nikkor 28mm f/2.8 lens?
I wouldn't describe the R>G>G>B pixel shift as super-sampling or over-sampling; the Bayer sensor/filter concept is more a kind of subsampling of color. You still need the lens resolution to address the individual pixels, but the color information is limited (not considering the anti-aliasing layer over the sensor). If sensor resolution were quoted per cell of RGGB pixels (or per AA spot), we would read far lower sensor resolution figures; that is the dilemma Foveon had to deal with to get realistic figures for comparing both systems. R>G>G>B pixel shift is 1:1 sampling on a Bayer sensor, just as a monochrome sensor with R>G>B filtering in time is 1:1 sampling for RGB color. Alright, the first example possibly does some oversampling of green.
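To make the sampling argument concrete, here is a toy sketch in Python; the 2x2 R G / G B layout and the four-step shift order are my own simplification, not any vendor's actual pipeline:

```python
# Toy model: a single Bayer exposure measures one channel per site,
# while four one-pixel-shifted exposures measure all of R, G, G, B
# at every site. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((4, 4, 3))  # "true" RGB at each sensor site

def bayer_channel(y, x):
    # 2x2 color filter cell: R G / G B (an assumed layout)
    if y % 2 == 0:
        return 0 if x % 2 == 0 else 1  # R or G
    return 1 if x % 2 == 0 else 2      # G or B

# Single exposure: only 1 of 3 channel values measured per site.
single = np.zeros_like(scene)
for y in range(4):
    for x in range(4):
        c = bayer_channel(y, x)
        single[y, x, c] = scene[y, x, c]
print(f"single exposure: {np.count_nonzero(single)} of {single.size} values measured")

# Pixel shift: four exposures, sensor moved one pixel pitch each time,
# so every scene point is eventually seen through R, G, G and B filters.
shifted = np.zeros_like(scene)
for dy, dx in [(0, 0), (0, 1), (1, 1), (1, 0)]:
    for y in range(4):
        for x in range(4):
            c = bayer_channel(y + dy, x + dx)
            shifted[y, x, c] = scene[y, x, c]  # direct measurement

# Every channel at every site is now measured, none interpolated:
assert np.allclose(shifted, scene)
print("pixel shift: full RGB measured at every site, no demosaicing")
```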
Olympus adds oversampling on top of the R>G>G>B pixel shift with four extra steps of half the pixel pitch; Pentax doesn't. Even so, Olympus does not deliver more than 64 MP in the RAW file (from a 16 MP sensor) and condenses the data even further in the JPEG output. As with scanners that oversample, there is a gain in signal-to-noise ratio, which translates into better information: a solid base for upsampling and sharpening, even if this output is not directly expressed in sharper images.
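For the signal/noise gain, a quick simulation shows the basic effect. Plain averaging and the numbers below are my assumption for illustration, not Olympus' actual processing:

```python
# Averaging N noisy samples of the same signal reduces the noise
# standard deviation by roughly sqrt(N); the scanner-style
# oversampling gain mentioned above. Numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
signal = 100.0
noise_sigma = 10.0

for n in (1, 4, 8):  # one exposure vs 4-step vs 8-step capture
    samples = signal + rng.normal(0.0, noise_sigma, size=(100_000, n))
    estimate = samples.mean(axis=1)
    print(f"{n} samples: noise std ~ {estimate.std():.2f} "
          f"(theory {noise_sigma / np.sqrt(n):.2f})")
```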
I do not advocate the use of low-quality lenses, but at some point either the sensor or the lens out-resolves the other. All sensor and lens tests carry the warning that results should be compared within one lens or one camera system. Plenty of lenses from the analogue past out-resolved the first digital sensors; that is hardly the case with today's sensors. Imaging Resource used the Sigma 70mm macro for camera tests for a long time, and it out-resolved a lot of sensors in the past. If a lens is optimal for a given Bayer sensor, with neither out-resolving the other, then R>G>G>B pixel shift with that combination will still deliver more information than a single exposure with that system, and Olympus' oversampling can add information on top of that. Take the same lens and put it on a Bayer sensor with half the pixel pitch, so 3x or 4x the resolution, and you will not gain the same information with a single exposure; a lens with more resolution is needed. That is what I had in mind.
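To get a feel for what "out-resolving" means in practice, the old film-era rule of thumb combines lens and sensor resolving power as 1/R_system = 1/R_lens + 1/R_sensor. A rough sketch only, and the lp/mm figures below are invented:

```python
# Film-era approximation for combined system resolution (lp/mm).
# Shows why doubling only the sensor does not double the result.
def system_resolution(r_lens_lpmm: float, r_sensor_lpmm: float) -> float:
    return 1.0 / (1.0 / r_lens_lpmm + 1.0 / r_sensor_lpmm)

# A lens that matches the sensor: neither dominates.
print(system_resolution(80.0, 80.0))   # ~40 lp/mm
# Double only the sensor: the lens becomes the bottleneck.
print(system_resolution(80.0, 160.0))  # ~53 lp/mm, not ~80
```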
We have seen pixel shift used in MF digital backs to compete with analogue large format and digital scan-back resolution figures. It was introduced in Micro 4/3 because the sensor size limits higher resolution numbers, not to mention that few lenses in the Micro 4/3 system can actually keep up. Panasonic recently announced a 20 MP M4/3 sensor, so I wonder what it will improve. The first APS-C camera with pixel shift is here, and I expect some FF cameras will follow. Back-illuminated sensors will bring wider dynamic range, better low-light specs or more resolution to these sensor formats too, but there is an end to it: either sensors hit the physical limits of visible light, or lenses become too expensive to keep up with the sensor resolution. Decades after the wafer steppers hit similar conditions; they went beyond the visible spectrum, but the cameras we use cannot. Yes, this is speculative. On the other hand, some years ago I wondered whether mechanical aspects of DSLRs or mirrorless cameras would limit further pixel pitch shrinking, as the vibration exceeds the pixel pitch. Canon and Olympus had to deal with that aspect, if you read between the lines of recent DPReview camera reports, and Sony encountered the same issue.
On the use of (old) soft portrait lenses on high-resolution sensors: there is no law forbidding it; there are tastes, there are conventions, and any print that satisfies the viewer makes this whole thread pointless anyway. Upsampling from lower sensor resolutions with algorithms that deliver soft images does have some analogy with the use of said portrait lenses, though.
Going back to the subject line: I estimate that in the best case (optimal inkjet paper coating, smallest ink droplet, best droplet addressing) the required PPI at the printer output stage will not exceed 450. Bart van der Wolf's print target could supply the exact numbers. That said, I expect that if one starts from 225 PPI quality pixels and uses the best upsampling routines and sharpening, it will be damned difficult to distinguish between same-size prints made from either starting point; upsampling the 450 PPI data does not translate into a better print, as the information then simply out-resolves the printer/paper combination. The numbers fall back to 300 PPI and 150 PPI on heavily textured, matte papers. The best camera + lenses today will exceed the A2-format data requirement with a good workflow on a 3.5 picolitre droplet printer, and no printer model that can print A2 has droplets smaller than 3.5 picolitre. Starting from 50 MP quality pixels at 150 PPI input, sizes of 2 m², 22 square feet, on heavy textured matte paper will still be acceptable.
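The arithmetic behind those figures is easy to check with a few lines; the 3:2 aspect ratio is my assumption:

```python
# Print dimensions for a given pixel count and output PPI.
import math

def print_size_inches(megapixels: float, ppi: float, aspect: float = 3 / 2):
    pixels = megapixels * 1e6
    w = math.sqrt(pixels * aspect)  # width in pixels
    h = pixels / w                  # height in pixels
    return w / ppi, h / ppi

for ppi in (450, 225, 150):
    w, h = print_size_inches(50, ppi)
    print(f"50 MP at {ppi} PPI: {w:.1f} x {h:.1f} inch "
          f"({w * h / 144:.1f} sq ft)")
```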
Few people start from the print size they actually use in practice and then decide what camera quality is needed.
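Going that way round, the same arithmetic gives the required camera resolution from the print; the paper size and PPI targets below are just examples, not prescriptions:

```python
# Required pixel count for a target print size and PPI.
def required_megapixels(width_in: float, height_in: float, ppi: float) -> float:
    return width_in * ppi * height_in * ppi / 1e6

# A2 is roughly 23.4 x 16.5 inches.
for ppi in (300, 225, 150):
    print(f"A2 at {ppi} PPI: {required_megapixels(23.4, 16.5, ppi):.0f} MP")
```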
Kind regards, Ernst
http://www.pigment-print.com/spectralplots/spectrumviz_1.htm (December 2014 update, 700+ inkjet media white spectral plots)