Luminous Landscape Forum

Equipment & Techniques => Digital Cameras & Shooting Techniques => Topic started by: Quentin on July 17, 2005, 05:43:39 am

Title: Is This The End Game?
Post by: Quentin on July 17, 2005, 05:43:39 am
Quote
For those deeply into the topic I recommend the scientific study
of Bayer-array-based digital capture vs. film, on both
resolution and dynamic range, at ClarkVision.  You can start
from the equivalent digital resolution chart at:

http://clarkvision.com/imagedetail/film.vs.digital.1.html

which suggests that Michael's projection that the P45 would provide
scanned 8x10 film quality is highly improbable.

Leping Zha
Landscape Photographer and Ph.D. in Physics
www.lepingzha.com
Yes, but in my opinion this comparison is a load of bull.  

Quentin, BA (Hons), MSc, MCIArb, CNI, Freeman of the City of London and so on (all genuine)...
Title: Is This The End Game?
Post by: BernardLanguillier on July 18, 2005, 04:23:26 am
Quote
I'd rather wait a couple of years, save the kidney, and get it for 10K. Prices are dropping fast on high-end digitals, it looks like. Well, maybe not that high end, so you got me there. But it's at least true for the professional-quality DSLRs that were 6K two years ago.
Besides, other manufacturers will probably release soon after Phase One backs based on the same sensors at significantly lower prices...

Mamiya being a good candidate... providing they manage to release the first generation to start with... :-)

Regards,
Bernard
Title: Is This The End Game?
Post by: Mark D Segal on July 18, 2005, 04:42:28 pm
I agree with those who say that nothing counts more than actual experience and real prints of real photographs meant to be enjoyed (i.e. this eliminates line charts). I believe the weakest link in the digital chain is now the color gamut of ink jet printers when using archival pigmented inks on matte surfaces - even though that too has improved by leaps and bounds over the past several years - but there is more to go. As for lenses, used at their optimal f/stops, one doesn't need lenses costing in the thousands to get EXCELLENT visible resolution with film or digital.

I can get very acceptable results from film and from digital, and I am working on both at the same time, but digital wins hands-down - to the extent my "goal" is to get my prints from color negatives looking as clean, sharp and well balanced as my prints from a Canon 1Ds. Quite a challenge. Let us compare the workflow for ROUTINE images that require the least amount of post capture repair work:

Film:
-clean the negative
-do a pre-scan
-fine-tune the scanner software till the image seems OK
-do the final scan
-open the image in photoshop
-apply Neat Image to get rid of the grain (even 100ASA film)
-careful spotting to get rid of all the crud step one failed to remove;
-use PK Capture Sharpen to rescue lost acutance from above steps;
-fiddle with color balance and contrast in Photoshop (scanning software never perfect);
-final sharpen and print.
(Total Time: about 45 minutes to an hour per picture).

Digital:
-download the flash card;
-adjust and convert in camera RAW;
-PK Capture Sharpen;
-a contrast tweak in Photoshop;
-set image size for printing;
-final sharpen and print.
(Total Time: about 2 minutes, or less).

Comparing results, the 1Ds will still outperform the most carefully executed film scan using slow color film that has excellent shadow detail and a Minolta Dimage 5400 PPI scanner. I know this is an old story by now, but from my personal, immediate - NOW for NOW experience - that's just how it is.
Title: Is This The End Game?
Post by: Jonathan Wienke on July 19, 2005, 03:16:18 am
Quote
Quote
The lack of detail in some white checks would, to me, indicate a bit of overexposure.  
Jonathan! Overexposing an image! Do you realize what you are saying? Is this even conceivable?
Actually, that image is about 1/3 of a stop over what I would have ideally preferred; the ACR conversion exposure setting was -0.65, while I prefer it to be about -0.30 for the absolutely best results. But most of the clipping cropped up when converting to sRGB for web display, which is why I prefer 16-bit ProPhoto or ARGB for printing.
Title: Is This The End Game?
Post by: Jonathan Wienke on July 19, 2005, 08:25:30 pm
Quote
Quote
That's where the quibbling starts to crop up; information and data are not quite the same thing. Data can contain information, but if there is less information than data, the data can be compressed down to approximately the size of the actual information it contains.
Jonathan,

You are correct, but this distinction isn't really relevant to the discussion at hand, is it? The same gap between data and information will theoretically be present on Foveon and Bayer sensors, right?
No, it is relevant; Foveon sensors generate 3x the data of a Bayer sensor with the same pixel count, but not 3x the actual image information. So the gap between data and information is wider with a Foveon sensor than a Bayer sensor; a Foveon sensor outputs 300% of the data of a Bayer sensor, but only 130% (approximately) of the actual image information. That is why a Foveon sensor will output more detailed images than a Bayer sensor with the same pixel count, but they're not three times better than the Bayer image. As processing and interpolation techniques improve this gap will narrow, but never quite close.
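A rough way to see the data-vs-information distinction Jonathan is drawing (this illustration is mine, not from the thread): lossless compression squeezes data down toward its actual information content, so redundant data shrinks a lot while information-dense data barely shrinks at all.

```python
# Illustrative sketch (my example): compare how much zlib can compress
# a highly redundant byte stream vs an information-dense (random) one.
import zlib
import random

random.seed(0)
redundant = bytes([128]) * 30000  # flat gray "image": lots of data, little information
dense = bytes(random.randrange(256) for _ in range(30000))  # pure noise: nearly all information

for name, data in [("redundant", redundant), ("dense", dense)]:
    packed = zlib.compress(data, 9)
    print(f"{name}: {len(data)} bytes of data -> {len(packed)} bytes compressed")

# The redundant block shrinks to a tiny fraction of its size; the random
# block barely shrinks, because nearly every bit of it is information.
```

In the same spirit, a Foveon file carries 3x the data of a Bayer file but compresses disproportionately well, because much of that extra data is not extra image information.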
Title: Is This The End Game?
Post by: jcarlin on July 21, 2005, 12:31:29 am
Bernard,
Jonathan mentioned information theory and I just thought I would complete his thought mathematically. The mean squared error introduced by the bit quantization that happens after a given operation is

E = (D^2)/12

where D = 1/(2^n)
where n is the number of bits.

This means that the mean squared error from one 8-bit operation is 2^16 ≈ 65,536X greater than the error from one 16-bit operation.  In practice they're both pretty small.
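John's formula can be checked directly (a minimal sketch of the arithmetic above, nothing more):

```python
# Quantization error sketch: for an n-bit quantizer on a [0, 1] signal,
# the step size is D = 1 / 2**n and the mean squared error is D**2 / 12.
def quantization_mse(bits):
    step = 1.0 / (2 ** bits)
    return step ** 2 / 12.0

mse_8 = quantization_mse(8)    # one 8-bit operation
mse_16 = quantization_mse(16)  # one 16-bit operation

ratio = mse_8 / mse_16
print(ratio)  # 65536.0, i.e. 2**16 -- the "~64000X" figure, rounded
```

Both absolute errors are indeed tiny; it is only the ratio between them that is large.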

Also, when comparing Bayer vs. Foveon, earlier posters correctly pointed out the difference between information and data.  The fact that there really isn't an RGB value for each pixel in a Bayer array, and the fact that we can produce perfectly usable images from them anyway, points directly to the fact that a Foveon array probably doesn't capture that much more information.  In practice, exactly how much more information a Foveon array delivers could probably be figured out by looking at the assumptions behind Bayer interpolation.


John
Title: Is This The End Game?
Post by: med007 on July 25, 2005, 04:01:09 am
Deleted by Asher
Title: Is This The End Game?
Post by: Jonathan Wienke on August 16, 2005, 11:51:18 pm
Digicams are already pushing the limits of smallest usable sensor pixel size; the pixels in current 8MP models such as the Sony F828 are about 2x the wavelength of light. IMO this is one reason why the megapixel race in that category has slowed; to pack in more pixels requires a physically larger sensor, with the attendant larger lens, all of which increases cost and weight. If you do that, you might as well upgrade to a DSLR. Just as an interesting theoretical exercise, a 60MP full-frame sensor (~9487x6325 resolution) would be far enough beyond the capabilities of any currently available glass as to not need an AA filter, and could still have a pixel pitch of 3.794 microns. If you downsampled to 15MP in-camera, you'd have enough input pixels for each output pixel that color interpolation artifacts would be pretty much non-existent, and noise could take a pretty sharp drop, too. I'd say that would likely represent a practical upper limit for a 24x36mm-format camera unless there are some really amazing advances in lens technology.
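Jonathan's 60MP numbers are easy to verify (my arithmetic, assuming a standard 36 x 24 mm full-frame sensor at 3:2):

```python
# Check the hypothetical 60MP full-frame sensor: total pixel count and
# pixel pitch for a 9487 x 6325 grid on a 36 x 24 mm sensor.
sensor_w_mm = 36.0
w_px, h_px = 9487, 6325

pixels = w_px * h_px
pitch_um = sensor_w_mm / w_px * 1000  # microns per pixel

print(pixels)              # 60005275 -- just over 60 million pixels
print(round(pitch_um, 3))  # ~3.795 microns, matching the quoted ~3.794 pitch
```

A 15MP downsample would then average roughly four input pixels into each output pixel, which is where the interpolation-artifact and noise benefits he describes would come from.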
Title: Is This The End Game?
Post by: BJL on September 09, 2005, 06:46:07 pm
Quote
Photonic noise is the square root of the total number of photons impinging upon the photodetector. Of 16 photons impinging upon the 8 micron photodetector, 4 will be noise. Total noise for that area of sensor is 25%.

If we cover the same area with 4x4 micron photodetectors, each photodetector will receive 4 photons, two of which are noise. Total noise for that area of sensor is 4x2=8 photons. Ie., 50% noise.
Wrong, or at least irrelevant, because you ignore that noise is a mixture of positive and negative variations around the "true" value, so that when signals are merged there is some cancellation of positive and negative noise values, and total noise increases less than in proportion to the number of signals combined. For the common and simple case of uncorrelated noise, noise levels combine in root-mean-square fashion, and so total noise grows as the square root of the number of values combined.

Let me rework your example of using either one big photosite or four smaller photosites to gather light from a given part of the image, and then combining (binning?) the four small photosite signals to get the same resolution as the big-photosite sensor.

Say a larger photosite receiving light from a subject of a certain illumination level should gather 16 photons, but due to noise, the resulting electron count is "16 plus or minus 4", meaning that the various photosites receive an average of 16 photons each but with fluctuations above and below that value, of standard deviation sqrt(16)=4.

If the subject is instead photographed with a sensor whose photosites are one quarter the area, each will give an average count of four, with standard deviation sqrt(4)=2.  If the signals from the four small photosites covering the same part of the subject as one big photosite are combined (binned), the average signals simply add, to a total of 16, while the four standard deviations (noise levels) of 2 combine as follows:
sqrt(2^2+2^2+2^2+2^2) = sqrt(16)=4,
EXACTLY the same signal and noise standard deviation as if you had used one bigger photosite to start with.
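BJL's binning argument above can be restated numerically (a sketch of the same arithmetic, using his example values):

```python
# One big photosite vs four binned small ones, shot noise only.
# Shot-noise standard deviation is sqrt(signal); uncorrelated noise
# combines in root-mean-square fashion when signals are summed.
import math

# One big photosite: 16 photons on average
signal_big = 16.0
noise_big = math.sqrt(signal_big)        # sqrt(16) = 4

# Four small photosites of 4 photons each, summed (binned)
signal_binned = 4 * 4.0                  # signals add: 16
noise_binned = math.sqrt(4 * 2.0 ** 2)   # RMS sum: sqrt(4 * 2^2) = 4

print(signal_big / noise_big)        # SNR = 4.0
print(signal_binned / noise_binned)  # SNR = 4.0 -- identical, as argued
```

The signal-to-noise ratio comes out identical either way, which is the whole point of the correction to Ray's 50%-noise figure.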


Actually, this fancy mathematics is not needed in the case of photon noise, which you should remember is variation in the light arriving at the sensor, not something caused by the sensor itself. Clearly, whether you produce each "big pixel" by counting the light arriving at a certain part of the sensor with one big photosite, or use four smaller photosites and then combine the totals into a single output "big pixel" value, the total light received will be the same, and thus the variation between neighboring big pixel values will also be the same.

Thus as far as photon noise goes, aggregating data from more, smaller photosites gives the same S/N ratio as if fewer larger ones were used.
This also works if the aggregation is done by printing the smaller pixels at higher pixel density to get the same image size, and viewing from a distance at which the lower-pixel-count image is not visibly pixelated: the smaller pixels will then be too small to resolve, and so get visually averaged by the eyes. Conversely, if the smaller pixels are big enough to resolve, their worse per-pixel noise might be detected, but the alternative evil with bigger pixels is visible pixelation!
Title: Is This The End Game?
Post by: Gary Ferguson on July 16, 2005, 10:43:22 am
News of a 39MP medium format back makes me wonder if the digital growth curve, at least in terms of pixel count, is starting to flatten out. I'm finding that with a Canon 1Ds Mk II it's the available wide-angle lenses and my ability to hand hold that's the limiting factor, not the sensor.

Anyone any thoughts? Are we approaching the pixel count end game, and where's the practical limits for 35mm and MF?
Title: Is This The End Game?
Post by: Jonathan Wienke on July 17, 2005, 02:03:03 am
Quote
For those deeply into the topic I recommend the scientific study
of Bayer-array-based digital capture vs. film, on both
resolution and dynamic range, at ClarkVision.
Clark's spouting the same old silliness that film snobs have been using to claim that film is better than digital - claims that have been thoroughly refuted in practical experience for several years now. He claims that 16 Bayer megapixels are needed to match 35mm Velvia slides, which is rather easily debunked. The 11MP 1Ds is capable of matching or beating the best drum-scanned 6x7cm medium format film images, and the 16.7MP 1Ds-MkII is even better. (See this article (http://www.luminous-landscape.com/reviews/shootout.shtml) for a direct comparison.) If 11MP can hold its own against 6x7 film, then obviously it is easily capable of surpassing 35mm film of any persuasion. This is not theory, but the direct result of observation and comparison, and in my case, shooting over 50,000 frames with my 1Ds. Comparing the overall image quality of those frames to 35mm film shot through the same lenses with a film body attached, it's obvious that the 1Ds is far better than the 35mm film in every respect.

Good science can accurately predict the results of real-world applications of theories and premises. Clark's science (or at least his math) is severely flawed because real-world comparisons between digital and film deviate dramatically from Clark's claims and predictions. He's not an example of a credible source of information or good scientific analysis.
Title: Is This The End Game?
Post by: on July 18, 2005, 08:15:56 am
Folks...

The point that Jonathan makes is quite correct.

Each grain of silver in a fine grain film is 1 to 2 microns in size, while a digital sensor may be 5-8 microns. One would assume from this that film can outresolve digital.

But, any individual grain can either be on or off, black or white. It takes 30-40 grains in a random clump to properly reproduce a normal tonal range. On the other hand each individual pixel can record a full tonal range by itself.

So a 100% MTF target will be recorded better by film (because the edges are either black or white), but with anything photographed in the real world digital's advantage in this respect immediately becomes clear.

It always amazes me when people defend theoretical positions which are clearly contradicted by reality. Working photographers with experienced eyes know what they are seeing, and so do their hyper-critical clients who are paying the bills. When someone tells them that the evidence of their eyes is wrong, all one can do is smile and shake one's head. The sad part though is when people who don't have the direct personal experience to contradict the theoreticians are intimidated into believing them.

Michael
Title: Is This The End Game?
Post by: filmless on July 18, 2005, 04:21:24 pm
Michael posed the question "Kodak or Dalsa?" on the new 39MP sensor. I do not have any information from Phase One, but I do know another digital back manufacturer has been testing a Kodak 39MP chip of late, so the odds would certainly favor Phase One using the same Kodak chip.

Tim Palmer
Capture Integration
Title: Is This The End Game?
Post by: Mark D Segal on July 19, 2005, 08:52:13 am
Samir, if we must, go to Photodo (albeit 5 years since the last updating) and look at their ratings for Leica, Canon and Nikon lenses. The Leitz Summicron 50mm f/2 - an old warrior from decades ago - has a rating of 4.8. There are only 2 Canon "L" lenses and NO Nikon lenses that approach or equal that rating.

Dr. Zha, with all due respect to your expertise in medical imaging and lab-like testing, which I shall never have, let us confine ourselves to non-medical photographs - and in that context the comments you made in your last post simply confirm what I was saying about the role of post-capture processing. Thank you. To add a bit of insight to this aspect of the discussion, anyone using PK Sharpener Pro will know that with that tool and a bit of imagination you can achieve just about any "film-like feel" or any other "feel" in relation to sharpening or softening by choosing the appropriate option and knowing how to move an opacity slider.

As for grain and dust removal at the scanning stage - I suspect that most of the qualities you ascribe to the Imacon in this respect are software related, though never having used one and not knowing much about its underlying technology, that is just a deduction on my part. My experience indicates one can use much cheaper scanning solutions and produce scans that minimize grain and retain superb detail, though it perhaps takes more time and ancillary software use than what you describe. Not owning an Imacon, I do not try to deal with grain and dust at the scanning stage, because I can do it more efficiently and with more process control at the post-scanning stage.

Jonathan - absolutely - unless it's for effect, I also don't see the point of adding noise and grain to otherwise clear, clean photographs. Why muck up what digital technology is allowing us to escape from?

Going back to the initiating comments in this discussion thread, Gary Ferguson asked a question and Michael took a cogent stab at an answer, which sounds OK until one starts to think about how many such predictions in the past have been stood on their heads. But what could be even more interesting here is for contributors to take a long look down the whole chain of digital imaging, from capture to final print, and ask what is the weakest link in the chain. Every time I am about to make a print and I depress "CTRL Y" (Soft Proof) I think I know the answer, because it hits me in the face straight from the monitor, but I'd be interested in seeing what others think about "the weakest link" - apart from the tired and specious comparisons of differences between a 1Ds and a MK2, or a D2x and 1Ds Mk2, and this lens versus that lens, or scanning film versus digital.
Title: Is This The End Game?
Post by: BernardLanguillier on July 19, 2005, 07:13:22 pm
Quote
(1) Foveon (or its senior scientists) have never claimed anything like a 3x resolution advantage for the Foveon sensor over a Bayer. Don't confuse what Foveon fanatics say with what the company actually says.
Hi there,

I know they didn't, but never intended to write that they did.

Cheers,
Bernard
Title: Is This The End Game?
Post by: BernardLanguillier on July 21, 2005, 12:37:54 am
Thanks for the information John.

Regards,
Bernard
Title: Is This The End Game?
Post by: Ben Rubinstein on July 22, 2005, 11:08:05 am
Quote
That means that the lowest-order bits have to be some combination of guesswork and garbage.

Would that be the reason why the underexposed areas suffer from noise and lack of detail and resolution?
Title: Is This The End Game?
Post by: BJL on August 18, 2005, 11:39:07 am
To Jonathan in particular,
   thanks for a lot of useful contributions in this thread, whose end seems not to be in sight.


A couple of comments.

a) I am glad that Jonathan might be joining my club advocating "pushing pixels smaller than lenses can resolve to eliminate aliasing ("oversampling"), then using downsampling or more selective noise reduction processing as needed to control visible print noise."

b) that average of 8 bits per pixel of information content is interesting, and makes sense; for one thing, a typical mid-tone pixel near "18% gray" has three or four leading binary zeros.

c) blurring (AA, OOF) does not necessarily lose information, but sometimes mostly just redistributes it, by averaging over nearby pixels. Thus careful "deconvolution" can rearrange the information back closer to where it originally was: some of the processing Jonathan describes and illustrates is not necessarily guilty of losing information at each step.

d) not all visual data are equally useful to the eyes viewing a photograph, and most of the extra information provided by an X3 (Foveon) sensor is of far less significance than that already gathered with a Bayer CFA sensor. This is because the extra data relate to color information at the extremes of the visible spectrum (red, blue), while luminosity and mid-spectrum (green) information is more important to the eye.

e) I think Ray is onto something important about Foveon sensors and noise: the lower two color layers get significantly attenuated light, and all three layers detect a broad spectral mix of colors that has to be "deconvolved" to produce RGB output. That makes the noise problems distinctly worse for a Foveon photosite than for a Bayer CFA photosite of the same size.

f) Perhaps we should ignore posters who spout anti-digital cliches, including ones from the old "CD vs LP" wars.


BJL, Ph. D. in and professor of Applied Mathematics, author of various publications in physics/optics journals, and cynic about people who try to bolster their arguments by flaunting academic credentials
Title: Is This The End Game?
Post by: Ray on August 30, 2005, 10:47:25 pm
Quote
I am not at all sure, but I believe that you are partly right; total "dark noise" is dominated by read out noise rather than sensor dark current noise, except in fairly long exposure times where dark frame subtraction becomes useful for cancelling part of the dark current noise.
Well, better than being completely wrong  :D .

I agree. It would be unreasonable to expect read-out noise to be independent of pixel size. There's probably some variation, but it's not proportional. That's really my point.

Perhaps more significant is the increasing role of photonic noise, as a percentage of total picture noise, as the pixel size decreases. Although, again I'll have to rely on your mathematical expertise to confirm this. But as I understand it, an area on the sensor consisting of one 8 micron pixel would be subject to less photonic noise than the same area covered by 4x4 micron photodetectors.

The reasoning is as follows. I'll use unrealistically small numbers for the sake of simplicity.

Photonic noise is the square root of the total number of photons impinging upon the photodetector. Of 16 photons impinging upon the 8 micron photodetector, 4 will be noise. Total noise for that area of sensor is 25%.

If we cover the same area with 4x4 micron photodetectors, each photodetector will receive 4 photons, two of which are noise. Total noise for that area of sensor is 4x2=8 photons. Ie., 50% noise.

Now I'm not sure if this is flawless reasoning. There might well be some other probabilistic theories at work.

The other significant thing about photodetectors is that they are 3-dimensional, hence the analogy of buckets of water when describing electronic charge in photodetectors, or 'well capacity'.

In a situation where the number of pixels on the sensor exceeds the maximum resolving capacity of the lens, so we don't have to use an AA filter, we have presumably dispensed with microlenses. The naked sensor is covered with just a clear piece of glass. How do the photons reach into the well - which has to be just as deep with the smaller pixel to maintain well capacity - with an unavoidably narrower aperture?

Note: I'm using pixel and photodetector interchangeably because in the Bayer type system there appears to be no distinction.
Title: Is This The End Game?
Post by: on July 16, 2005, 11:43:18 am
Interesting question.

My guess is that 35mm will top out at 22MP, and 645 format at 39MP.

Even today the 16MP Canon 1Ds MKII is pushing up against the performance limits of current lenses. A new generation of digitally optimized primes, or high-end zooms, could support 22MP, but beyond that the actual size of individual pixels will drop too low, and the laws of physics mean that the S/N ratio will render higher resolutions impractical. Such lenses wouldn't be cheap, either.

With medium format the same applies, because to go much beyond the sensor resolution of the P45 would make the individual photo sites too small. It remains to be seen, but my guess is that the P45 will push even the superb Zeiss lenses on a Contax to their limits.

That isn't to say that for competitive reasons some chip maker might not come up with denser sensors, but given that a P45 now allows us to make 24X36" prints without ressing up, I think we're near the end-game, as you point out.

But there are many other games to be played, including sensitivity, on-chip noise reduction and so forth, so the end of history isn't quite yet in sight.

Michael
Title: Is This The End Game?
Post by: Jonathan Wienke on July 18, 2005, 01:48:54 am
Quote
It's a well known fact that, with a good prime lens, a sturdy tripod, flawless technique and the right type of film developed in the appropriate developer to bring out the greatest detail, it's possible to record detail up to 100 lp/mm on 35mm film.

How does this compare with the 45-55 lp/mm limit of the 1Ds?
The catch is that this is only achievable for high contrast subjects like B&W resolution test charts and the like. Real-world subjects require much more film area for the grain patterns to average out to a given color, in much the same way that inkjet printers can achieve extremely high resolution (say 1440 DPI for most Epsons) for text and other high-contrast subject matter, but due to the dithering required to generate arbitrary colors in the midtones, they can't do much better than a 300DPI continuous-tone LightJet for most real-world images. That is ultimately the flaw in Clark's math; the dithering introduced by film's grain structure significantly reduces the effective usable lp/mm for any subject matter that is not high-contrast black & white, typically by a factor of 3 to 5. In contrast, this limitation does not apply to digital capture, except for Bayer-type cameras and highly saturated colors (color interpolation algorithms do not work well when you have adjacent pixels clipped to minimum and maximum values).

If you're shooting high-contrast B&W test charts, Clark's assertions are fairly accurate. But when shooting anything else, their validity is completely compromised.
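A back-of-envelope version of Jonathan's argument (all assumptions here are mine): translate a lp/mm figure into an equivalent pixel count for a 24x36mm frame at the Nyquist rate of 2 pixels per line pair, then apply his suggested 3-5x dithering penalty for real-world subject matter.

```python
# Convert a film resolution figure (lp/mm) into an equivalent megapixel
# count for a 24x36mm frame, sampling at 2 pixels per line pair.
def equivalent_megapixels(lp_per_mm, width_mm=36.0, height_mm=24.0):
    px_per_mm = 2 * lp_per_mm  # Nyquist: 2 pixels per line pair
    return (width_mm * px_per_mm) * (height_mm * px_per_mm) / 1e6

chart_mp = equivalent_megapixels(100)      # test-chart figure: ~34.6 MP
real_mp = equivalent_megapixels(100 / 4)   # mid-range 4x grain penalty: ~2.2 MP

print(round(chart_mp, 1), round(real_mp, 1))
```

On these assumptions, 35mm film's headline 100 lp/mm would correspond to ~35MP on a test chart, but only a few effective megapixels for ordinary mid-tone subjects, which is consistent with 11MP digital outperforming it in practice.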
Title: Is This The End Game?
Post by: Ray on July 19, 2005, 02:03:51 am
Quote
And where were all you res buffs when Kodak discontinued Ektar 25? IMHO that's the only color film that approached the creamy skies of digital, and it also resolved close to 200 lp/mm, reputedly the highest-resolution color film ever mass-produced. Yet, today, I still prefer a 1Ds print to an Ektar 25 print. I suspect that's because the old lenses were simply not as sharp as the current lenses.
Hey! I wonder if this is a conspiracy. (Okay! Only joking. I'm not into conspiracy much - but it does exist, of course.)

Before I went digital, I was very impressed with Royal Gold 25 color negative as well as Technical Pan B&W. I've still got a few of both film types in the fridge, now well past their use by date.

Kodak have discontinued both of these films. It seems to me just a little bit odd that in the march towards full digitalisation, Kodak was quick to discontinue the films that could best compete with their DSLR cameras.

Of course, there's no arguing with a statement along the lines, "These films were not popular and therefore not economic to produce".
Title: Is This The End Game?
Post by: Jonathan Wienke on July 19, 2005, 09:54:52 pm
Quote
I thought it had been well established that 9 million Foveon photosites are equal to 6 million Bayer type photosites, ie. the 3 megapixel SD9/10 produces an image quality roughly equal to a D60 or 10D, including resolution.
As RAW converters' Bayer interpolation routines improve, the quality difference is shrinking, and other tools and techniques are closing other aspects of the gap. For example, Focus Magic does a truly excellent job of undoing the slight blurring caused by the AA filter without reintroducing moire or other artifacts. The gap is shrinking, but it will never quite disappear.
Title: Is This The End Game?
Post by: ijrwest on July 21, 2005, 04:12:53 pm
You are on the right track by looking at comparisons with a 'true' image produced by downsampling. That's what I was talking about earlier.

But I don't think using jpeg file sizes is a valid way of comparing the images. You are measuring the amount of information, but you are not measuring whether it is 'true' information.

If you sharpen an image then jpeg doesn't compress it so much and (as you say) you have more information. But the sharpened image isn't necessarily closer to the truth. That's because you may end up making something sharp in the image that was actually not sharp in the real world; and also because you sharpen the noise which makes the image noisier.

Iain West
Title: Is This The End Game?
Post by: BJL on August 30, 2005, 07:37:38 pm
Quote
Quote
As long as dynamic range is not objectionably compromised by doing so
As I understand it, the problem is 'read out' noise.
I am not at all sure, but I believe that you are partly right; total "dark noise" is dominated by read out noise rather than sensor dark current noise, except in fairly long exposure times where dark frame subtraction becomes useful for cancelling part of the dark current noise.

However, it is not true that read-out noise is independent of pixel size: smaller photosites with smaller maximum electron counts seem to have fewer electrons of read-out noise; I suppose that smaller components with less thermal noise generation can be used in the read-out process. At a guess though, the read noise falls more slowly than the maximum well capacity, so the S/N ratio gets a bit worse with smaller photosites.

Digicam photosites as small as 2.5 microns are far from overwhelmed by read-out noise, so DSLR photosites can be shrunk a lot before being too small to be useful on that count. I expect lens resolution limits will bite first.

Read-out noise does set both upper and lower limits on frame read-out rates, so that extremely high pixel counts could be stranded "between the devil and the deep blue sea". Read too many pixels per second and the noise gets worse, but if read-out takes too long, noise also increases. (Maybe dark current keeps accumulating so long as the "signal" stays in the electron wells of the sensor.)

But there are possibly ways to improve this situation, such as the four channel system used in the Sony sensor of the Nikon D2X, which ensures that at least part of the read-out process only has to go one quarter as fast. Maybe this can be extended to ever more parallel read-out channels.
Title: Is This The End Game?
Post by: Jack Flesher on July 16, 2005, 12:02:58 pm
All I can say is that now I really want one.

Let's see...  An H1, five lenses and the P45...  Can I get $50,000 for my left kidney?

Title: Is This The End Game?
Post by: Paulo Bizarro on July 17, 2005, 01:45:32 am
Yesterday I dropped by the local lab to collect my latest "hang from the wall" framed photo. It is a panorama made up of 6 photos shot with the Powershot Pro1, and stitched with the Canon software. Quite simple.

The final assemblage is 85 cm long and 20 cm wide. It looks great on my wall. I think the end game depends a lot on what you want/require, either commercially (if you make a living out of photography), artistically (if you shoot for fine art), or just for the #### of it.

The end game is the print.
Title: Is This The End Game?
Post by: lepingzha on July 17, 2005, 04:13:08 pm
The differential factors are:

1. Subjects.  In the studio digital is fine.  With a lot of low-contrast foliage in landscapes, digital simply breaks down due to its low-pass filtering and noise reduction (especially with Canon).

2. Printing size.  At 13x19 there is little difference to 95% of viewers (although not to me), but at 30x40 prints of digital origin are full of artifacts, while those from 4x5 film originals can still be examined with a magnifying glass for details from the long tail of the film's extended MTF curve.

For those who debunk ClarkVision's science, why don't they take a look at the comparison images Clark dutifully provided?  Pictures say more than the math.  One more study with image comparisons can be found at:

http://www.kenrockwell.com/tech/filmdig.htm

Even in Michael's 1Ds vs. Pentax 67II examples, where he said the digital has more resolution, anyone with unimpaired eyesight can see the scanned chrome provides much more detail, where the digital is all artifacts along the high-contrast edges of the buildings and windows (notice the horizontal line at 2/3 window height from the bottom is completely missing in the digital capture?).  To me this shows that the P67 chrome delivered at least 3x more resolution in Michael's very own comparison.  His argument that "the 1Ds frame above appears to have lower resolution because it is a MUCH bigger enlargement" is simply a way of stating "the digital file can NOT be enlarged as big or all you see are the artifacts".

Noise is another matter.  Many $20K audiophile CD players add noise for a more natural sound.  Someone added a little noise to a D2X file in a DPReview discussion a while ago and made it look much better.  Too little noise in landscape and nature photography creates an unnatural sense of stillness and a lack of depth or dimensionality.  Again it is subject dependent.  I pity those who have never experienced the detailed LP sound from a good system.  The human ear is not a linear Fourier analyser, so although a pure 80 kHz tone cannot be heard directly, it can make the sound (air pressure) wavefront much sharper and enhance the impact of a drum hit or bass string, even when the harmonic's amplitude is very low.

Leping Zha, Ph.D.
www.lepingzha.com
Title: Is This The End Game?
Post by: Jonathan Wienke on July 18, 2005, 11:28:53 am
Quote
This is a myth perpetrated by the Foveon crowd. A Bayer array does NOT lower resolution a factor 3x, EXCEPT if you are imaging in primary blue or primary red. With any normal, mixed color subject or lighting there is remarkably little loss of resolution.
That matches my experience also; the only time Bayer-pattern sensors aren't perfectly capable of resolving single-pixel detail is when the colors involved are so saturated that you have adjacent pixels clipping to minimum and maximum values simultaneously. This image has plenty of single-pixel sized detail, and is a 100% crop from a portrait shot with the 1Ds:

(http://galleries.visual-vacations.com/images/2005-05-31_0021-fm.jpg)

Look particularly at the pores on the back of her left hand, the threads in her blouse, and her eyebrows and eyelashes. This image has had my normal midtone sharpening (http://www.visual-vacations.com/Photography/SharpeningActions.htm) performed on it, as well as a Focus Magic (http://www.focusmagic.com/) pass at radius 2, strength 75%. The Focus Magic pass does an excellent job of reversing the effects of the anti-aliasing filter and really bringing out the detail without introducing artifacts.
Title: Is This The End Game?
Post by: dwdallam on July 18, 2005, 04:11:32 am
Quote
All I can say is that now I really want one.

Let's see...  An H1, five lenses and the P45...  Can I get $50,000 for my left kidney?

I'd rather wait a couple years, save the kidney, and get it for 10K. Prices are dropping fast on high end digitals it looks like. Well, maybe not that high end, so you got me there. But at least for the DSLRs of the professional qualities that were 6K two years ago.
Title: Is This The End Game?
Post by: lepingzha on July 18, 2005, 08:58:42 pm
I quit, since:

1. Nobody sees the artifacts everywhere along the
    edges and over the hairs.  Charles Cramer and
    Bill Atkinson, two of my mentors in digital printing,
    never use a sharpening radius over 0.4, while this
    image was done with at least 1.0.  If everybody
    says this is a good image, then of course you all
    have the right to enjoy your own taste, and perhaps
    there is something wrong with my eyes that makes
    them see those thin lines in the window in Michael's
    film scan that the digital completely missed.

2. Remember this is a landscape discussion group,
    not one meant to deal with studio shots like this,
    which favor digital capture from the start.

3. Bayer-based digital pixel values are NEVER a
    true representation of the light the sensor sees.
    They are merely mathematical estimates, or
    well-conditioned guesses, depending on the
    algorithm in the raw converter.

Thanks to all involved,
Leping Zha, Ph.D. in Physics
www.lepingzha.com
Title: Is This The End Game?
Post by: Ray on July 18, 2005, 11:07:19 pm
Quote
All Jonathan's image tells me is that one has enormous post-capture latitude with image processing to make them look as "film-like" or "digital-like" as one wants
Quite so. Now all I ask of technological development in the near future is that it provide me with an affordable desktop scanner with the performance of a $50,000, 10,000 dpi drum scanner, an ICE program that removes scratches from Kodachrome and silver-based B&W negatives without reducing fine detail in any shape or form, and the same for grain reduction technology.

Now that's not too much to ask for, is it?  :D
Title: Is This The End Game?
Post by: Jonathan Wienke on July 19, 2005, 03:01:15 am
Quote
I only mentioned that to my eyes the added noise made
the image look more natural.  This has nothing to do
with the general quality of DPreview forum discussions,
to anyone with a sound sense of logic.  At least nothing
was labeled "bull" in the DPreview forum...
If you truly think that one must add noise to an image (or an audio recording) to make it "natural", then your idea of "natural" deviates from the commonly accepted norm. When I'm listening to someone playing a harp, I don't hear white noise, so I see no reason to add it to an audio recording to make it more "natural". When I'm looking at a landscape scene, I don't see film grain patterns or sensor noise artifacts, therefore adding or celebrating such things in recorded images has nothing to do with "natural". At best it's disguising one type of artifact by adding a less objectionable artifact to cover up the first, but such an act cannot possibly make the result more "natural"; it can only make the image deviate even further from the original scene than before.

Quote
However for Velvia it is very different.  What
made me speak was nothing but Michael's stretch that
the P45 will beat scanned 8x10 in all cases.  Again this
is a discussion group for landscape photography, and
over 90% of film-based landscape photographers
shoot chromes, not negatives.

Velvia is not magic. And Michael said the P45 "will become an object of desire for any photographer looking for what will likely be almost 8X10" sheet film quality", not that the P45 "will beat scanned 8x10 in all cases". Those are two distinctly different things; you're fabricating a strawman argument here.
Title: Is This The End Game?
Post by: BernardLanguillier on July 19, 2005, 11:29:28 am
How could I not think about it myself...

Cheers,
Bernard
Title: Is This The End Game?
Post by: Ray on July 19, 2005, 09:29:53 pm
I thought it had been well established that 9 million Foveon photosites are equal to 6 million Bayer-type photosites, i.e. the 3-megapixel SD9/10 produces image quality roughly equal to that of a D60 or 10D, including resolution.

Why do we not yet have a 6 megapixel Foveon based camera which should theoretically equal the performance of a Nikon D2X? I can only presume it's because of some inherent design weakness that's too difficult to overcome, namely noise at high ISOs.

The trend is clearly towards more usable high ISOs, with Canon raising the bar with each new model. Having two additional sensor layers that some of the light has to pass through must weaken the signal to some degree.
Title: Is This The End Game?
Post by: Mark D Segal on July 20, 2005, 09:43:40 am
For the sake of better understanding, I decided to revert to the fundamentals underlying the discussion about Foveon versus the Bayer striped array, so I went back to Bruce Fraser's "Real World Camera Raw with Adobe Photoshop CS" and re-read chapters 1 and 2, where all this is discussed in some detail. What I retained from this review are the following propositions (liberally interpreted by me):

(1) Each photosite captures one pixel of colorless information, and the more bits per pixel, the more the editing headroom before information gaps become noticeable.
(2) Pixels only deliver resolution when given dimensions: the higher the PPI, the smaller the pixels, and up to a limit the better the apparent image quality.
(3) Pixels are assigned colors by filtering. Foveon differs from the striped Bayer array by layering the color filtering instead of arraying it.
(4) The main difference (3) makes is that Bayer array data needs to be demosaiced in the RAW converter but the Foveon data does not.

What I deduce from the above four points is that (a) methods of assigning color to pixels (items 3 and 4) should not bear any necessary relationship to the factors determining apparent resolving power attributable to the capacity of the sensor (items 1 and 2), and (b) by inference from material on page 5 of the same reference, Foveon may have some advantage in producing more accurate color rendition (not more resolution) in those cases where the detail is captured on only one pixel in a Bayer striped array filtered to one of the three primaries, making accurate color rendition difficult. The frequency of these situations would seem to be small, based on the observation that color rendition from Foveon chips appears no better than, and indeed probably not as good as, that from today's non-Foveon CMOS sensors on professional DSLRs.
Title: Is This The End Game?
Post by: Ray on July 21, 2005, 12:47:15 am
This is pixel peeping to the Nth degree. That's fine with me. Just bear in mind that the final results on the print will probably ignore most of the subtleties discussed, and if they don't, our eyes will.
Title: Is This The End Game?
Post by: Jonathan Wienke on July 21, 2005, 02:04:14 pm
Quote
It would appear to my un-informed ears that the lossless compressibility of an image is not by itself a proof of the existence of a gap between data and actual information.
You may find these references useful reading:

http://szabo.best.vwh.net/kolmogorov.html (http://szabo.best.vwh.net/kolmogorov.html)
http://hornacek.coa.edu/dave/Tutorial/notes.pdf (http://hornacek.coa.edu/dave/Tutorial/notes.pdf) (applicable section starts at 1.2.6, LOTS of advanced math)
http://photonsstream.net/ (http://photonsstream.net/)
http://www.google.com/Top...._Theory (http://www.google.com/Top/Science/Math/Applications/Information_Theory/)

If you read up on information theory, you'll discover that not only is this indeed the case, but it is provable to the point of being axiomatic. Whether it is the result of a degree of uniformity or redundancy in the image data (which is always the case in real-world images) or some other factor is irrelevant. The legitimate presence of patterns or uniform areas in an image still reduces the amount of information required to describe the subject with whatever level of accuracy you choose to specify.

Lossless compression by definition has a 1:1 mapping between input and output; for any output, there is only one possible input. All that is happening during lossless compression is that redundancies in the input data are being restated using the fewest bits possible in the output. The ratio between the number of input bits and output bits (given an efficient compression algorithm) can be used to measure the amount of actual information present in the source data.

If you make a photograph of a section of perfectly smooth green wall that is perfectly evenly illuminated with a camera that has a perfectly noise-free sensor, the resulting RAW file will only contain a very small amount of information, something along the lines of "make a rectangle composed of X by Y pixels with this RGB value in this color space". Given efficient lossless compression encoding algorithms optimized for RAW data, no matter what the resolution of the camera, the RAW data could be losslessly compressed to a few kilobytes. No matter how much data you use to describe that small amount of information, the amount of actual information is constant.

Real-world RAW files are not so easily compressible, as there is a lot more information to deal with. Subjects have detail which requires a minimum amount of information to describe with an acceptable margin of error, lighting is not perfectly uniform, and some bits of information are always wasted describing the details of that frame's sensor noise pattern or film grain. But you can still use lossless compressibility to measure the actual information content of the RAW data, as long as you keep in mind that the compression algorithm cannot distinguish sensor noise from true image detail and so some of the information that survives compression is useless noise.

Now for a practical application of all this theory: I decided to use lossless compressibility to attempt to quantify the amount of actual information in a true RGB image as opposed to a Bayer interpolated one. Here's what I did:

1. I shot two photos of a brick wall with my 1Ds on a tripod wearing the 70-200/2.8L IS, one at 70mm, and the other at 200mm, converting both with matching ACR settings.

2. I made matching crops from the center of the image, and downsized the 200mm crop to match the 70mm crop pixel-for-pixel, using Photoshop CS2 Bicubic. Given that there are over 8 source pixels for each destination pixel in the reduced crop, the Bayer interpolation and anti-aliasing filter effects are pretty well canceled out, and comparing these two crops is about as apples-to-apples a comparison as is possible. Here they are:

Straight Bayer 1Ds 70mm crop:
(http://galleries.visual-vacations.com/images/Bayer.jpg)

And the size-reduced, simulated-Foveon 200mm crop:
(http://galleries.visual-vacations.com/images/SimFoveon.jpg)

Except for RAW conversion, and the bicubic resizing of the second crop to size-match the first, no other image edits have been done.

3. I saved both images as losslessly-compressed JPEG 2000 files. As expected, the simulated-Foveon image is larger: 914KB vs 826KB for the straight Bayer crop. If my methodology is valid, that would indicate that a true RGB sensor (such as the Foveon) would capture about 10% more real image information for this particular subject. That seems a bit low looking at the unprocessed files; the uninterpolated RGB image appears to have much more detail than the Bayer-interpolated one. Therefore,

4. I applied a Focus Magic sharpening pass to both images to try to make each of them look as good as possible:

Straight Bayer (radius 2, amount 125%):
(http://galleries.visual-vacations.com/images/BayerFM.jpg)

Simulated-Foveon (radius 1, amount 50%):
(http://galleries.visual-vacations.com/images/SimFoveonFM.jpg)

The difference has decreased somewhat, though it is still there. The main difference is a somewhat subtle "artifacty" look to the upper-right quadrant of the straight Bayer crop that looks more natural and detailed in the simulated-Foveon crop.
Title: Is This The End Game?
Post by: Jonathan Wienke on July 21, 2005, 07:51:55 pm
OK, perhaps I misunderstood your original point then. I think perhaps we're on the same page, but just saying things in different terms.
Title: Is This The End Game?
Post by: Big Bird on August 16, 2005, 11:21:57 pm
This is an interesting discussion; here are my thoughts. I think in terms of MP, we are near the "end game". Profitability is determined by selling a product to a lot of customers. The average consumer is definitely there, as the average person hardly ever wants a print bigger than a 4x6, let alone 13x19 and up.
The average consumer will drive this thing; the so-called "advanced amateur"/professional will essentially live with the end result. Market position at the stabilization point is, I think, the manufacturers' goal at this stage.

My 2 cents.
Title: Is This The End Game?
Post by: Ray on August 19, 2005, 01:08:37 am
Quote
As long as dynamic range is not objectionably compromised by doing so
As I understand it, the problem is 'read out' noise. Each pixel's value has to be 'read', and that process introduces some inevitable degree of noise. The 'read out' noise is not proportional to pixel size: the read out noise of a 1 micron photosite is essentially the same as that of a 10 micron photosite. Right?
Title: Is This The End Game?
Post by: Gary Ferguson on July 16, 2005, 12:38:12 pm
Quote
Can I get $50,000 for my left kidney?

If it's a "50 Jahr" limited edition kidney then why not!
Title: Is This The End Game?
Post by: Ray on July 17, 2005, 12:21:13 pm
Quote
“At 50 lp/mm, close to the resolution limit of the 1Ds”
Where do you find this information?
It's difficult to find precise information on such matters, so one has to extrapolate from other, more readily available information. DPReview has some very thorough analyses, but they are based on JPEG images. Norman Koren has a very detailed analysis of his Canon 10D which puts the resolution limit at 54 lp/mm. The 10D sensor is denser than the 1Ds sensor, as well as having lower noise per pixel, so one can be sure the 1Ds's limit is going to be lower than that by a significant margin.

I believe the resolution limit of the 1Ds is around 45 lp/mm, but I'm not certain.
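A rough sanity check is possible from the sensor geometry alone. A back-of-the-envelope sketch (the 4064 x 2704 pixel count and ~35.8 mm sensor width are the 1Ds's published specifications, not figures from this post):

```python
# Hedged sketch: theoretical Nyquist ceiling of the Canon 1Ds sensor,
# assuming the published 4064 x 2704 pixel count and ~35.8 mm sensor width.
sensor_width_mm = 35.8
pixels_across = 4064

# Nyquist: two pixels are needed to resolve one line pair.
nyquist_lp_mm = pixels_across / (2 * sensor_width_mm)

print(round(nyquist_lp_mm, 1))  # prints 56.8
```

The anti-aliasing filter and Bayer demosaicing push real-world resolution below this theoretical ceiling, which is consistent with an estimate in the 45-55 lp/mm range.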
Title: Is This The End Game?
Post by: BernardLanguillier on July 17, 2005, 11:01:06 pm
Let's assume for a second that Ansel Adams is someone whose work can be trusted.

My view of his work is that he devoted 90% of his energy to the control of the exposure, what we would call today DR (I know I am over-simplifying, but you get the idea).

That is IMHO the area of digital imaging where progress could actually result in images that most people would perceive as better.

Very few images are printed at sizes larger than those at which a 1Ds2, D2X or digital backs already excel (without even mentioning the possibility of stitching).

That 1% of images will benefit from higher resolutions, while the 99% remaining would only benefit from more DR and better transitions to blown highlights.

This is why I see the sensor of the Fuji S3 as one of the most interesting developments of recent years. It doesn't take much to anticipate the delivery of a high-res digital back from Fujifilm based on this technology, although this is pure speculation at this point in time. The high ISO performance of the F10 that I bought about a month ago makes me think that they could push the envelope real far this time...

Regards,
Bernard
Title: Is This The End Game?
Post by: Jonathan Wienke on July 18, 2005, 05:06:11 pm
Quote
To my eyes the image is FULL of interpolation artifacts and
gross oversharpening.  It tends to hide the low-contrast
details in hair and foliage, and makes them jump out
suddenly when the local contrast passes a threshold.
And which part of the image are you referring to? The only artifact I see is a bit of a halo between the hand and the background, which doesn't really show up in print. Everything else is pretty faithful to the original scene/subject.
Title: Is This The End Game?
Post by: Mark D Segal on July 18, 2005, 10:36:55 pm
Ray, your contribution has just synthesized in my mind the pointless character of a lot of the discussion in this thread - I too looked at Jonathan's image, and quite frankly it tells me nothing about any technological end game - which by the way I think is also a pointless concept. It has been proven time and again that there is no such thing in so many scientific/technical fields; I am surprised we are even discussing it here. Continuous advances in materials technology and other scientific research always end up standing today's perceived limitations and technology forecasts on their heads. And there is no reason to expect it to be different in the field of digital imaging any time soon.

All Jonathan's image tells me is that one has enormous post-capture latitude with image processing to make it look as "film-like" or "digital-like" as one wants - I put those expressions in quotation marks because, as I use them, I find they are also nonsensical. My contribution further back about the tired subject of film versus digital was precisely to make this point - with enough skill and effort, and beyond a certain threshold of technical quality in the hardware and software, post-capture processing lets us make just about anything out of anything to taste, and it will be hard for all but the best-honed, most experienced eyes to tell what it started from - within, say, an A3 size constraint for non-MF work.

Yes, sensor type X will be better and worse than sensor type Y in different respects simultaneously - but given the limitations of what our printers can replicate, I would think it useful to see this discussion focus on the real-world differences in printed results emanating from all these learned observations about the laws of physics and MTF charts. That might constrain the meaningful content of the discussion quite a bit!
Title: Is This The End Game?
Post by: Ray on July 18, 2005, 09:08:40 pm
I see what leping is talking about. The image has a slightly brittle, overly bright and contrasty appearance as opposed to the more mellow and natural, smooth gradations one would expect with MF film. The image 'jumps out' at you in a startling fashion. Nevertheless it's very appealing.

In any case, if Jonathan has deliberately manipulated the image in this way in an attempt to highlight 'pixel' definition, then that's to be expected, and as always, one would expect the final result in print to be slightly different, because print is a reflective medium and one's monitor is transmissive.
Title: Is This The End Game?
Post by: Jonathan Wienke on July 19, 2005, 11:46:21 am
Quote
Pure information theory dictates the fact that a Foveon sensor with 3 million pixels has 9 million photosites, which is 3 times more than a 3 million pixel sensor using bayer interpolation. It will therefore deliver 3 times more information.

I am fully aware that the resolution will not be 3 times higher, since resolution is very much influenced by the tone which is captured in each an every of the 3 million pixels of the Bayer sensor, just like it is by the Foveon sensor. But I didn't write "resolution", I wrote "information", which is the same as "data" in my mind.
That's where the quibbling starts to crop up; information and data are not quite the same thing. Data can contain information, but if there is less information than data, the data can be compressed down to approximately the size of the actual information it contains. For example, Canon 1Ds RAW files pack about 11MB of information into about 16.5MB of data, which means that lossless compression can reduce the data size to match the amount of actual information, approximately 11MB. Similarly, a 1000x1000 pixel TIFF image that is pure white (RGB 255,255,255) can be compressed very small, because the only real information it contains is "make a 1000x1000 pixel white square", even though the uncompressed file contains 3 million bytes (assuming 8-bit) of image data. On the other hand, a 1000x1000 pixel landscape image with lots of foliage and wispy clouds and a stone wall contains much more real information even though the amount of data is identical, and therefore it cannot be losslessly compressed nearly as much as the white square.
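The white-square example is easy to demonstrate with any general-purpose lossless compressor. A minimal sketch using zlib as a stand-in for an image codec (random noise stands in for dense real-world detail; the exact byte counts are illustrative, not from the post):

```python
import zlib

import numpy as np

rng = np.random.default_rng(0)

# 1000x1000 pure-white 8-bit RGB image: 3,000,000 bytes of data,
# but almost no information.
white = np.full((1000, 1000, 3), 255, dtype=np.uint8)

# Same amount of data, filled with unpredictable per-pixel "detail"
# (random noise is the worst case, like dense foliage plus grain).
detailed = rng.integers(0, 256, size=(1000, 1000, 3), dtype=np.uint8)

white_size = len(zlib.compress(white.tobytes(), 9))
detailed_size = len(zlib.compress(detailed.tobytes(), 9))

# The white square compresses to a few kilobytes; the noisy image
# barely compresses at all, despite identical uncompressed sizes.
```

Identical data sizes, vastly different compressed sizes: exactly the gap between "data" and "information" described above.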
Title: Is This The End Game?
Post by: Jonathan Wienke on July 19, 2005, 11:49:03 pm
How about a practical one?

Here's the 100% crop of the portrait I posted recently, but straight from the RAW converter, no USM or anything else:

(http://galleries.visual-vacations.com/images/2005-05-31_0021-raw.jpg)

At this point the image is rather soft and mushy-looking, and lacks contrast and "snap". After applying several rounds of USM at varying radius and amount settings to sharpen and add local contrast, it looks a lot crisper, but still has a certain "softness" to the edges due to the AA filter:

(http://galleries.visual-vacations.com/images/2005-05-31_0021-usm.jpg)

At this point the image is in the condition in which most people make the "3MP Foveon = 6MP Bayer" comparison, rightly pointing out that the Foveon image has better edge "crispness" and more easily distinguished single-pixel detail than the Bayer image. But as the next image shows, this conclusion is a bit premature.

(http://galleries.visual-vacations.com/images/2005-05-31_0021-fm.jpg)

A pass with Focus Magic deconvolves the AA filter blur, and now the pixel-level clarity of the Bayer sensor image is much closer to that of the Foveon. The Foveon sensor still enjoys some advantages, such as being relatively immune to moire and some color interpolation artifacts (especially in highly saturated colors), meaning it can still deliver slightly more accurate colors in fine details, but it's no longer a night-and-day difference, and I'd be willing to bet that most pixel peepers wouldn't be able to guess the source sensor type much better than 50% of the time in a blind comparison test.
Title: Is This The End Game?
Post by: Jonathan Wienke on July 20, 2005, 12:17:33 pm
Quote
Regarding Foveon, where did you get these 120 to 200% values from if I may ask? Aren't those figures experimental data that were impacted by the noise resulting of the Foveon implementation of the multi-layer sensor idea? I still don't see any theoretical justification for those.
I'm basing that on my comparisons of 100% crops of SD9 and SD10 images to 100% crops from my 1Ds and 1D-MkII. I'm only putting those figures out as a rough estimate, not as something scientifically precise. I'm not sure that anyone has devised a way to objectively measure image quality that goes beyond simple S/N and resolution measurements and takes into account color accuracy, the visual acceptability of whatever image artifacts may be present, and other such issues. Given that, there is currently no way to precisely quantify the image quality differences between Bayer and Foveon.

Quote
Your example would probably be closer to the reality if water and oil were mixed to create a suspension. Removing a fixed amount of liquid on the top would affect less oil than if no water had been added, but it will still affect some.

No, my analogy is accurate as-is. When you manipulate an image, the rounding errors and entropic losses are introduced in the least significant bits, and gradually work their way into the more significant bits as one performs more edits to the image data. The whole point of 16-bit editing is to keep the rounding errors and other entropic reductions in the bits that are made-up anyway, so losing some of them does not compromise the actual information.

Another way of looking at it: if you edit a 16-bit image to the point that only every eighth level is populated, you have invalidated or lost the least significant 3 bits of image data (2^3 = 8). If you continue editing until only every 32nd level is populated, you have now invalidated or lost the least significant 5 bits (2^5 = 32). Since the true image information is contained in the most significant bits of the image data, you have to lose/invalidate approximately 7 bits worth of image data (toothcombing the histogram so that only every 128th level is populated) before you start corrupting or losing any of the real image information. That's pretty tough to do; a sensible workflow (convert RAW, adjust levels/curves, moderate color tweaks, and sharpen) is only going to introduce 1-3 bits of entropy losses (maximum toothcombing of the histogram to every eighth level or so), which still leaves you at least 3-4 bits worth of buffer between the real image information and the entropic garbage.
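The bit accounting above is easy to verify numerically. A minimal sketch (synthetic 16-bit levels, not real image data):

```python
import numpy as np

# All possible 16-bit levels, standing in for 16-bit image data.
levels = np.arange(65536, dtype=np.uint16)

# Simulate edits that leave only every eighth level populated:
# this zeroes the 3 least significant bits (2**3 == 8).
posterized = (levels // 8) * 8

# The most significant 13 bits, where the real image information
# lives, are untouched by the quantization.
assert np.array_equal(levels >> 3, posterized >> 3)
```

Continuing the same arithmetic, quantizing to every 32nd level would zero 5 bits, and every 128th level would zero 7, matching the figures in the paragraph above.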

Digital audio editing works the same way; it is common to record with either a 16- or 24-bit ADC, pad the data with zeroes to make it 32-bit, edit, and then downsample to 16 bits for the final output. If done correctly, the error of greatest magnitude in the final output data will arise from rounding to the nearest 16-bit value. All of the entropic losses and rounding errors introduced during editing are buried in the least significant 4-8 bits of the data, bits that are thrown away anyway. The rounding error inherent in downsampling to 16-bit audio is far greater in magnitude, but is still acceptable because the 16-bit audio format is good enough, and the result is still the best it can possibly be.
Title: Is This The End Game?
Post by: Mark D Segal on July 19, 2005, 10:02:29 am
Bernard,

Unlike some contributors to this website who are either real or self-imagined experts on the INPUT side of these issues, I am neither, but I do trust my vision and my common sense, so I tend to focus on RESULTS. Looked at from that perspective, whatever the mathematical merits of the Foveon sensor, any test results I have seen really don't give it any "gotta have it" advantages over the more traditional CMOS sensors now being used by Canon and Nikon in their professional cameras. The theoretical basis of this technology sounds promising - getting three for one because of the layering, but its operational superiority remains to be demonstrated.
Title: Is This The End Game?
Post by: etmpasadena on July 21, 2005, 11:44:15 am
Jcarlin sums it up nicely.

Now for Mark, if you want to see how the demosaic process can affect image quality and resolution simply get your hands on a Kodak SLRn/c file (preferably a landscape with lots of fine detail) and decode it in both Kodak's Photodesk and in ACR. You'll find there's quite a bit of difference based on the two different ways those programs decode the files.

Granted, my example probably only applies to the Kodak cameras. But it does show nicely how the demosaic process can affect perceived resolution.
Title: Is This The End Game?
Post by: budjames on July 22, 2005, 03:30:27 am
Perhaps the "end game" is not here yet if you consider that my Canon 1Ds Mk2 with 100-400mm IS lens and a camera bag with 24-70 L and 16-35 L lenses, a flash unit and tele extender plus misc accessories is a pain to carry around all day.

If the weight of carrying all of this fine technology and optics could be cut in half, that would be real progress!

I think the next technical challenge is making the stuff smaller and lighter without compromising speed or resolution.

My 2 cents.

Bud James
Title: Is This The End Game?
Post by: jani on August 09, 2005, 05:15:17 am
Quote
Where will sensors top out?  They may not, at least anytime near term.  Our society's consumption of technology will have to say Enough!, if it's to top out.  I'd think that won't happen anytime soon.
Well, you have a good chance of being wrong on this one count.

Sensors are physically constrained by the limitations of the wavelengths of visible light. So unless you plan to capture images in the far, far ultraviolet, the ease of capturing photons reliably will be a limiting factor. The consumers can't change that.
Title: Is This The End Game?
Post by: Jonathan Wienke on July 16, 2005, 12:48:27 pm
Quote
Can I get $50,000 for my left kidney?
For that much, I'd sell you my ex-wife!!!
Title: Is This The End Game?
Post by: ddolde on July 17, 2005, 11:36:33 am
Quote
which states Michael's projection that the P45 would provide
scanned 8x10 film quality highly improbable.
Ditto that.  I think the P25 will NEARLY match drum-scanned 4x5, i.e. close enough.
Title: Is This The End Game?
Post by: Ray on July 18, 2005, 12:24:36 am
Quote
Clark's figures are seriously inflated.  He claims you need 31 mp just to match 645.  That's hooey.

My Kodak 16mp back easily outperforms 6x6 scanned film. Not just in resolution but in dynamic range, ease of use, and general image quality.
I don't believe it's a case of inflated figures but of different goals. Most people do not take photos with the object of achieving the greatest resolution. There are other, more important considerations such as content, emotional impact, color accuracy, ease of use and handling of the equipment, etc.

So your statement, Doug, can be quite valid for your purposes and Clark's statement could also be true for his purpose of comparing ultimate resolution limits.

It's a well-known fact that, with a good prime lens, a sturdy tripod, flawless technique and the right type of film developed in the appropriate developer to bring out the greatest detail, it's possible to record detail up to 100 lp/mm on 35mm film.

How does this compare with the 45-55 lp/mm limit of the 1Ds?

However, having achieved such high resolution on the film, there would be the next problem of getting it scanned. The average 4000 dpi scanner just wouldn't do, even if it were a drum scanner. My guess is you'd need the best drum scanner available and the film would need to be scanned at 8000 or 10,000 dpi.
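Ray's scanning numbers can be sanity-checked with a little Nyquist arithmetic (my own back-of-envelope sketch; the 1.5x margin over Nyquist is a rule-of-thumb assumption, not a figure from the post):

```python
# Rough Nyquist arithmetic relating scanner dpi to resolvable lp/mm.
# A line pair needs at least 2 samples, and 1 inch = 25.4 mm.
MM_PER_INCH = 25.4

def max_lp_per_mm(dpi):
    """Theoretical ceiling a scanner of this dpi can resolve."""
    return dpi / MM_PER_INCH / 2

def dpi_needed(lp_per_mm, margin=1.5):
    """Dpi needed for a target lp/mm; real scanners need a margin
    above the bare Nyquist rate (1.5-2x is a common rule of thumb)."""
    return lp_per_mm * 2 * MM_PER_INCH * margin

print(round(max_lp_per_mm(4000)))  # ~79 lp/mm ceiling for a 4000 dpi scan
print(round(dpi_needed(100)))      # ~7620 dpi to pull 100 lp/mm off film
```

Which lines up with the claim that a 4000 dpi scan falls short of 100 lp/mm film, and that something in the 8,000-10,000 dpi range would be needed in practice.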

Outside of the laboratory it's difficult to imagine any need for such rigorous procedures. I suppose if I were spying on a military installation from a distant hideaway, trying to photograph stationary vehicle number plates with my tripod-mounted telephoto lens, and I had a choice of a 1Ds digital body or an EOS-1V film body loaded with Kodak Technical Pan, then it would make sense to use the 1V. I'd get the film developed appropriately and extract the number-plate information with a microscope. The 1Ds would be much more convenient to use, but just wouldn't have the resolving power for those distant number plates  :D .
Title: Is This The End Game?
Post by: lepingzha on July 18, 2005, 02:20:55 pm
Quote
Look particularly at the pores on the back of her left hand, the threads in her blouse, and her eyebrows and eyelashes. This image has had my normal midtone sharpening (http://www.visual-vacations.com/Photography/SharpeningActions.htm) performed on it, as well as a Focus Magic (http://www.focusmagic.com/) pass at radius 2, strength 75%. The Focus Magic pass does an excellent job of reversing the effects of the anti-aliasing filter and really bringing out the detail without introducing artifacts.
To my eyes the image is FULL of interpolation artifacts and
gross oversharpening.  It tends to hide the low-contrast
details in hair and foliage, and makes them jump out
suddenly when the local contrast passes a threshold.

The ultimate test for resolution is large prints.  Maybe
the client's taste is changing and they just don't
see all the artifacts and oversharpening that is so
common on those raved prints from digital captures.

Leping Zha, Ph.D.
www.lepingzha.com
Title: Is This The End Game?
Post by: BernardLanguillier on July 18, 2005, 05:43:57 pm
Hi there,

Resolution in a Bayer sensor is perhaps not 3 times lower, but the information captured is, without any possible doubt, 3 times less.

This lack of information is probably perceived in some subtle way sometimes, and sometimes not.

Regards,
Bernard
Title: Is This The End Game?
Post by: Ray on July 18, 2005, 09:39:32 pm
Quote
Comparing results, the 1Ds will still outperform the most carefully executed film scan using slow color film that has excellent shadow detail and a Minolta Dimage 5400 PPI scanner. I know this is an old story by now, but from my personal, immediate - NOW for NOW experience - that's just how it is.
No doubt about it, but even the Minolta Dimage at 5400 dpi will not extract all the detail from the finest grained, highest MTF, sharpest B&W film, such as T-Max 100 and Technical Pan.

Recently doing a net search for comparison details on scanners, I got the impression that a Nikon 4000 dpi scanner would not capture anything beyond 60 lp/mm (but I think this figure was in relation to color film). The Minolta was slightly better. In fact I saw a figure as high as 74 lp/mm from B&W film. Not quite good enough, however, to capture the 100+ lp/mm that could be there on the film.

Unfortunately, as someone has already mentioned, we are rarely able to appreciate the full potential of film because, to get the data digitised, it needs to be photographed a second time (scanned).

There would seem to be little point in being obsessive about technique, restricting oneself to just a few types of high-res film, and going to great expense getting a 10,000 dpi drum scan, etc., when one could just buy a digital camera  :D .
Title: Is This The End Game?
Post by: Ray on July 18, 2005, 10:30:20 pm
Quote
So one would need to apply a bit of blurring to the image and decrease the contrast to make it more film like?
I don't think one would need to apply blurring. Jonathan has already stated that he applied mid-tone sharpening. Maybe less mid-tone sharpening would help. As regards contrast, I see an unnatural lack of detail in some of the highlights of the white grid pattern on the girl's dress. It's like a dress that has its own illumination. But as I said, I accept the fact that Jonathan has just exaggerated an effect to make a point.

In any case, I'm looking at this (a jpeg image) on a medium priced LCD monitor. I'm just talking about general impressions.
Title: Is This The End Game?
Post by: BernardLanguillier on July 19, 2005, 10:07:49 am
Mark,

I don't disagree with you at all. I wrote the very same thing above: "To what extent this will be visible remains to be proven".

I was just trying to clarify a misunderstanding based on a confusion between the words "information" and "resolution". Probably pointless, but hey... I don't have much to do in my hotel room tonight.

Regards,
Bernard

p.s.: for the sake of clarification, I don't consider myself an expert on imaging theory either, but my comments were never about imaging theory in the first place.
Title: Is This The End Game?
Post by: Jonathan Wienke on July 19, 2005, 11:30:43 am
Quote
That word "natural".  

Is it being used in relation to what one sees in the real world?  

Or is "natural" what one is used to seeing when using a more familiar technology (film prints)?
I'm using it to describe what one sees or hears in the real world without any technological intermediaries such as cameras or speakers. I suspect that Dr. Leping tends toward the other usage.
Title: Is This The End Game?
Post by: BernardLanguillier on July 19, 2005, 11:02:25 pm
Quote
No, it is relevant; Foveon sensors generate 3x the data of a Bayer sensor with the same pixel count, but not 3x the actual image information. So the gap between data and information is wider with a Foveon sensor than a Bayer sensor; a Foveon sensor outputs 300% of the data of a Bayer sensor, but only 130% (approximately) of the actual image information. That is why a Foveon sensor will output more detailed images than a Bayer sensor with the same pixel count, but they're not three times better than the Bayer image. As processing and interpolation techniques improve this gap will narrow, but never quite close.
Jonathan,

I would love to get pointers from you to a theoretical demonstration of your claims.

Regards,
Bernard
Title: Is This The End Game?
Post by: jcarlin on July 21, 2005, 10:41:54 am
MarkDS,
   Here is the quick low-down: the demosaicing algorithms assume that color is not likely to change very quickly, and that to the degree it does change, it will look like a smooth transition from one pixel to the next. Let's look at an example of three values representing G-R-G and guess what the green value would be for the R pixel.

120-60-110

Well, we could guess that the R pixel has a green value of 115; we might be wrong, but chances are we're not far off. The algorithms used in RAW converters and digital cameras are more complicated, and beyond the scope of this thread. Here is a link where you can learn more:

http://www-ise.stanford.edu/~tingchen/ (http://www-ise.stanford.edu/~tingchen/)
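John's G-R-G example can be written out as a toy interpolation (a deliberately simplified sketch; real converters use gradient-aware algorithms like those surveyed at the link above):

```python
# Toy demosaicing step: estimate the missing green value at a red
# photosite by averaging its green neighbors. Real RAW converters
# weight by local gradients, but the basic idea is the same.
def interp_green(left_g, right_g):
    return (left_g + right_g) / 2

row = [120, 60, 110]  # G, R, G photosite readings from John's example
print(interp_green(row[0], row[2]))  # 115.0 -- John's guess for the R pixel
```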

The effect this has on sharpness, or apparent sharpness, is that it makes dramatic edge transitions less dramatic, as well as smoothing over micro detail.  In most cases the detail loss is minor, and the apparent loss of detail (what you and I can see) is trivial, but no doubt somebody somewhere will say they can see the difference.  If you want to try to see the difference, take a look at

http://www.dpreview.com/reviews/sigmasd10/page15.asp (http://www.dpreview.com/reviews/sigmasd10/page15.asp)

and the subsequent pages.  In some places you can see an improvement, but in other places the Foveon sensor looks worse.  Not too different from the way any two cameras perform under review.

John
Title: Is This The End Game?
Post by: Jonathan Wienke on July 22, 2005, 02:34:05 am
Quote
The only part of which I am not 100% convinced yet (I am not saying that you are wrong, just that you haven't convinced me yet) is the part where you seem to be saying that the first 2 edits of a 16-bit RAW-converted file (resulting, for the sake of discussion, in the loss of 2 bits each) will only damage artificially created data that was not present as information in the 12-bit RAW file in the first place.
It's a combination of logic and information theory. Information theory proves that your average Bayer RAW file has no more than 8 bits per pixel of non-redundant content (image information + sensor noise information). This is a necessary corollary of the observable lossless compressibility of RAW files. The articles I cited earlier go into the math behind that deeper than either of us will probably care to go, especially the PDF.

So we have 8 bits or less per pixel of actual image information going into the RAW conversion process, which is hiding in 12 bits of RAW data. That data goes into the RAW converter, which expands the 12-bit RAW to 48-bit RGB. We've agreed that no new information is created by this process, and we've also agreed that editing operations destroy low-order, least significant bits first. So the only remaining issue is where those "real" bits are hiding in the output of the RAW converter.

In the vast majority of cases, RAW conversion is done in such a manner that the range of output values is fairly well-distributed between minimum and maximum. You are correct in asserting that the "real" bits are not simply the highest-order data bits like you would get with simple zero-bit padding, but are cleverly spread around where they will do the most good. But that doesn't mean you'll ever find them in the lowest-order bits of the RGB data, as you would have to do a really extreme levels adjustment to get them anywhere close. Let's say you did a levels adjustment where the output scaled from 0 to 7 in Photoshop's dialog. That would vacate the 5 highest-order bits of any real image information and replace those bits with zeroes, moving the image data to less-significant bits. Instead of real image data living somewhere in bits 9-16 of a given color channel, now it's been relocated to bits 4-11, and bits 12-16 have been filled with zeroes. At this point, we still have 3 low-order bits left to bear the brunt of the entropic losses. But that is an absurdly extreme example; I've never done that, and if I did, I wouldn't care if a few low-order real bits got munched, because those pixels have already been relegated to extreme shadows, and you're going to need a really good printer and custom profile combination to get any detail other than featureless black out of that anyway.

But in any normal scenario where the brightest highlights are (8-bit scaled) level 128 or greater, then the most significant bits of real image information must be somewhere in the most significant bits of the RGB color channels. It is not possible to "split" the bits of real information into noncontiguous groups in a single color channel; they come off the sensor together, and remain that way in the RGB data. If 8 bits of real image information starts at bit 16 in the RGB data, it can't go to bit 12 and stop, be interspersed with some guesswork bits, and then pick up again at  bit 6 and continue down to bit 3. We've already established that a Bayer RAW doesn't contain enough information to precisely define all 48 RGB bits, therefore if the highlights are greater than (8-bit scaled) 127, then the real image information has to be contained somewhere in the topmost 8 bits of the RGB color channels. That means that the lowest-order bits have to be some combination of guesswork and garbage.
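The lossless-compressibility corollary Jonathan leans on can be demonstrated directly (a sketch on synthetic byte strings, not on an actual RAW file):

```python
import os
import zlib

# Losslessly compressing data gives an upper bound on its information
# content: N bytes compressing to M bytes means at most 8*M/N bits per
# byte of non-redundant information was present.
def bits_per_byte(data):
    return 8 * len(zlib.compress(data, 9)) / len(data)

smooth = bytes(i % 256 for i in range(100_000))  # highly predictable ramp
noise = os.urandom(100_000)                      # essentially incompressible

print(bits_per_byte(smooth))  # far below 8 bits/byte: little real information
print(bits_per_byte(noise))   # close to 8 bits/byte: nearly all information
```

The same reasoning applied to a 12-bit RAW file that compresses losslessly to roughly two-thirds of its size is what puts the "8 bits or less per pixel" ceiling on its actual information content.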
Title: Is This The End Game?
Post by: Tibor22 on August 09, 2005, 02:59:04 pm
I would imagine that sensors can at least double the number of megapixels from what we have today (35mmF from ~16mp to >30mp and MF from ~40mp to >80mp) before the laws of physics step in and end it.

I worked for a mini-computer company in the 1980s, and their 1 MB memory board was 17" square and contained approximately 100 ICs. Now we have 1 GB SODIMMs.

I realize that this is apples vs. oranges. But you just shouldn't underestimate future (next 2-5 years) technology.
Title: Is This The End Game?
Post by: ijrwest on July 20, 2005, 05:21:27 pm
I think Bernard has a good point about image manipulation causing a loss of 'real' information. It's not about rounding errors in floating-point arithmetic. Suppose you take a picture of a test chart using a 12mp camera (say a 1Ds) and a 3mp camera (say a D30). Now downsample the 1Ds image 2:1 to make a 3mp image. Then subtract this from the D30 image. What we then have is an 'error image' for the D30. You could measure the magnitude of the error as the standard deviation of the difference over all the pixels.

Now apply your image manipulations to the D30 image to make it 'better' and check the error again. I think Bernard is saying the error gets bigger, even though the image might look better.
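Iain's error-image metric might be sketched like this (pure Python, with made-up five-pixel lists standing in for the downsampled 1Ds reference and the D30 frame):

```python
import math

# Standard deviation of the per-pixel difference between a reference
# image and a test image -- Iain's proposed "error image" magnitude.
def error_sigma(reference, test):
    diffs = [r - t for r, t in zip(reference, test)]
    mean = sum(diffs) / len(diffs)
    return math.sqrt(sum((d - mean) ** 2 for d in diffs) / len(diffs))

ref  = [100, 102, 98, 105, 99]   # hypothetical downsampled 1Ds pixels
test = [101, 100, 99, 107, 95]   # hypothetical D30 pixels
print(round(error_sigma(ref, test), 2))
```

Running the same metric before and after a round of edits would show whether the 'improved' image has in fact drifted further from the reference.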

Iain West
Title: Is This The End Game?
Post by: billh on July 16, 2005, 08:59:01 pm
“but my guess is that the P45 will push even the superb Zeiss lenses on a Contax to their limits.”

Michael, how do these lenses compare to the Leica lenses (and, for that matter, to my Zeiss lenses for the Rollei 6008)? This digital business fascinates me, but it doesn't seem as simple to comprehend as in the old days, when an image-quality boost from a new higher-resolution film or lens was readily understandable. A friend came over to shoot some lens tests because he is interested in both a new Leica lens I have and a 1Ds2. We used a variety of Canon lenses (all primes) and Leica lenses on the 1Ds2, and it appears the sensor is the limiting factor here (at least in the series we did), with a resolution of 50+ line pairs (definitely less than 60 lp). The 1Ds2 resolution was a good bit higher than my 1D2 resolution. The film resolves higher still, but a print of the photographed resolution charts is far sharper from the digital camera. I know you see the 22MP medium-format quality ratcheted up a notch over the 1Ds2 - this is the same as, or similar to, the old days with 35mm and medium format, and not just because the 22MP Phase One sensor has higher resolution than the 1Ds2 sensor - or is it both larger and higher-resolving? I assume, as the sensor size from a company remains the same but the MP count increases, that resolution increases too? I'll bet you a buck you end up with the new 39MP wonder! It is just as hard to resist as better lenses were in the old days. It is also a tad more costly than trying a newly issued film....
Title: Is This The End Game?
Post by: billh on July 17, 2005, 10:18:54 am
http://www.luminous-landscape.com/reviews/shootout.shtml (http://www.luminous-landscape.com/reviews/shootout.shtml)

That is exactly what I see from the digital cameras, but I’m curious about the digital resolution - where it may go (idle curiosity) and how MP relates to resolution. When we photographed the resolution charts, we saw 50+ lp from the 1Ds2, and 25 - maybe up to 30 lp from the 8MP 1D2. However I read reports saying the 6.1 MP sensor in the Epson RD-1 shows 40+ LP resolution (I hate paying $3000 for a 6MP camera which by all reports is prone to a variety of problems, but I would love to have a small camera I could carry and use my M lenses on). In the past you could use a fine grained film with x resolution, and try different developers and development methods to achieve an intensification of the light-to-dark transition lines (edge sharpness). What now - simply employ more MP, or does pixel size, etc. come into play when we talk about resolution and ultimate image quality? Is the smaller 12MP Nikon sensor capable of higher resolution than the 16.6 MP 1Ds2?

He has some interesting comments about digital resolution:

Part 1:
http://www.imx.nl/photosite/japan/epsonrd1/epsonrd1.html (http://www.imx.nl/photosite/japan/epsonrd1/epsonrd1.html)

Part 2:
http://www.imx.nl/photosite/japan/epsonrd1/epsonrd1B.html (http://www.imx.nl/photosite/japan/epsonrd1/epsonrd1B.html)

http://www.imx.nl/photosite/comments/c007.html (http://www.imx.nl/photosite/comments/c007.html)

“At 50 lp/mm, close to the resolution limit of the 1Ds”
Where do you find this information? I don't have my 1Ds anymore, but using the Canon 35 f1.4, 85 f1.2, 135 f2.0, 180 f3.5 Macro, 300 f2.8 IS, and Leica (via adapters) 50 f1.4 and 180 f2.0 (my favorite) lenses, I see 50, or maybe a touch more, line pairs per mm. The 8 MP 1D2 was a lot lower, and I remember when I first began using it, I missed the resolution from my 1Ds. What is the resolution of the 1Ds2? And, if 50+ lp is the max resolution figure, are the current lenses not just good enough, but is any lens capable of, say, 60 lp resolution going to work equally well (subtleties aside) on the 1Ds2?

What is the resolution of the Canon 16.6 MP sensor, and the Phase One 22 and 39 MP sensors?
Title: Is This The End Game?
Post by: ddolde on July 17, 2005, 06:00:35 pm
Clark's figures are seriously inflated.  He claims you need 31 mp just to match 645.  That's hooey.

My Kodak 16mp back easily outperforms 6x6 scanned film. Not just in resolution but in dynamic range, ease of use, and general image quality.

And pleeeze....Ken Rockwell reviews cameras he has never even held.  He is not to be taken seriously.  (Sorry Ken)
Title: Is This The End Game?
Post by: jani on July 18, 2005, 08:42:29 am
Quote
Each grain of silver in a fine grain film is 1 to 2 microns in size, while a digital sensor may be 5-8 microns. One would assume from this that film can outresolve digital.

But, any individual grain can either be on or off, black or white. It takes 30-40 grains in a random clump to properly reproduce a normal tonal range. On the other hand each individual pixel can record a full tonal range by itself.

This reminds me of another similar argument about the resolution of PPI on-screen versus (raster-based) DPI on paper.

People who didn't think things through claimed that there were no monitors capable of matching the color visuals of a 600 DPI printing press. Yet the printing press in question was a rasterized CMYK process, and the screen in question was the IBM 22" flat panel with over 200 PPI and 16.7M colors per pixel.

It was a stupid argument to begin with; the processes are so fundamentally different. And the same goes for film vs. digital.

Quote
It always amazes me when people defend theoretical positions which are clearly contradicted by reality. Working photographers with experienced eyes know what they are seeing, and so do their hyper-critical clients who are paying the bills. When someone tells them that the evidence of their eyes is wrong, all one can do is smile and shake one's head. The sad part, though, is when people who don't have the direct personal experience to contradict the theoreticians are intimidated into believing them.
My take on this is that those theoretical positions are - to put it in an all-too-polite way - incomplete, and therefore irrelevant even for theoretical use.

Those "theoreticians" aren't worth their salt, and shouldn't pose as experts on a subject matter they clearly haven't studied well enough.

This goes for me, too, when I'm out of line. :cool:
Title: Is This The End Game?
Post by: BobMcCarthy on July 18, 2005, 03:26:46 pm
It's a moot point when the A-to-D conversion (scanning) is such a weak link, and when the popular method of printmaking today is an inkjet printer (a D-to-A device).

Of course the opposite is true when projecting transparencies vs digital projection.

FWIW,

Bob
Title: Is This The End Game?
Post by: Mark D Segal on July 18, 2005, 06:51:28 pm
Bernard,

Interesting that you mention the Fuji S3; I was at a magazine shop on the way back from the gym this afternoon and was browsing a French digital imaging magazine (forget the name) where they reported test results they had conducted on the Canon 1Ds-2, the Nikon 1Ds and the Fuji S3; in all three of the sets of test images they did for color rendition, sharpness and luminosity the Fuji S3 came out on top. For two of the tests the differences were visible in the magazine images. They used the same Sigma lens on all three cameras. They thought the camera body was lacking in some respects - I didn't read all the detail - but their comments on the quality of the sensor are indeed interesting. It is, however, only 6 MP, so regardless of the quality it would put some constraint on image size, especially with cropping. But the technological direction indeed looks promising from what that review article was saying and showing.
Title: Is This The End Game?
Post by: Ray on July 18, 2005, 10:13:26 pm
Quote
Resolution in a Bayer sensor is perhaps not 3 times lower, but the information captured is without any possible doubt 3 times less.
Bernard,
I think the statement would have been true if you'd used the word 'data' instead of 'information'. A 3 megapixel Foveon sensor has 9 million collection points. A 3 megapixel Bayer-type array has 3 million collection points. The interpolative algorithms of the Bayer sensor go some way to reducing the gap, so that a Foveon-type sensor with 3x the photodetectors has approximately 1.4x the resolution, rather than the expected 1.7x (i.e. the square root of 3).
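The square-root relationship behind that 1.7x figure is just that linear resolution scales with the square root of the photosite count, since photosites cover a two-dimensional area (a one-line sanity check, my own sketch):

```python
import math

# Linear resolution scales with the square root of the photosite count,
# because photosites are spread over a two-dimensional area.
def linear_gain(photosite_ratio):
    return math.sqrt(photosite_ratio)

print(round(linear_gain(3), 2))  # 1.73 -- the "expected" gain for 3x the sites
```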
Title: Is This The End Game?
Post by: samirkharusi on July 19, 2005, 01:32:36 am
I recently made a few prints between 3 ft and 6 ft wide from a 1Ds. Laughable that I would have even contemplated doing that from film. And where were all you res buffs when Kodak discontinued Ektar 25? IMHO that's the only color film that approached the creamy skies of digital, and it also resolved close to 200 lp/mm, reputedly the highest-resolution color film ever mass-produced. Yet, today, I still prefer a 1Ds print to an Ektar 25 print. I suspect that's because the old lenses were simply not as sharp as current lenses. We always seem to come back to lenses, at this point in time. Today's common lenses are probably very well served by 20 megapixel sensors. Nevertheless it is possible, today, to make diffraction-limited f8 lenses, at least in the longer focal lengths. I used to use one in my research work way back in 1970. Such a lens would justify a 35mm-format array having between 100 and 200 megapixels. I really do not see much of an end game any time soon, the initial topic of this thread. Such a sensor+lens combination would very likely produce prints superior to today's 4x5 film cameras. Will it stop there? Not IMHO. Because soon thereafter somebody will come up with lenses that are diffraction-limited at f5.6... We also seem to prefer larger and larger prints as the years pass, because technology makes them affordable. So the pros will keep on striving for ever larger prints from smaller and smaller cameras. I bet Captain Kirk can take a gigapixel+ image with a handheld camera  :p
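Samir's 100-200 megapixel figure for a diffraction-limited f/8 lens checks out on a napkin (my sketch; it assumes 550 nm green light and the common diffraction-cutoff approximation 1/(wavelength x f-number)):

```python
# Photosites needed for a 36x24 mm sensor to match a diffraction-limited
# lens. Cutoff spatial frequency ~ 1/(wavelength * f-number); Nyquist
# sampling requires 2 samples per cycle.
WAVELENGTH_MM = 0.00055  # 550 nm green light

def megapixels_to_match(f_number, width_mm=36, height_mm=24):
    cutoff_lp_mm = 1 / (WAVELENGTH_MM * f_number)  # ~227 lp/mm at f/8
    samples_per_mm = 2 * cutoff_lp_mm
    return width_mm * samples_per_mm * height_mm * samples_per_mm / 1e6

print(round(megapixels_to_match(8)))    # roughly 180 MP at f/8
print(round(megapixels_to_match(5.6)))  # considerably more at f/5.6
```

So a diffraction-limited f/8 lens really does sit near the top of his 100-200 MP range for a 35mm-format array.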
Title: Is This The End Game?
Post by: Bobtrips on July 19, 2005, 09:52:14 am
That word "natural".  

Is it being used in relation to what one sees in the real world?  

Or is "natural" what one is used to seeing when using a more familiar technology (film prints)?
Title: Is This The End Game?
Post by: BernardLanguillier on July 19, 2005, 07:10:47 pm
Quote
That's where the quibbling starts to crop up; information and data are not quite the same thing. Data can contain information, but if there is less information than data, the data can be compressed down to approximately the size of the actual information it contains.
Jonathan,

You are correct, but this distinction isn't really relevant to the discussion at hand, is it? The same gap between data and information will theoretically be present on Foveon and Bayer sensors, right?

Cheers,
Bernard
Title: Is This The End Game?
Post by: BernardLanguillier on July 20, 2005, 09:47:00 pm
Quote
No, my analogy is accurate as-is. When you manipulate an image, the rounding errors and entropic losses are introduced in the least significant bits, and gradually work their way into the more significant bits as one performs more edits to the image data. The whole point of 16-bit editing is to keep the rounding errors and other entropic reductions in the bits that are made-up anyway, so losing some of them does not compromise the actual information.

Another way of looking at it: if you edit a 16-bit image to the point that only every eighth level is populated, you have invalidated or lost the least significant 3 bits of image data (2^3 = 8). If you continue editing until only every 32nd level is populated, you have now invalidated or lost the least significant 5 bits (2^5 = 32). Since the true image information is contained in the most significant bits of the image data, you have to lose/invalidate approximately 7 bits worth of image data (toothcombing the histogram so that only every 128th level is populated) before you start corrupting or losing any of the real image information. That's pretty tough to do; a sensible workflow (convert RAW, adjust levels/curves, moderate color tweaks, and sharpen) is only going to introduce 1-3 bits of entropy losses (maximum toothcombing of the histogram to every eighth level or so), which still leaves you at least 3-4 bits worth of buffer between the real image information and the entropic garbage.

Digital audio editing works the same way; it is common to record with either a 16 or 24-bit DAC, pad the data with zeroes to make it 32-bit, edit, and then downsample to 16 bits for the final output. If done correctly, the error of greatest magnitude in the final output data will arise from rounding to the nearest 16-bit value. All of the entropic losses and rounding errors introduced during editing are buried in the least significant 4-8 bits of the data bits that are thrown away anyway. The rounding error inherent to downsampling to 16-bit audio is far greater in magnitude, but is still acceptable because the 16-bit audio format is good enough, and is still the best it can possibly be.
Jonathan,

Interesting discussion, thank you for the feedback.

Just a confirmation: I assume that you call the most significant 8 bits the "real image information" based on the assumption that both display and print are 8-bit devices?

Besides, one question. When a 12-bit RAW image is converted into a 16-bit TIFF, my understanding is that a mapping is performed so that the max value in 12 bits (1111 1111 1111) becomes the max value in 16 bits (1111 1111 1111 1111). One could think this would leave 4 bits of unused values throughout the range, but my understanding is that this is mostly not the case, since:

- the demosaicing is basically an averaging process whose output benefits from the additional set of values available in 16 bits compared to the 12-bit input,
- the gamma application,
- ...

-> the result of the RAW conversion is probably, most of the time, a fully populated 16-bit file, not just a file whose 12 bits are populated with zeros added.

Do you agree with this?

Although I agree with you that the least significant bits will be affected first, those bits do contain useful image information if my understanding of the 12 -> 16 bit mapping is correct. I agree that the impact of these least significant bits is by definition very small, but IMHO actual image information will be lost even when working in a 16-bit mapping of an image generated by a 12-bit sensor.
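Bernard's point about the 12-to-16-bit mapping can be made concrete (a sketch of only the linear scaling he describes; real converters additionally apply demosaicing and gamma, which is what fills in the gaps):

```python
# Linear 12-bit -> 16-bit scaling: the maximum 12-bit code maps to the
# maximum 16-bit code. Plain zero-padding (value << 4) would instead
# leave the top of the 16-bit range unreachable.
def scale_12_to_16(v):
    return v * 65535 // 4095

print(scale_12_to_16(4095))  # 65535: full scale maps to full scale
print(scale_12_to_16(1))     # 16: adjacent 12-bit codes land ~16 apart
# Scaling alone populates only 4096 of the 65536 output levels; it is
# the later demosaicing/gamma math that fills the in-between values.
print(len({scale_12_to_16(v) for v in range(4096)}))  # 4096
```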

Cheers,
Bernard
Title: Is This The End Game?
Post by: BernardLanguillier on July 21, 2005, 10:32:14 am
Quote
I sense that concepts of bit depth, image compression and demosaicing algorithms are being commingled - I can see the relevance of bit depth and data compression in a discussion about apparent resolution (i.e. printed image detail), but with today's advanced demosaicing algorithms it is less clear to me how this impacts apparent resolution.
Mark,

It would appear to my uninformed ears that the lossless compressibility of an image is not by itself proof of the existence of a gap between data and actual information. It can result from the legitimate presence of patterns or uniform areas in an image.

Compressibility is therefore also not enough to prove that the demosaicing algorithms are bad (or good) at extracting real colors and at delivering an image whose resolution is close to the theoretical limit achievable by a perfect sensor.

To summarize, compressibility can be related, but by itself it is no indication one way or the other.

I am sure that Jonathan and John will provide their view on this too.

Regards,
Bernard
Title: Is This The End Game?
Post by: Jonathan Wienke on July 22, 2005, 10:40:04 am
That approach makes sense; a 60MP Bayer sensor would certainly be far enough beyond the capability of any current 35mm-format lens to eliminate the need for an AA filter. After doing color interpolation, lens corrections, and noise reduction, downsampling to 50% of the original pixel dimensions would leave one with an extremely high-quality 15MP image relatively free of digital processing artifacts.
Title: Is This The End Game?
Post by: Jonathan Wienke on August 18, 2005, 09:36:17 pm
Quote
To Jonathan in particular,
   thanks for a lot of useful contributions in this thread, whose end seems not to be in sight.
You're welcome.

Quote
a) I am glad that Jonathan might be joining my club advocating "pushing pixels smaller than lenses can resolve to eliminate aliasing ("oversampling"), then using downsampling or more selective noise reduction processing as needed to control visible print noise."

As long as dynamic range is not objectionably compromised by doing so, and the chip can be read out fast enough that the extra photosites don't objectionably slow down the frame rate, I'm in favor of that as a strategy. I think we're a few generations of sensors and supporting electronics (DIGIC V, anyone?) away from that being a viable approach, but will go out on a limb and predict that such a state of affairs is less than 10 years away. Contemplate, if you would, a 200MP 4x5 full-frame back...
Title: Is This The End Game?
Post by: jcarlin on July 26, 2005, 05:28:08 pm
Quote
Quote
That means that the lowest-order bits have to be some combination of guesswork and garbage.

Would that be the reason why the underexposed areas suffer from noise and lack of detail and resolution?
In short, yes.
Title: Is This The End Game?
Post by: Ben Rubinstein on July 16, 2005, 09:32:07 pm
Forget anything else, why doesn't someone come through with a decent solution to expanded DR in the highlights? As MR has pointed out, it's not necessarily about the pixel count any more, even at the high end.
Title: Is This The End Game?
Post by: Ray on July 17, 2005, 08:46:27 am
Quote
Good science can accurately predict the results of real-world applications of theories and premises. Clark's science (or at least his math) is severely flawed because real-world comparisons between digital and film deviate dramatically from Clark's claims and predictions. He's not an example of a credible source of information or good scientific analysis.
I never realised that Clark had so little credibility amongst you professionals. I wonder why he goes to so much trouble providing sample images which clearly back up his claims. Does he have shares in film manufacturing companies? Is he on an ego trip, trying to be controversial, or is he just incompetent?

There's a tutorial on Michael's site here (http://www.luminous-landscape.com/tutorials/dq.shtml) by Miles Hecker and Norman Koren. Do they also lack credibility?

You will notice from this tutorial that Fuji Velvia, despite its reputation as a sharp film, is not a high resolution film. It's an oversaturated, over contrasty film that is brilliant up to around 20-30 lp/mm then takes a steep dive.

At 50 lp/mm, close to the resolution limit of the 1Ds (actually slightly higher), the MTF of Fuji Velvia is an unimpressive 35%. Even a good 35mm lens is not going to be too hot at 50 lp/mm, and an MF 6x7cm lens even worse.

We don't generally get MTF charts of lenses at 50 lp/mm because they would be just too embarrassing. You'd get Canon wide angle lenses that had zilch MTF at the edges. Not good for sales.

My guess is, by using Fuji Velvia as the choice of film for a shoot out between the 1Ds and the Pentax 6x7, you guarantee that nothing much beyond 40lp/mm on the film is relevant. Maybe there will be something there at 50 lp/mm but barely visible through the miasma of grain.

The fact is, in such comparisons the digital camera is at a disadvantage because you can't change the sensor. You're stuck with it. But you can change the film in the old fashioned camera.

Unless there's an extremely wide gulf in quality between the two cameras being compared, you can get almost any results you want with the film camera by choosing an appropriate film.

If someone would be prepared to ship me their old 1Ds, I can almost guarantee that I could demonstrate that my old Canon 50E 35mm camera can produce greater resolution, using the appropriate film of course.
Title: Is This The End Game?
Post by: Jonathan Wienke on July 17, 2005, 06:10:11 pm
Quote
The differential factors are:

1. Subjects. In the studio, digital is fine. With a lot of low-contrast foliage in landscapes, digital simply breaks down due to low-pass filtering and noise reduction (especially with Canon).

2. Printing size. At 13x19 there is little difference to 95% of viewers (although not me), but at 30x40 those of digital origin are full of artifacts, while those from 4x5 film originals can still be examined with a magnifying glass for details from the long tail of the film's extended MTF curve.
Regarding point 1: It's only true if you don't know how to properly post-process the files. I shoot landscape and other stuff with lots of fine detail on a regular basis, and digital (1Ds and 1D-MkII) captures much more fine detail than I ever got from 35mm film.

Regarding point 2: This is utterly ridiculous; you're comparing 35mm digital and 4x5 film, and concluding film is better. If you're going to compare apples to apples, compare the 1Ds and 35mm film. 4x5 film has nearly 15X the image recording area of the 24x36mm frame size of the 1Ds, and if you're using a reduced-format DSLR like the 20D, D70, etc. the mismatch becomes even more ridiculously lopsided.

As to the rest, what you're saying is that you prefer the look of film's image artifacts to digital's obviously higher image fidelity; and you're also confusing the obvious film grain in the 6x7 scans with true detail. Your statement that "Too little noise in landscape and nature photography creates a sense of steadiness, unnatural and lack of depth or dimensionality" is simply absurd. When I look at a waterfall or tree with my eyes, I don't see noise patterns of film grain or anything else; I see the tree or waterfall. Film grain and sensor noise are aberrations introduced by the camera that veil and obscure the underlying image, and most people agree that the less such obscurations intrude into the final image, the better, unless they are introduced deliberately for a creative effect. As a professional photographer, I prefer to start with an image that is as faithful to the original scene as possible, and add only whatever creative effects I deem appropriate to the image, rather than have them imposed on me by the idiosyncrasies and limitations of the image capture device.

As to digital lacking "depth" or "dimensionality", again that's a failure to understand the difference between digital capture and film, and how to properly post-process digital images to bring out their best.

As to citing the DPReview forums as a source of reliable and accurate information, I'd recommend not doing that in the future, as the DPReview forums contain more misinformation and outright foolishness than pretty much anywhere else on the web. It's the Weekly World News of photography. The main site is accurate and well-informed, but the forums are beyond help.
Title: Is This The End Game?
Post by: BernardLanguillier on July 18, 2005, 09:16:13 am
Quote
Each grain of silver in a fine grain film is 1 to 2 microns in size, while a digital sensor may be 5-8 microns. One would assume from this that film can outresolve digital.

But, any individual grain can either be on or off, black or white. It takes 30-40 grains in a random clump to properly reproduce a normal tonal range. On the other hand each individual pixel can record a full tonal range by itself.
Michael,

Although I agree with your conclusions, isn't it true that, film being multi-layered, each grain is actually a 3- (or 4-) layer binary device?

This could probably be more or less directly compared to a Foveon device, but most sensors being Bayer-based, their actual resolution is 3 times lower than their naming would suggest.

Please correct me if I am wrong.

Regards,
Bernard
Title: Is This The End Game?
Post by: Bobtrips on July 18, 2005, 06:52:32 pm
Quote
To my eyes the image is FULL of interpolation artifacts and gross oversharpening. It tends to hide the low-contrast details in hair and foliage, and makes them jump out suddenly when the local contrast passes a threshold.
Leping -

How about helping me out here.  

What are you calling "interpolation artifacts and gross oversharpening"?
Title: Is This The End Game?
Post by: Bobtrips on July 18, 2005, 09:41:15 pm
Quote
I see what leping is talking about. The image has a slightly brittle, overly bright and contrasty appearance as opposed to the more mellow and natural, smooth gradations one would expect with MF film. The image 'jumps out' at you in a startling fashion.
So one would need to apply a bit of blurring to the image and decrease the contrast to make it more film like?
Title: Is This The End Game?
Post by: Ray on July 19, 2005, 01:12:43 am
Quote
Clark stated clearly that there is a huge resolution gap between B&W, color negative, and pro slide films, and I agree both the D2x and 1DsII are beyond the 645 print-film level. However, for Velvia it is very different. What made me speak was nothing but Michael's stretch that the P45 will beat scanned 8x10 in all cases.
Well, once again I would agree with much of lepingzha's observations, but the counterbalance has to be made.

Velvia should produce very impressive results with 8x10. Within the resolution limits of most 8x10 shots at say f64, Velvia actually boosts the contrast. If the large format lens has an MTF of say 50% at 15 lp/mm, Velvia will make it 55% or more.

Subtle improvements can be made at great cost, whether it's film or digital. Digital is winning because overall, taking all costs into consideration, it's a more economical medium.

As Bill Clinton would have said, "It's the economy, stupid".

I've been through an obsessive stage of hi-fi fascination. I know how expensive it is to get marginal increases in sound fidelity.

Ultimately there's no point. In the medical arena there's always a point because lives are at stake.
Title: Is This The End Game?
Post by: BernardLanguillier on July 19, 2005, 09:52:59 am
Quote
"but the information captured is without any possible doubt 3 times less."

Bernard,

This is absolutely not the case. A Bayer matrix reduces resolution by a maximum of 30%. This is basic digital imaging 101. Please do some reading before making categorical statements which are seen to be untrue by anyone that has a bit of exposure to the available literature.

Michael
Michael,

Thanks for your feedback. I don't think that you read me well enough though.

Pure information theory dictates that a Foveon sensor with 3 million pixels has 9 million photosites, which is 3 times more than a 3-million-pixel sensor using Bayer interpolation. It will therefore deliver 3 times more information.

I am fully aware that the resolution will not be 3 times higher, since resolution is very much influenced by the tone captured in each and every one of the 3 million pixels of the Bayer sensor, just as it is by the Foveon sensor. But I didn't write "resolution", I wrote "information", which is the same as "data" in my mind.

Within its range of sensitivity (which will of course be lower), the Foveon sensor has the clear potential to offer much better color purity and far fewer artifacts thanks to this. To what extent this will be visible remains to be proven, but I never wrote or implied anything on this, did I?

Anyway, the underlying fact is that resolution is by itself not a good enough metric to define the amount of information captured by a digital imaging sensor. Another 101, I guess...

Regards,
Bernard
Title: Is This The End Game?
Post by: etmpasadena on July 19, 2005, 01:43:38 pm
Real Quick:

(1) Foveon (or its senior scientists) have never claimed anything like a 3x resolution advantage for the Foveon sensor over a Bayer. Don't confuse what Foveon fanatics say with what the company actually says.
(2) Foveon has only made three main claims with regard to its X3 chip: (a) greater per-pixel sharpness (they don't use the word resolution); (b) almost 100% immunity from moiré; (c) more pure color (whatever that means).

People can argue about color. But comparing an SD9/10 against a D30 will show the sharpness difference. And you can shoot fabrics all day without worrying about moire.

People should keep in mind that Foveon's first single capture chip was produced in 1999 (with intellectual work done much earlier). That's a long time ago. That basic 1999 design was what was shopped around to the camera companies. I happen to have photos from that prototype. I can assure you that in 1999/2000 it was way ahead of what the Bayer camp had. Of course Foveon still needed about 1.5 years to refine and produce their first SD9 chip. Of course now that Bayer has advanced as it has the advantages of X3 technology probably make less sense to most photographers. But back in 1999 it really did make sense.

I should add that in February of 2005 ISO came up with its definition of pixel--the new definitions don't change things much or clarify anything regarding how CFA/Foveon/Fuji sensors count pixels or photosites. But they do say that while pixels can be counted, resolution can only be measured, and the two are not the same. Of course on this board we all know that. But it's nice for it to be official.
Title: Is This The End Game?
Post by: Jonathan Wienke on July 20, 2005, 10:33:18 pm
Quote
I think Bernard has a good point about image manipulation causing a loss of 'real' information. It's not about rounding errors in floating point arithmetic. Suppose you take a picture of a test chart using a 12MP camera (say a 1Ds) and a 3MP camera (say a D30). Now downsample the 1Ds image 2:1 to make a 3MP image. Then subtract this from the D30 image. What we have is then an 'error image' for the D30. You could measure the magnitude of the error as the standard deviation of the difference over all the pixels.
That would be totally useless, because if you didn't do the exact same sharpening, curves, creative tweaks, and other processing to both images, there would be a difference as a result, and you'd have no real way of proving which camera was "right". You can make a difference mask, but that cannot tell you which image is right, or even if either image is right; it can only tell you how much the two images differ.

The entropic data loss Bernard was referencing is all about rounding errors, and functions where multiple input values can result in the same output value. Once data has passed through such a function, it cannot be known with certainty which input value was responsible for the given output value. This does start in the lowest-order bits, and gradually works its way into the higher-order bits as more edits (curves, levels, and suchlike) are performed.

For example, if you're doing a curve adjustment in 8-bit mode where levels 8, 9, 10, and 11 all get mapped to the output value of 5, you have just destroyed the least significant 2 bits of deep shadow values because you are mapping 4 input values to 1 output value. As you continue to perform more edits, you're gradually corrupting and destroying the information in progressively higher-order bits, but it's not a completely linear progression. IIRC, if you have a sequence of edits that each cause 2 bits worth of entropic loss, you need to do two edits to lose 3 bits worth of information, four edits to lose four bits, sixteen edits to lose five bits, and the progression continues with each additional bit requiring the square of the previous bit's number of edits to be destroyed.

Most Photoshop edits cause 2 bits of entropic loss or less, so when editing in 8-bit mode this loss can become significant enough to manifest as visible artifacts, usually banding or posterization. But when editing in 16-bit mode, you can do many more edits with a higher entropic loss per edit before visible degradation occurs, because the entropic degradation always affects the lowest-order bits first, and the image information always lives in the highest-order bits. So if you can add extra bits to the data, even if it's simple zero padding (which is what happens when you convert an 8-bit file to 16 bits), you can edit in 16-bit mode and destroy less real image information while doing so.
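The 4-into-1 mapping described above can be sketched in a few lines of Python; the curve function here is hypothetical, implementing just the example from this paragraph:

```python
# Minimal sketch of entropic loss: a curve that maps several input levels
# to one output level cannot be undone afterwards. Hypothetical 8-bit
# "curve" implementing the example above (levels 8-11 all become 5).
def crush_shadows(v):
    return 5 if 8 <= v <= 11 else v

levels_in = list(range(256))                             # all 8-bit levels
levels_out = sorted({crush_shadows(v) for v in levels_in})

# 256 distinct inputs collapse to 252 distinct outputs; the four collapsed
# states are the 2 bits of deep-shadow information destroyed.
print(len(levels_in), len(levels_out))  # 256 252
```

Once the four states have collapsed, no later edit can tell which of 8, 9, 10, or 11 produced a given 5.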

Here's a practical application of this information theory crap, an experiment you can perform for yourself to see all this in action. Open a JPEG image that is already reasonably well-processed (my little girl pic would be a fine candidate), convert it to 16-bit RGB mode, and do a series of random curve and/or level adjustments to screw it up and then put it right again. An easy one would be a level adjustment where you don't change anything but the gamma control (the third slider, in the middle). Do 10 random gamma tweaks between 0.5 and 2.0, with the last one or two designed to return the image as closely as possible to its original state. Then convert back to 8-bit mode, while recording these tweaks as an action. Save the tweaked image in a new file.

Now reopen the original image, leave it in 8-bit mode, and run the action you just recorded. Save as a third copy. Now open the copy that was tweaked in 16-bit mode and compare its appearance to the one that was tweaked in 8-bit mode.

Both files had the exact same number of bits destroyed by the level tweaks, but the bits destroyed in the tweaked-in-16-bit-mode file were the zero bits padded onto the real image information when converting from 8-bit to 16-bit (and thus were no real loss), while the bits destroyed in the tweaked-in-8-bit-mode file were actual image information, resulting in a visible degradation of image quality. While you're at it, compare the histograms.
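For anyone who would rather see the arithmetic than fire up Photoshop, here is a rough numerical sketch of the same experiment. The assumptions are mine: a plain 256-level gradient stands in for the image, the "edits" are a chain of gamma curves whose exponents multiply out to exactly 1.0 (a no-op on paper), and 8-to-16-bit expansion is done by multiplying by 257:

```python
import numpy as np

def apply_gamma_chain(levels, gammas, bits):
    """Apply a chain of gamma tweaks, re-quantizing to `bits` after each."""
    scale = 2 ** bits - 1
    out = levels.astype(np.float64)
    for g in gammas:
        out = np.round(((out / scale) ** (1.0 / g)) * scale)
    return out

grad = np.arange(256, dtype=np.float64)  # smooth 8-bit gradient "image"
gammas = [0.5, 2.0, 1.6, 0.625]          # exponents: 2 * 0.5 * 0.625 * 1.6 = 1

edited_8 = apply_gamma_chain(grad, gammas, 8)                # edited in 8-bit
edited_16 = apply_gamma_chain(grad * 257, gammas, 16) / 257  # padded first

err_8 = np.abs(edited_8 - grad).max()    # deep shadows get crushed to black
err_16 = np.abs(edited_16 - grad).max()  # rounding stays in the padded bits
print(err_8, err_16)
```

On this gradient the 8-bit round trip permanently flattens the deepest shadow levels to zero, while the 16-bit version returns essentially the original values: the rounding damage landed in the padding, not in the image.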
Title: Is This The End Game?
Post by: Mark D Segal on July 21, 2005, 08:43:47 am
Jonathan, John Carlin, perhaps you can help me with this conundrum (at least in my mind). Notwithstanding all the erudite technical background in this thread, I am still having trouble seeing how demosaic algorithms impact image resolution; I think this thread started on the theme of where sensor and lens technology was going to end up in respect of maximizing attainable resolution. I sense that the concepts of bit depth, image compression, and demosaic algorithms are being commingled. I can see the relevance of bit depth and data compression in a discussion about apparent resolution (i.e. printed image detail), but with today's advanced algorithms for demosaicing color data it is less clear to me how this impacts apparent resolution.
Title: Is This The End Game?
Post by: lester_wareham on July 22, 2005, 05:36:28 am
Quote
News of a 39MP medium format back makes me wonder if the digital growth curve, at least in terms of pixel count, is starting to flatten out. I'm finding that with a Canon 1Ds Mk II it's the available wide-angle lenses and my ability to hand hold that's the limiting factor, not the sensor.

Anyone any thoughts? Are we approaching the pixel count end game, and where's the practical limits for 35mm and MF?
I guess for 35mm, 25MP is probably the maximum useful image file size in terms of extracting detail; 16MP must already be in the diminishing-returns region.

However, higher-resolution sensors will permit less harsh anti-alias filters, or even none, followed by good-quality digital low-pass filtering and downsampling to the required image size.

The advantage would be no need to sharpen to compensate for the anti-alias filter.

This approach has been used for years in digital conversion electronics (CD players, satellite receiver front ends); it's called oversampling. You still retain all the advantages of low noise in the downsampled image.

This could be how things develop, marketing-wise.
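The oversampling point can be sketched numerically. Everything below is a made-up toy: a 1-D band-limited "scene", 4x oversampling with white read noise, and a crude box average standing in for a proper digital low-pass filter:

```python
import numpy as np

rng = np.random.default_rng(0)

oversample = 4                 # photosites per final output pixel
n_out = 1000                   # final "image" size
x = np.linspace(0, 2 * np.pi, n_out * oversample)
scene = np.sin(3 * x)          # smooth, band-limited detail
noisy = scene + rng.normal(0.0, 0.1, x.size)   # per-photosite white noise

# "Digital low-pass filter and downsample": box-average each group of 4.
binned = noisy.reshape(n_out, oversample).mean(axis=1)
scene_binned = scene.reshape(n_out, oversample).mean(axis=1)

noise_over = np.std(noisy - scene)          # ~0.10 before downsampling
noise_down = np.std(binned - scene_binned)  # ~0.05 after, sqrt(4) lower
print(noise_over, noise_down)
```

Averaging four samples cuts the white noise by about a factor of two (the square root of the oversampling ratio), which is the retained low-noise advantage described above; a real pipeline would use a better filter than a box average to control aliasing.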
Title: Is This The End Game?
Post by: RichardChang on August 09, 2005, 02:43:54 am
Where will the game end? The game won't end; technology will provide more and more, because that's what technology does. Canon can't sell the same old stuff, but they can sell new stuff.

Where will sensors top out? They may not, at least anytime near term. Our society's consumption of technology would have to say "Enough!" for them to top out. I'd think that won't happen anytime soon.

As to the comparison between MF backs and 35mm sensors, it's likely that the sensor technology will evolve to be roughly equivalent, even though the camera-back sensors currently have a signal-to-noise advantage. At the end of the day, the MF backs will have a minimum of double the image area of 35mm, so a digital back will deliver twice the file size (and roughly 1.4x the linear resolution), everything else being equal. The difference should be remarkably like the difference between 35mm and 120 film.

Richard Chang
Title: Is This The End Game?
Post by: Ray on August 19, 2005, 12:57:55 am
Quote
BJL, Ph. D. in and professor of Applied Mathematics, author of various publications in physics/optics journals, and cynic about people who try to bolster their arguments by flaunting academic credentials
Good point! I always like to judge the merits of an argument on the validity of the points made, rather than the academic qualifications of the person making the points.

When I'm incapable of understanding the point, then I'm in a quandary  :D .
Title: Is This The End Game?
Post by: Ray on September 09, 2005, 09:38:43 pm
Quote
Quote
Wrong, or at least irrelevant, because you ignore that noise is a mixture of positive and negative variations around the "true" value, so that when signals are merged there is some cancellation of positive and negative noise values, and total noise increases less than in proportion to the number of signals combined.

BJL,
Okay, so you have just articulated the other probabilistic theory I was referring to. I understand the logic of this, but I still have a problem with your distinction between 'wrong' and 'irrelevant'.

Binning serves a very useful purpose. You can reduce both 'read-out' noise and photonic noise, but at the expense of resolution. This is basically the same advantage that any sensor has with larger pixels, except binning gives you the choice on the same sensor.

If we are talking about photodetectors no smaller than the lens in use can resolve, then my example is not only relevant but 'not wrong'  :D .

However, there is an important concept here, in your argument, that's relevant to oversampling. Lots of little pixels, smaller than the lens can resolve, will not produce more 'over all' photonic noise for the reasons you've just explained. But there could be an increased 'read-out' noise problem, unless of course you start binning the pixels.

Is there any advantage to binning, over a single pixel the same size as the cluster, with regard to chromatic aberration and birefringence?
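The partial cancellation under discussion is easy to simulate. The numbers below are illustrative assumptions only: a mean of 100 photons per small photosite, 5 electrons of read noise per read, and 2x2 digital binning (each site read separately, then summed):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200_000

photons = rng.poisson(100, size=(n_trials, 4))    # shot noise, per site
reads = rng.normal(0.0, 5.0, size=(n_trials, 4))  # read noise, per site
binned = (photons + reads).sum(axis=1)            # digital 2x2 bin

signal = binned.mean()   # ~400: four sites' worth of signal
noise = binned.std()     # ~sqrt(4*100 + 4*25) ~ 22.4: only 2x one site's
snr = signal / noise     # ~18, vs ~9 for a single small site
print(snr)
```

Summing four sites quadruples the signal while the positive and negative noise excursions partially cancel, so total noise only doubles; SNR improves by a factor of two at the cost of resolution. Hardware binning on a CCD, which reads the summed charge once, would cut the read-noise term further still.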
Title: Is This The End Game?
Post by: samirkharusi on July 17, 2005, 12:09:18 am
Physics says that it'll always be a trade-off between effective ISO and resolution. The technology at any point in time just shifts the limiter from the sensor to the lens and vice versa. I am very happy with my Canon primes with focal lengths from 100mm upwards; they can be used wide open even in astrophotography. On a 1Ds (never mind a 1DsII) I have to close down the shorter primes for astrophotography quite substantially to get satisfactory results, e.g. the 50/1.4 has to be used at f/5.6, the 28/1.8 at f/8. I think Canon really has to redesign their shorter primes if they wish to jack up to 40+ megapixels on a 35mm-format sensor; I do not see that their current short lenses can justify 40 megapixels.

I believe that a diffraction-limited f/8 lens (of any focal length) may justify pixels as small as 2 microns square (for Nyquist critical sampling). Smaller than that and there is not much to be gained. I.e., if the pixels had sufficient sensitivity, then a perfect f/8 lens would be served very amply by a maximum of 200 megapixels on a 35mm-format sensor. There's still lots of playroom left, but I think currently the limitation is in the shorter lenses. The pixels also need enhancement in sensitivity, so it remains a race. I once did an experiment using Nyquist sampling (that 200-megapixel equivalent) and much greater over-sampling, to see if one gains anything by going in excess of Nyquist in planetary imaging:
http://www.geocities.com/ultimaoptix/sampling_saturn.html (http://www.geocities.com/ultimaoptix/sampling_saturn.html)
Personally, for landscapes I'd settle for a small set of f8 diffraction-limited primes and a 100 to 200 megapixel 35mm format sensor  :D
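The figures above are easy to check with back-of-envelope diffraction math. The assumptions here are mine: 550nm green light, an ideal (aberration-free) lens, and a 24x36mm frame:

```python
# Diffraction-limited sampling sketch (assumed: 550 nm light, ideal lens,
# 24x36 mm frame). Cutoff frequency of an ideal lens: 1 / (lambda * N).
wavelength_mm = 550e-6
f_number = 8

cutoff_lp_mm = 1 / (wavelength_mm * f_number)  # ~227 line pairs per mm

# Nyquist critical sampling: two pixels per line pair.
pitch_mm = 1 / (2 * cutoff_lp_mm)
pitch_um = pitch_mm * 1000                     # ~2.2 microns

pixels = (36 / pitch_mm) * (24 / pitch_mm)     # pixel count on a 35mm frame
print(f"pitch ~{pitch_um:.1f} um, ~{pixels / 1e6:.0f} MP")
```

This lands at roughly a 2.2-micron pitch and about 180MP, consistent with the "2 microns square" and "maximum of 200 megapixels" figures in the post.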
Title: Is This The End Game?
Post by: samirkharusi on July 18, 2005, 10:28:03 am
Quote
This could probably be more or less directly compared to a Foveon device, but most sensors being Bayer-based, their actual resolution is 3 times lower than their naming would suggest.

Please correct me if I am wrong.

Regards,
Bernard
This is a myth perpetuated by the Foveon crowd. A Bayer array does NOT lower resolution by a factor of 3x, EXCEPT if you are imaging in primary blue or primary red. With any normal, mixed-color subject or lighting there is remarkably little loss of resolution. It's actually very easy to verify by just checking out the resolution charts that DPReview publishes for all the DSLRs. I have found that typically, in white light, the combination of the Bayer array and anti-aliasing filter in a Canon DSLR lowers resolution to roughly 85% of what the pixel pitch should be capable of. I have obtained that percentage from my own chart testing, and it seems to agree with DPReview's measurements. Whether that 85% is due to the Bayer array or the anti-aliasing filter I cannot say categorically. Nevertheless, when you remove the anti-aliasing filter from a Canon DSLR (I have a 20D with the filter removed) the image sharpness at 1:1 display "looks" enhanced. I have not bothered to verify whether it just "looks" sharper or actually resolves more; I suspect a bit of both. I have not chart-tested the 20D without the anti-aliasing filter; perhaps one day when I am really, really bored...
Title: Is This The End Game?
Post by: on July 18, 2005, 07:15:16 pm
"but the information captured is without any possible doubt 3 times less."

Bernard,

This is absolutely not the case. A Bayer matrix reduces resolution by a maximum of 30%. This is basic digital imaging 101. Please do some reading before making categorical statements which are seen to be untrue by anyone that has a bit of exposure to the available literature.

Michael
Title: Is This The End Game?
Post by: lepingzha on July 18, 2005, 11:58:41 pm
Just for completeness, here is the link to the "add noise to make the D2x image more film-like" post in the DPreview forums:

http://forums.dpreview.com/forums....3552930 (http://forums.dpreview.com/forums/read.asp?forum=1021&message=13552930)

I only mentioned that to my eyes the added noise made the image look more natural. This has nothing to do with the general quality of DPreview forum discussions, to anyone with a sound sense of logic. At least nothing was labeled "bull" in the DPreview forum...

Don't get me wrong: I shoot both film and digital, and I have been speaking from my experience with Lightjet/Chromira printing, which is already more tolerant of digital artifacts than inkjet. Raw conversion algorithms and post-processing are crucial to the image's feel, both on screen and in prints, which makes the D2x images much more film-like than those from the 1DsII. My master images have from dozens to near a hundred local-contrast-enhancing masked layers, which applies to both film and digital images; from that process I have learned the limitations of both technologies.

Clark stated clearly that there is a huge resolution gap between B&W, color negative, and pro slide films, and I agree both the D2x and 1DsII are beyond the 645 print-film level. However, for Velvia it is very different. What made me speak was nothing but Michael's stretch that the P45 will beat scanned 8x10 in all cases. Again, this is a discussion group for landscape photography, and over 90% of film-based landscape photographers shoot chromes, not negatives.

Look again at Michael's 1Ds vs. Pentax 67 comparison:

http://www.luminous-landscape.com/reviews/shootout.shtml (http://www.luminous-landscape.com/reviews/shootout.shtml)

He first said the digital crop appears to have lower resolution because it is enlarged too much to match the scanned film crop. Two paragraphs later he concluded that the 1Ds has better resolution than the scanned chrome. Maybe this is just the logic and science of the new age?

Drum scans emphasize film grain. Try the Imacon FlexTight 949, where they made improvements over the 848/646 to make the grain much less visible while retaining the image detail. Their FlexTouch software dust-and-scratch removal feature is better than Digital ICE and does not slow the scans down, in my experience too, so my scanned skies are almost as clean as those from digital.

I am a medical imaging expert working in a major R&D lab. What I have learned is that any sharp cut of a long response-curve tail, be it the spatial frequency response expressed as MTF or a natural sound's extended harmonic structure, creates unnatural artifacts. Lens and film both have long MTF tails, and the 10um grain size is a statistical mean, not the size of the smallest contributing elements.

As I mentioned, this is exactly why, after 20 years, audio engineers are learning that the 40-80kHz components are so crucial to the definition and impact of the bass tones, building a sharper wavefront, so that their inclusion in SACD or DVD-A is essential: our ears cannot hear the pure harmonics, but our skin will certainly feel the sharper rising edge of the air-pressure change. The abrupt cut of the MTF in digital capture at the Nyquist frequency (as well as the total artifacts beyond it) has an effect similar to the normal CD's abrupt cut of audio frequencies at 20kHz, which contributes to the harsh sound well known to many (not the MP3 generation).

The 30% resolution loss from Bayer interpolation is statistical, and as with any statistic there are exceptions. For an example of interpolation failure, check out Clark's color-pattern capture comparisons:

http://www.clarkvision.com/imagedetail/fil...xl.digital.html (http://www.clarkvision.com/imagedetail/film.vs.6mpxl.digital.html)

Bye everyone.

Best regards,
Leping Zha, Ph.D.
www.lepingzha.com
Title: Is This The End Game?
Post by: BernardLanguillier on July 20, 2005, 12:11:45 am
Jonathan,

Thanks for these examples.

I was already aware that there are ways to improve the appearance of images taken with a Bayer sensor. I say appearance, since each of the transformations you apply does of course reduce the actual informational content of the image. Fortunately, this loss is more than compensated for by the morphing into a more eye-pleasing look.

Your previous point was about the poor informational content of the Foveon-generated data. That's the part I am most interested in, to be honest with you.

Cheers,
Bernard
Title: Is This The End Game?
Post by: BernardLanguillier on July 21, 2005, 01:54:12 am
Quote
This is pixel peeping to the Nth degree. That's fine with me. Just bear in mind that the final results on the print will probably ignore most of the subtleties discussed, and if they don't, our eyes will.
Ray,

I think that we all agree with you. The engineer in me is speaking in this thread; the photographer is sleeping somewhere...

As I said, I am in no way an expert in image processing or binary logic. I have only been trying to confront my common sense and understanding of basic physical mechanisms with the knowledge of those in the know.

Cheers,
Bernard
Title: Is This The End Game?
Post by: BernardLanguillier on July 21, 2005, 06:54:42 pm
Quote
If you read up on information theory, you'll discover that not only is this indeed the case, but it is provable to the point of being axiomatic. Whether it is the result of a degree of uniformity or redundancy in the image data (which is always the case in real-world images) or some other factor is irrelevant. The legitimate presence of patterns or uniform areas in an image still reduces the amount of information required to describe the subject with whatever level of accuracy you choose to specify.
Jonathan,

You are of course correct, but my point about the lossless compressibility of an image was that it doesn't prove anything about the gap between information and data RESULTING from the demosaicing alone, which was our initial topic.

I do also agree with you that no information will be created in the process of demosaicing and gamma application from 12 to 16 bits, but my only point was that it is impossible to measure this by checking the lossless compression rate, because of the other reasons why the initial 12-bit image itself can be compressed.

I am well aware that legitimate areas of uniformity contain less information than the uncompressed data size would suggest, and this is the very reason I was evoking.

Regards,
Bernard
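The caveat above, that real images compress for reasons having nothing to do with demosaicing, is easy to illustrate with synthetic data. This toy example is my own: a perfectly uniform "sky" versus pure random noise, both run through zlib:

```python
import random
import zlib

random.seed(0)
flat = bytes([128] * 65536)                                 # uniform "sky"
noise = bytes(random.randrange(256) for _ in range(65536))  # pure noise

flat_size = len(zlib.compress(flat))    # collapses to a few dozen bytes
noise_size = len(zlib.compress(noise))  # barely shrinks at all
print(flat_size, noise_size)
```

Legitimate uniformity compresses almost to nothing while incompressible noise does not, so a file's lossless compression ratio mixes scene redundancy with whatever redundancy demosaicing introduced; the two cannot be separated by that measurement alone.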
Title: Is This The End Game?
Post by: lepingzha on July 17, 2005, 01:19:37 am
For those deep into the topic I recommend the scientific study of Bayer-array-based digital capture vs. film, on both resolution and dynamic range, at ClarkVision. You can start from the equivalent digital resolution chart at:

http://clarkvision.com/imagedetail/film.vs.digital.1.html (http://clarkvision.com/imagedetail/film.vs.digital.1.html)

which makes Michael's projection that the P45 would provide scanned 8x10 film quality look highly improbable.

Leping Zha
Landscape Photographer and Ph.D. in Physics
www.lepingzha.com
Title: Is This The End Game?
Post by: billh on July 18, 2005, 09:45:01 am
“the dithering introduced by film's grain structure significantly reduces the effective usable lp/mm for any subject matter that is not high-contrast black & white, typically by a factor of 3 to 5. In contrast,..”

I think this explains what I was so curious about, which is why a print of a 1Ds2 resolution chart is so much superior to the print from the higher-resolving film. Of course this really makes me wonder what I will see from the significantly higher-MP sensors like the forthcoming Phase One 39MP back.

Can anyone explain the correlation between MP and resolution? Apparently pixel size plays a part too? As well as filters, software, etc.? Does it follow (or not) that a 16MP camera (given the same quality optics) will resolve more than a 10MP camera? I remember being amazed at the quality I was seeing when I bought a 1Ds a few years ago. Michael's announcement about the appearance of 31 and 39MP backs surprised me too. It will be very interesting to see where all this goes over the next few years.
Title: Is This The End Game?
Post by: Ben Rubinstein on July 18, 2005, 07:24:20 pm
Undersharpened, if anything. I compiled a Photoshop action that sharpens the non-edge areas as well, so that there is pore definition on the face, while sharpening the edges more vigorously for a more film-like 'look'. I can't say I'm a huge fan of the unresolved plastic look these sharpening programs give to skin; it seems a shame to waste all that resolution, and it's hardly necessary for a little kid's face. That pic would look soft in print, certainly not oversharp, as I doubt it has had print sharpening applied yet.
Title: Is This The End Game?
Post by: Ray on July 18, 2005, 11:45:57 pm
Quote
The lack of detail in some white checks would, to me, indicate a bit of overexposure.  
Jonathan! Overexposing an image! Do you realize what you are saying? Is this even conceivable  :D .
Title: Is This The End Game?
Post by: Jonathan Wienke on July 20, 2005, 01:25:41 am
That's fairly easy. There's only marginally more visually useful image information in a Foveon image (exactly how much more is debatable, but it's safe to say that it's somewhere between 120% and 200% of what's contained in a Bayer file), but it's buried in 300% of the data of an equivalent Bayer RAW file.

A minor quibble: Saying that "each of the transformation you apply does of course reduce the actual informational content of the image" is not correct when working in 16-bit mode. When you open a 12-bit Bayer RAW file and expand it to 16 bits per channel (actually 15 data bits and a sign bit) RGB mode, you are expanding the data, but not the amount of real information. You've expanded approximately 8 bits of real image information per pixel (remember that Bayer RAWs can be losslessly compressed to about 2/3 of their original size) into 45 bits per pixel (15 bits x 3 channels) of RGB data. So unless you're doing enough editing to accumulate 7 bits worth of rounding errors per color channel (which would only ever happen if you're adding fairly healthy doses of film grain or other noise to the image) the only thing actually "lost" during editing is the non-informational data that was created when initially converting the RAW data to RGB format.

Think of a 15-inch cylinder with 8 inches of oil in it. That's the honest-to-goodness genuine image information. Now add 7 inches of water so that the cylinder is full to the brim. That's equivalent to converting the RAW file to RGB. Now drain off 3-4 inches of water from the bottom of the cylinder. You've just done enough editing to create enough of a "toothcomb" effect in the histogram that only every 8th or 16th level is populated. (That's pretty tough to do in a normal image editing workflow; even working in 100% 8-bit mode, you rarely get gaps more than 4-8 levels wide unless you're doing really extreme level or curve adjustments.) You still have 7 inches of oil, it's just sitting on less water (non-informational data) than it was before. Does that make sense? Or have I bored everyone to sleep already?
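For the skeptical, the padding claim can be checked numerically. A small sketch (synthetic random values standing in for sensor data, not a real RAW): shifting 12-bit samples up into a 16-bit container adds data but no information, so a lossless compressor squeezes both versions to essentially the same size.

```python
import random
import zlib

random.seed(1)
# Stand-in for 12-bit sensor readings.
samples = [random.getrandbits(12) for _ in range(20000)]

# 12-bit values stored in 16-bit words (high nibble zero)...
packed12 = b"".join(v.to_bytes(2, "big") for v in samples)
# ...versus the same values "expanded" to span the full 16-bit range.
shifted16 = b"".join((v << 4).to_bytes(2, "big") for v in samples)

size12 = len(zlib.compress(packed12, 9))
size16 = len(zlib.compress(shifted16, 9))
print(size12, size16)  # nearly identical: the shift added data, not information
```

Both streams carry exactly 12 bits of entropy per sample, and the compressor finds that out regardless of where in the 16-bit word those bits sit.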
Title: Is This The End Game?
Post by: Jonathan Wienke on July 21, 2005, 05:47:47 pm
That's why I did the file size comparison before doing the sharpening. I agree that the Bayer image probably has some demosaicing artifacts and other things that are difficult or impossible for the compression algorithm to distinguish from real image information. (I'd even go so far as to say that the inability to distinguish such artifacts from true image information is probably an attribute of a well-written RAW converter.) The reason the sharpened files are larger is certainly that the sharpening process "created" bits that, while not real image information, cannot be distinguished from real image information by the compression algorithm, and therefore increase file size. I don't know that there is any way to measure the amount of "true" information in an image file; it would certainly require a much better definition of "image quality" than is currently available, so that differences between my test crops could be meaningfully quantified beyond general observations regarding appearance and JPEG2000 file size. I'm guessing there is an information theory doctoral thesis on the subject begging to be written, but I don't have the advanced math background and other skills necessary to devise such a thing.
Title: Is This The End Game?
Post by: Jonathan Wienke on July 21, 2005, 01:38:20 am
Quote
This is pixel peeping to the Nth degree. That's fine with me. Just bear in mind that the final results on the print will probably ignore most of the subtleties discussed, and if they don't, our eyes will.
Actually, all this theory has a practical application; try the 8-bit/16-bit JPEG level tweaking exercise I outlined previously as a demonstration. This is the theory underlying the practical reasons for editing as much as possible in 16-bit mode instead of 8-bit mode; an (admittedly long-winded) explanation why 16-bit editing is better than 8-bit editing.
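Jonathan's 8-bit vs 16-bit point can be demonstrated numerically. The sketch below is a crude stand-in for a strong levels adjustment and its inverse (not Photoshop's actual math): the intermediate result is quantized either to 8 bits or to 16 bits, and we count how many of the 256 output levels survive the round trip.

```python
# A contrast stretch followed by its inverse, with the intermediate image
# quantized to `levels` discrete steps -- an illustration of the histogram
# "comb" effect, not any real editor's algorithm.
def roundtrip(levels, scale):
    populated = set()
    for v in range(256):                                   # 8-bit source values
        stretched = round(v * scale * (levels - 1) / 255)  # stretch, then quantize
        restored = round(stretched / scale * 255 / (levels - 1))  # inverse stretch
        populated.add(min(restored, 255))
    return len(populated)

populated_8 = roundtrip(256, 0.25)     # intermediate held in 8 bits
populated_16 = roundtrip(65536, 0.25)  # intermediate held in 16 bits

print(populated_8, populated_16)  # -> 65 256
```

In 8-bit mode three quarters of the levels are gone for good (the comb in the histogram); in 16-bit mode every original level survives because the rounding errors are absorbed by the low-order bits.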
Title: Is This The End Game?
Post by: drh681 on July 17, 2005, 05:31:11 pm
Quote
Forget anything else, why doesn't someone come through with a decent solution to expanded DR in the highlights? As MR has pointed out, it's not necessarily about the pixel count any more, even at the high end.
Fuji has; see the S3 Pro.
It is not a "big chip", but it is definitely designed to address the "highlights issue". I suspect the idea will not go unnoticed.
Title: Is This The End Game?
Post by: Ray on July 18, 2005, 12:31:12 pm
Quote
So a 100% MTF target will be recorded better by film (because the edges are either black or white), but with anything photographed in the real world digital's advantage in this respect immediately becomes clear.
I understand your point, Michael, and I certainly have no strong desire to go back to film. Haven't shot any film in years. But high-contrast targets do exist in the real world: car number plates, menus in restaurant windows, strands of wispy grass glinting in the sunlight, all manner of documents that don't easily fit onto a flatbed scanner, etc.

I merely make the point that if you wanted to demonstrate the superior resolving power of 35mm film compared with the 1Ds, you could probably do so by choosing your film and subject matter carefully. As always, it's the best tool for the job that's important and there's no doubt that a 1Ds is a better tool for most photographic jobs than any type of 35mm film.

It's a pity that Kodak have discontinued Technical Pan. I wouldn't be surprised if this film could compare quite favourably with the 1Ds on low-contrast subject matter.
Title: Is This The End Game?
Post by: Jack Flesher on July 18, 2005, 08:50:37 pm
Quote
This is a myth perpetrated by the Foveon crowd. A Bayer array does NOT lower resolution a factor 3x, EXCEPT if you are imaging in primary blue or primary red.

~SNIP~

I have found that typically, in white light, the combination of Bayer array and anti-aliasing filter in a Canon DSLR lowers resolution to roughly 85% of what the pixel pitch should be capable of. I have obtained that percentage from my own chart testing and it seems to agree with DPReview measurements.
Amen -- and a brilliant marketing campaign by Foveon, to be sure. However, isn't it interesting that their real-world results have yet to match the hype?  ;)

FTR, my own experiences echo the 85%, but only with the best raw converters.
Title: Is This The End Game?
Post by: Bobtrips on July 18, 2005, 11:23:33 pm
Quote
Quote
So one would need to apply a bit of blurring to the image and decrease the contrast to make it more film like?
I don't think one would need to apply blurring. Jonathan has already stated that he applied mid-tone sharpening. Maybe less mid-tone sharpening would help. As regards contrast, I see an unnatural lack of detail in some of the highlights of the white grid pattern on the girl's dress. It's like a dress that has its own illumination. But as I said, I accept the fact that Jonathan has just exaggerated an effect to make a point.
The lack of detail in some white checks would, to me, indicate a bit of overexposure.  The detail is held in the red/white checks.

I'm still looking for an explanation of "FULL of interpolation artifacts and gross oversharpening".  Seems like someone with a Ph.D. in physics could describe what they see in fairly precise terms.

(I am willing to guess that at this level of enlargement one is beginning to see a bit of moire/pixelization - I'm still trying to sort that out - along the edges.  Just as when one enlarges film too much and starts to see grain patterns.)
Title: Is This The End Game?
Post by: Mark D Segal on July 19, 2005, 11:11:24 am
OK Bernard, points taken - but don't they have a Geisha District in Nagoya?   (or is Luminous Landscape more correct fun?)
Title: Is This The End Game?
Post by: BernardLanguillier on July 20, 2005, 03:14:14 am
Hi Jonathan,

Regarding Foveon, where did you get these 120 to 200% values from, if I may ask? Aren't those figures experimental data that were affected by the noise resulting from the Foveon implementation of the multi-layer sensor idea? I still don't see any theoretical justification for them.

That said, interesting example; I believe I understand your point.

On the other hand, most image manipulations affect the whole range of densities, not just the empty bits that were added when converting the RAW file into a 16-bit space.

Your example would probably be closer to reality if the water and oil were mixed into a suspension. Removing a fixed amount of liquid from the top would then affect less oil than if no water had been added, but it would still affect some.

I agree with you that the result will most probably have zero visual impact, and that the image will in the end probably look better, as I wrote above, but basic entropy theory can be applied to image manipulation: if two things have become the same at a given moment in time, the information that existed in the form of their initial difference (like potential energy) can never be recovered once it is lost.
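Bernard's irreversibility argument in miniature (illustrative Python, not any real tone curve): once a lossy transform maps two distinct pixel values to the same level, no inverse transform can tell them apart again.

```python
# A lossy tone adjustment: integer halving collapses neighboring levels.
def darken(v):
    return v // 2

# An attempted "undo" must pick a single value for each darkened level.
def brighten(v):
    return v * 2

a, b = 100, 101
assert darken(a) == darken(b) == 50  # two inputs, one output: difference gone

print(brighten(darken(a)), brighten(darken(b)))  # -> 100 100; b is unrecoverable
```

The 1-level difference between a and b is exactly the "potential energy" in Bernard's analogy: once the two values coincide, the information that distinguished them no longer exists anywhere in the file.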

Anyway, since this purely theoretical discussion has no impact on actual photography, we could probably agree that our positions are close enough?

Regards,
Bernard
Title: Is This The End Game?
Post by: Jonathan Wienke on July 21, 2005, 01:21:40 am
Quote
Just a confirmation. I assume that you call "real image information" the most significant 8 bits based on the assumption that both display and print are 8 bits devices?
No, I'm basing that on the fact that 12-bit RAW files can be losslessly compressed to about 66% of their original size, therefore there is only approximately 8 bits per pixel of non-redundant image information in a typical Bayer-sensor RAW file. This varies somewhat depending on subject matter and ISO, but that's a whole 'nother discussion.
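Jonathan's estimate can be made concrete. The compressed size of a file gives an upper bound on its information content (a better compressor could always squeeze further), and his 66% figure is just that bound applied to 12-bit data. A sketch with synthetic data (hypothetical values, not measurements from any real camera file):

```python
import zlib

def effective_bits_per_pixel(raw_bytes, n_pixels):
    # Upper bound on non-redundant information per pixel, estimated from
    # the losslessly compressed size.
    return len(zlib.compress(raw_bytes, 9)) * 8 / n_pixels

# Jonathan's arithmetic: 12 bits compressing losslessly to ~66% of original size.
print(12 * 0.66)  # -> 7.92, i.e. roughly 8 effective bits per pixel

# Sanity check on synthetic data: a very smooth gradient, stored in 16-bit
# words, carries far less information than its container size suggests.
smooth = b"".join((i // 40).to_bytes(2, "big") for i in range(10000))
print(effective_bits_per_pixel(smooth, 10000))  # far below 16
```

The caveat is the one Jonathan gives himself: the bound moves with subject matter and ISO, so "about 8 bits" is a typical figure, not a constant.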

Quote
Besides, one question. When a 12-bit RAW image is converted into a 16-bit TIFF, my understanding is that a mapping is performed so that the max value in 12 bits (11111111 1111) becomes the max value in 16 bits (11111111 11111111). One could think that this would leave 4 bits of unused values throughout the range, but my understanding is that this is mostly not the case, since:

- demosaicing is basically an averaging process whose output does benefit from the additional set of values available in 16 bits compared to the 12-bit input,
- the gamma application,
- ...

-> the result of the RAW conversion is probably, most of the time, a fully populated 16-bit file, not just a file whose 12 bits are populated with zeros added.

Do you agree with this?

Yes, so far we are in agreement.

Quote
Although I agree with you that the least-significant bits will be affected first, those do contain useful image information if my understanding of the 12 -> 16 bit mapping is correct. I agree that the impact of these least-significant bits is by definition very small, but IMHO actual image information will be lost even when working in a 16-bit mapping of an image generated by a 12-bit sensor.

That's where you're getting off track. While it is true that the demosaic/interpolation process can generate discrete values in the lowest-order bits (the lowest 4 bits are not always 0000 or 1111 or whatever), that does not mean that any actual image information is going into those bits. What is going into those bits is guesswork from the demosaic/interpolation algorithm and not actual image data, unless you're doing something really weird like converting with the exposure set to -4 stops so that the entire image is crammed into the left side of the histogram. Because of that, the data in those bits is no more "real" than if you simply padded all the binary values with 0000 or 1111 to make the 12-bit to 16-bit transition. They're just calculated to be more visually pleasing than 0000 or 1111. As such, they are just as expendable with regard to edit-induced entropic losses as 0000 or 1111 would be. The real image information is still safely tucked away in the high-order bits, but now with an additional 4 bits of mathematical guesswork and fakery to absorb the entropic destruction of typical image editing.
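A tiny sketch of Jonathan's point (hypothetical values, one-dimensional averaging standing in for a real demosaic algorithm): interpolating between two measured 12-bit neighbors fills the low-order bits of the 16-bit result, but those bits are fully determined by the neighbors, so nothing new was measured there.

```python
# Two genuine 12-bit sensor readings from neighboring photosites.
left, right = 2049, 2052

# A demosaic-style guess for the missing sample between them: 2050.5.
interpolated = (left + right) / 2

# Scaled into a 16-bit container, the guess populates the low nibble...
as_16bit = round(interpolated * 16)
print(f"{as_16bit:016b}")  # -> 1000000000101000: low nibble is 1000, not 0000
```

The low bits are non-trivial, but they contain zero bits of new measurement: given the neighbors, they are 100% predictable, which is exactly why Jonathan calls them expendable.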
Title: Is This The End Game?
Post by: BernardLanguillier on July 21, 2005, 09:32:18 pm
Jonathan,

Yep, I think that we nearly agree on everything.

The only part of which I am not 100% convinced yet (I am not saying that you are wrong, just that you haven't convinced me yet) is where you seem to say that the first two edits of a 16-bit RAW-converted file (each resulting, for the sake of discussion, in the loss of 2 bits) will only damage artificially created data that was not present as information in the 12-bit RAW file in the first place.

I am convinced that editing the 16-bit file will have little practical impact on the image, but I still don't see clearly why you can claim that no actual image information is stored in those 4 least-significant bits that will be destroyed first.

I can take the math if need be (or at least I could a few years back...).

Don't worry if you don't have the time for this, this really has become bit peeping now...

Cheers,
Bernard