Luminous Landscape Forum

Equipment & Techniques => Medium Format / Film / Digital Backs – and Large Sensor Photography => Topic started by: rainer_v on December 19, 2006, 11:46:55 pm

Title: larger sensors
Post by: rainer_v on December 19, 2006, 11:46:55 pm
although it seems to be only a question of time (the sensor generation after next?) before sensor sizes could increase to 6x6cm, wouldn't it be more logical and maybe even more useful to enlarge future sensors to the real 4.5x6 format of 41x56mm instead of 36x48mm?
the area would be about 1.35x bigger, and all systems and cameras would still remain usable.
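For reference, a quick back-of-the-envelope check of the areas being compared (a minimal sketch; the exact 645-to-36x48 ratio works out closer to 1.33):

Code:
# Rough check of the sensor areas discussed above (dimensions in mm).
def area_mm2(w, h):
    return w * h

current = area_mm2(36, 48)     # today's largest "medium format" sensors
full_645 = area_mm2(41, 56)    # the real 4.5x6 capture area proposed above
square_6x6 = area_mm2(56, 56)  # the 6x6cm film frame

print(f"645 vs 36x48 area ratio: {full_645 / current:.2f}")    # ~1.33
print(f"6x6 vs 36x48 area ratio: {square_6x6 / current:.2f}")  # ~1.81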
Title: larger sensors
Post by: rethmeier on December 20, 2006, 12:35:33 am
That's where the Hy6 will be unique!
It will be able to accept a large square sensor!
Cheers,
Willem.
Title: larger sensors
Post by: BJL on December 20, 2006, 04:20:26 pm
Quote
That's where the Hy6 will be unique!
It will be able to accept a large square sensor!
Cheers,
Willem.
Compatibility with an imagined future product that will almost certainly never exist (i.e. 56x56mm sensors, or even anything bigger than 37x49mm) seems a poor reason to put up with imposing a heavy 42% or more (area) crop on all lenses, not to mention using a mirror and viewfinder that are far bigger and heavier than necessary in order to accommodate the obsolescent 56x56mm film frame. Ironically, even the first film back for the Hy6 will be 645, not 6x6.

Will some people never recognize that new digital camera bodies, lenses, viewfinder systems and such are adapting to the new format sizes that make most sense with electronic sensors, rather than sensor makers striving to improve compatibility with old lenses and bodies? Kodak and Dalsa have apparently made it fairly clear that their sensor sizes for "medium format" cameras have topped out, at 37x49mm and 36x48mm respectively.

There is no reason to think that future chip fabrication equipment will be "upsized" to more easily or economically handle super-large chips, meaning anything bigger than about APS sensors, already larger than most micro-processors and such. On the contrary, the market for chip fabrication equipment is dominated by products other than DSLR sensors, in which the dominant trend is to smaller chip sizes, for the sake of faster operation, lower power consumption and lower cost.
DSLR sensors tend to use "trailing edge" fabrication technology, with larger feature sizes and larger chip sizes than most other applications, but the fabrication equipment does not last for ever, and so eventually fabrication will have to move to equipment designed for the generally smaller sizes of newer chips, potentially making super-large sensors even more difficult to produce.
Title: larger sensors
Post by: eronald on December 20, 2006, 04:28:01 pm
As someone trained in semiconductor design, albeit a long time ago, I strongly disagree with the contents of the quoted post. Historically, wafer sizes and chip sizes have steadily got bigger. I expect sensors to get bigger; the only question is when.

Or maybe sensors will only get bigger in China, Europe and Japan, with those in the US getting smaller.

By the way, I sign with my real name ...

Edmund Ronald, Ph.D



Quote
Compatibility with an imagined future product that will almost certainly never exist (i.e. 56x56mm sensors, or even anything bigger than 37x49mm) seems a poor reason to put up with imposing a heavy 42% or more (area) crop on all lenses, not to mention using a mirror and viewfinder that are far bigger and heavier than necessary in order to accommodate the obsolescent 56x56mm film frame. Ironically, even the first film back for the Hy6 will be 645, not 6x6.

Will some people never recognize that new digital camera bodies, lenses, viewfinder systems and such are adapting to the new format sizes that make most sense with electronic sensors, rather than sensor makers striving to improve compatibility with old lenses and bodies? Kodak and Dalsa have apparently made it fairly clear that their sensor sizes for "medium format" cameras have topped out, at 37x49mm and 36x48mm respectively.

There is no reason to think that future chip fabrication equipment will be "upsized" to more easily or economically handle super-large chips, meaning anything bigger than about APS sensors, already larger than most micro-processors and such. On the contrary, the market for chip fabrication equipment is dominated by products other than DSLR sensors, in which the dominant trend is to smaller chip sizes, for the sake of faster operation, lower power consumption and lower cost.
DSLR sensors tend to use "trailing edge" fabrication technology, with larger feature sizes and larger chip sizes than most other applications, but the fabrication equipment does not last for ever, and so eventually fabrication will have to move to equipment designed for the generally smaller sizes of newer chips, potentially making super-large sensors even more difficult to produce.
Title: larger sensors
Post by: howiesmith on December 20, 2006, 04:37:31 pm
Quote
Compatibility with an imagined future product that will almost certainly never exist (i.e. 56x56mm sensors, or even anything bigger than 37x49mm) seems a poor reason to put up with imposing a heavy 42% or more (area) crop on all lenses, not to mention using a mirror and viewfinder that are far bigger and heavier than necessary in order to accommodate the obsolescent 56x56mm film frame. Ironically, even the first film back for the Hy6 will be 645, not 6x6.

Will some people never recognize that new digital camera bodies, lenses, viewfinder systems and such are adapting to the new format sizes that make most sense with electronic sensors, rather than sensor makers striving to improve compatibility with old lenses and bodies? Kodak and Dalsa have apparently made it fairly clear that their sensor sizes for "medium format" cameras have topped out, at 37x49mm and 36x48mm respectively.

There is no reason to think that future chip fabrication equipment will be "upsized" to more easily or economically handle super-large chips, meaning anything bigger than about APS sensors, already larger than most micro-processors and such. On the contrary, the market for chip fabrication equipment is dominated by products other than DSLR sensors, in which the dominant trend is to smaller chip sizes, for the sake of faster operation, lower power consumption and lower cost.
DSLR sensors tend to use "trailing edge" fabrication technology, with larger feature sizes and larger chip sizes than most other applications, but the fabrication equipment does not last for ever, and so eventually fabrication will have to move to equipment designed for the generally smaller sizes of newer chips, potentially making super-large sensors even more difficult to produce.

Speculating on what will be is risky.  A mere 20 years ago, no one had ever heard of a digital camera.  Who knows what the future will bring.  Bigger sensors?  Maybe even something newer than digital.

Around a century ago, the head of the US Patent Office thought the department should be abolished because everything had been invented.  In the '50s, the head of IBM thought the world could support 1 or 2 big computers.

A real problem with adapting to existing formats is "How can APS be called 35mm?"  Camera crop factors.  36x48mm is full frame what?  Certainly not "medium format" film, which seems to be the illusion they expect consumers to buy.

Currently, the size of sensors is limited by the number of sensors that can be fit economically onto a single wafer.  Odd and/or large sizes may create a lot of waste.
Title: larger sensors
Post by: pss on December 20, 2006, 04:57:39 pm
the more sensors are sold, the more money will be spent on developing newer, better and probably also larger sensors....
a look at canon says it all...if canon stays with "35mm" they will have to lower their prices....they pretty much cannot cram more pixels onto the existing chip, maybe 22mpix, but that's it, and even that will have to come with a price tag.....would i pay 20000 for a 22mpix 12bit (or even 16bit) camera system with a small finder and lenses already pushed to the limit with 16mpix sensors?
with prices for DMF coming down and a P21 and P30 available at pretty much competing prices (for the whole system) and much better file quality?
i think that the DMF market is working out some kinks right now, but someone will survive, there will always be a market there....and there are already much larger chips available, for military and such...so it is only a matter of time when it will come into the photo pro marketplace...
when is the logical update to the P45+? the P55++? kodak will reach a limit just like canon with how many pixels fit...and if you reach that limit...you make the sensor bigger...there is room to grow for the next 5-10 years...and there are a lot of RZs out there...and don't forget that a lot of superhigh end editorial, advertising and art photographers still shoot film BECAUSE they want the size and they feel that there is room for improvement in quality as well.....there always is....
by the way: the aptus and emotion lines are numbered not for their mpix, but for the diagonal of the sensor....A65, A75,??? and they both use dalsa chips and they both started this at the same time....maybe dalsa is closer to the 85 than we think?
one thing is for sure: if anyone comes out with a larger sensor within the next 2 years, hasselblad really screwed themselves by making the finder and lenses smaller to accommodate the smaller chips....
Title: larger sensors
Post by: Steve Kerman on December 20, 2006, 11:08:25 pm
Quote
There is no reason to think that future chip fabrication equipment will be "upsized" to more easily or economically handle super-large chips, meaning anything bigger than about APS sensors, already larger than most micro-processors and such. On the contrary, the market for chip fabrication equipment is dominated by products other than DSLR sensors, in which the dominant trend is to smaller chip sizes, for the sake of faster operation, lower power consumption and lower cost.
DSLR sensors tend to use "trailing edge" fabrication technology, with larger feature sizes and larger chip sizes than most other applications, but the fabrication equipment does not last for ever, and so eventually fabrication will have to move to equipment designed for the generally smaller sizes of newer chips, potentially making super-large sensors even more difficult to produce.
Huh?  Current medium format sensors are being fabricated on 150mm wafers.  That's really old technology.  Even 200mm wafers are rather dated.  Current technology is using 300mm wafers.

The trend has not been to smaller chip sizes.  It has been to denser chips.  But the biggest devices have been getting bigger for many years.  An 8086, as I recall, had around 30,000 transistors on it; the latest generation of parts have 100s of millions of transistors.  They do this by making them both denser and bigger.

It is certainly plausible that Dalsa and/or Kodak could produce larger sensors as they get hand-me-down 200mm and 300mm processing equipment.
Title: larger sensors
Post by: joern_kiel on December 21, 2006, 02:22:35 am
In 1994 I was presenting the new KODAK DCS 460 camera for Kodak at Photokina. A long time ago. At that time I started digital photography with a Nikon-based KODAK DCS 200 with a filter wheel for three-shot capture.

So I was in contact with some technicians at Kodak and I asked them about larger sensors, i.e. 4x5 inch. They told me that they had already produced 3 very large sensors for the U.S. Army for use inside spy satellites. But they would be so expensive that no photographer could afford them in the next 20 years. I don't know if that really was true but I believe that they exist today.

In 2014 I will ask Kodak again for the price of the chip ;-)

jørn
Title: larger sensors
Post by: eronald on December 21, 2006, 04:04:31 am
Quote
Huh?  Current medium format sensors are being fabricated on 150mm wafers.  That's really old technology.  Even 200mm wafers are rather dated.  Current technology is using 300mm wafers.

The trend has not been to smaller chip sizes.  It has been to denser chips.  But the biggest devices have been getting bigger for many years.  An 8086, as I recall, had around 30,000 transistors on it; the latest generation of parts have 100s of millions of transistors.  They do this by making them both denser and bigger.

It is certainly plausible that Dalsa and/or Kodak could produce larger sensors as they get hand-me-down 200mm and 300mm processing equipment.

I think the CCD technology is basically getting older, but I would expect a changeover to CMOS in medium format around the next generation. What this will do to image quality is anybody's guess.

Edmund
Title: larger sensors
Post by: BJL on December 21, 2006, 03:27:10 pm
Quote
Current medium format sensors are being fabricated on 150mm wafers.  That's really old technology.  Even 200mm wafers are rather dated.  Current technology is using 300mm wafers.

The trend has not been to smaller chip sizes.  It has been to denser chips.

Quote
As someone trained in semiconductor design, albeit a long time ago, I strongly disagree with the contents of the quoted post. Historically, wafer sizes and chip sizes have steadily got bigger.

Firstly, wafer size has nothing to do with the chip size trend that I was talking about. Clearly it is easy to fit a far larger sensor than current ones onto a wafer, but a dominant limitation on size and cost is the reticle size of the steppers (or the newer fab. options, scanners or stepper-scanners): the maximum size of a chip that can be made in a normal single-exposure process. A good hint is the sizes of the chips that are being made and envisioned, which surely influences the capabilities of future stepper/scanner designs.

What is the recent trend in chip sizes? Look for example at the size of the recent dominant microprocessors from Intel and AMD: the trend is to both denser and smaller with the shift to smaller feature sizes such as the 65nm process. Even with the move to two cores on a die, die sizes are smaller now than with single-core processors of a few years ago. The new Intel Core Duo processors have a die area of 111-143 sqmm, about half the size of the previous year's dual-core Pentium D 900 at 280 sqmm, and distinctly smaller than the 183-230 sqmm of recent AMD Opteron dual-core processors using the older 90 nm process. (For a sense of scale, 4/3" sensors have die areas of over 243 sqmm, and DX format sensors are over 430 sqmm.)

The biggest ICs that I know of are in the Intel Itanium series, and this year's dual-core Itanium 2 processors using the slightly older 90 nm process are 27.72 mm x 21.5 mm = 596 sqmm (about 1D sensor sized). This is down from about 750 sqmm for the first dual-core Itanium processors from 2003-2004, which used a 130 nm process. Itanium processors are likely to move to the smaller 65 nm process, so even combined with a predicted move to quad core, they will likely stay smaller than the dual-core Itaniums of a few years ago, staying at about "1D" size. Canon also predicts a move to a 45 nm process in three years' time, further reducing sizes.

The trend seems to be increasing component count without increasing die size, and often decreasing die size.
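Putting the largest die quoted above next to the sensor sizes in question makes the gap plain (a small sketch using only figures mentioned in this thread):

Code:
# Die and sensor areas in mm^2, from the figures quoted in this post.
itanium2_dual = 27.72 * 21.5   # ~596 mm^2, the largest IC mentioned
kodak_37x49   = 37 * 49        # current largest "medium format" sensor
full_645      = 42.5 * 56      # a hypothetical full 645 sensor

print(f"dual-core Itanium 2: {itanium2_dual:.0f} mm^2")
print(f"37x49mm MF sensor:   {kodak_37x49} mm^2 "
      f"(~{kodak_37x49 / itanium2_dual:.1f}x the biggest IC quoted)")
print(f"full 645 sensor:     {full_645:.0f} mm^2 "
      f"(~{full_645 / itanium2_dual:.1f}x)")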


To Edmund:  as a professional scientist, I judge claims more by evidence than by the speaker's educational credentials (so I have not tried to support my claims by reference to my Ph. D. and record of publications in physics journals), so could you provide evidence of a recent trend to larger reticle sizes in fab. equipment suitable for making sensors, which is what would be required to make larger sensors more likely in the future?
Title: larger sensors
Post by: eronald on December 21, 2006, 06:18:27 pm
BJL

Let's reason together.

A chip has to fit on a wafer, so at a given technology there is always an absolute limit to chip size. However, wafer size is increasing with time, so wafer size is not a permanent limitation on chip size.

Then a chip needs to be printed - lithographed if you prefer. If this is done by mask exposure rather than direct write on wafer, there is an issue of reticle size; however, I believe chips like the Dalsa and Kodak designs are currently stitched by multiple exposures anyway, which is evidence that reticle size is not a permanent limitation on chip size.

This leaves the issue of local defects on the chip. Chip defects per infinitesimal surface area at a given technology are proportional to that infinitesimal surface area: p(defect in ds) = mu*ds, where mu is constant. Integrated, to reflect that *one* defect means a bad chip, this gives something like a Poisson yield statistic, where you have a negative yield exponential going from 1 to 0 (asymptotically) as the chip surface increases.  Tiny chips are -almost- always good, huge chips -almost- always defective.  Luckily, there is empirical evidence that the constant mu has historically gone down with production epochs, ensuring that larger and larger chips can be made.
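To make the shape of that curve concrete, here is a minimal sketch of the simple Poisson yield model being described; the defect density is an illustrative made-up value, not a Dalsa or Kodak figure:

Code:
import math

def poisson_yield(area_cm2, defects_per_cm2):
    """Fraction of defect-free dies under the simple Poisson model above:
    yield = exp(-mu * A)."""
    return math.exp(-defects_per_cm2 * area_cm2)

mu = 0.25  # illustrative defect density per cm^2, not a real fab figure
for name, w_mm, h_mm in [("APS-C", 15.8, 23.6), ("24x36", 24, 36),
                         ("36x48", 36, 48), ("645", 42.5, 56)]:
    a = (w_mm / 10) * (h_mm / 10)  # die area in cm^2
    print(f"{name:6s} {a:5.1f} cm^2  yield ~{poisson_yield(a, mu):.1%}")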

I suggest you now go and read up the above points via google. You will eventually hit upon the  Dalsa white papers, and various tutorials which document yield statistics with all the relevant maths, modelling and empirical data.

If  after doing the above research, you still feel a need to pour scorn on me you know where you can find me. Luckily, PhDs are awarded for life, and so I expect to be allowed to keep mine even after the abuse of wine, women and cameras has burnt out my last neurons.

Edmund

Quote
To Edmund:  as a professional scientist, I judge claims more by evidence than by the speaker's educational credentials (so I have not tried to support my claims by reference to my Ph. D. and record of publications in physics journals), so could you provide evidence of a recent trend to larger reticle sizes in fab. equipment suitable for making sensors, which is what would be required to make larger sensors more likely in the future?
Title: larger sensors
Post by: josayeruk on December 22, 2006, 07:22:10 am
Quote
BJL

Let's reason together.

If  after doing the above research, you still feel a need to pour scorn on me you know where you can find me. Luckily, PhDs are awarded for life, and so I expect to be allowed to keep mine even after the abuse of wine, women and cameras has burnt out my last neurons.

Edmund

PhDs and handbags at fifty paces!!!

I've got a City and Guilds in E6 processing.    

Plus I would imagine that the megapixel count of a 56mm square sensor would be so bonkers that it would make it unusable with today's storage and computers???  Yes?

39 is enough for me thanks very much!

Jo S.x  

(Correct my maths if I am wrong, but 70+ megapixels?)
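A rough check of that estimate (a sketch that simply carries the pixel pitch of a 39-megapixel back on a roughly 37x49mm sensor, the figures mentioned earlier in this thread, over to a 56x56mm chip):

Code:
# Back-of-envelope: keep the pitch of a 39 MP, ~37x49mm sensor and
# scale it up to a hypothetical 56x56mm square sensor.
mp_now = 39e6
w_mm, h_mm = 49.0, 37.0
pitch_mm = (w_mm * h_mm / mp_now) ** 0.5    # ~0.0068 mm, i.e. ~6.8 microns

side_mm = 56.0
pixels = (side_mm / pitch_mm) ** 2          # roughly 68 MP, same ballpark as above
print(f"pitch ~{pitch_mm * 1000:.1f} um, 56x56mm sensor ~{pixels / 1e6:.0f} MP")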
Title: larger sensors
Post by: Kumar on December 22, 2006, 09:14:48 am
Quote
Plus I would imagine that the megapixel count of a 56mm square sensor would be so bonkers that it would make it unusable with today's storage and computers???  Yes?

Is it technically possible to have a larger sensor without increasing the pixel count? Would it impact picture quality? And of course, does it make marketing and financial sense?

Cheers,
Kumar
Title: larger sensors
Post by: josayeruk on December 22, 2006, 11:15:22 am
Quote
Is it technically possible to have a larger sensor without increasing the pixel count? Would it impact picture quality? And of course, does it make marketing and financial sense?

Cheers,
Kumar

Yes, but then single-shot quality would suffer if the pixel sites were enlarged again.

No easy answer!    

Jo S. x
Title: larger sensors
Post by: josayeruk on December 22, 2006, 11:16:34 am
Quote
PhDs are awarded for life, and so I expect to be allowed to keep mine even after the abuse of wine, women and cameras has burnt out my last neurons.


A PhD is for life, not just for Christmas?
Title: larger sensors
Post by: nik on December 22, 2006, 11:48:44 am
Without slagging anyone with a PhD, it's all very good talking about Intel and AMD fabrication and the trend they seem to be taking toward smaller size chips, but they are being designed with very different goals in mind, speed and energy efficiency. Very different to what the design goal of an imaging sensor is. If a larger sensor size will help solve the current limitations that Dalsa and Kodak face in future sensor design, then that's what they will do. No?

Hopefully they will at least reach the real 645 full frame size.

Go easy on flaming me if you must.

-Nik


Quote
BJL

Let's reason together.

A chip has to fit on a wafer, so at a given technology there is always an absolute limit to chip size. However, wafer size is increasing with time, so wafer size is not a permanent limitation on chip size.

Then a chip needs to be printed - lithographed if you prefer. If this is done by mask exposure rather than direct write on wafer, there is an issue of reticle size; however, I believe chips like the Dalsa and Kodak designs are currently stitched by multiple exposures anyway, which is evidence that reticle size is not a permanent limitation on chip size.

This leaves the issue of local defects on the chip. Chip defects per infinitesimal surface area at a given technology are proportional to that infinitesimal surface area: p(defect in ds) = mu*ds, where mu is constant. Integrated, to reflect that *one* defect means a bad chip, this gives something like a Poisson yield statistic, where you have a negative yield exponential going from 1 to 0 (asymptotically) as the chip surface increases.  Tiny chips are -almost- always good, huge chips -almost- always defective.  Luckily, there is empirical evidence that the constant mu has historically gone down with production epochs, ensuring that larger and larger chips can be made.

I suggest you now go and read up the above points via google. You will eventually hit upon the  Dalsa white papers, and various tutorials which document yield statistics with all the relevant maths, modelling and empirical data.

If  after doing the above research, you still feel a need to pour scorn on me you know where you can find me. Luckily, PhDs are awarded for life, and so I expect to be allowed to keep mine even after the abuse of wine, women and cameras has burnt out my last neurons.

Edmund
Title: larger sensors
Post by: josayeruk on December 22, 2006, 04:49:22 pm
Quote
Without slagging anyone with a PhD, it's all very good talking about Intel and AMD fabrication and the trend they seem to be taking toward smaller size chips, but they are being designed with very different goals in mind, speed and energy efficiency. Very different to what the design goal of an imaging sensor is. If a larger sensor size will help solve the current limitations that Dalsa and Kodak face in future sensor design, then that's what they will do. No?

Hopefully they will at least reach the real 645 full frame size.

Go easy on flaming me if you must.

-Nik

No offence intended, and all good points, Nik.

I think what you say about the full 645 frame size is more likely in the future.

Jo S. x
Title: larger sensors
Post by: free1000 on December 23, 2006, 03:29:49 am
Quote
I believe chips like the Dalsa and Kodak designs are currently stitched by multiple exposures anyway, which is evidence that reticle size is not a permanent limitation on chip size.

Yes... which is where I expect the dreaded centre fold comes from, as the stitching process seems to result in non-uniform halves of the chip...

maybe they will figure out how to do this properly in a year or two.
Title: larger sensors
Post by: Marsupilami on December 23, 2006, 06:35:22 am
I am very sceptical that a full frame chip for a Mamiya RZ would make sense. I doubt that these old lens designs are able to cope with digital sensors well, especially when no crop is used.

Christian
Title: larger sensors
Post by: BJL on December 28, 2006, 04:08:14 pm
Quote
BJL

Let's reason together.

A chip has to fit on a wafer, so at a given technology there is always an absolute limit to chip size. However, wafer size is increasing with time, so wafer size is not a permanent limitation on chip size.
Of course, but wafers are already so much bigger than any sensors that they are in no significant way a limit to sensor size, so that increasing wafer size will have little effect on the economics of making, say, a 56x42.5 (645) sensor.

Quote
... I believe chips like the Dalsa and Kodak designs are currently stitched by multiple exposures anyway, which is evidence that reticle size is not a permanent limitation on chip size.
I know about the multiple exposures needed for sensors larger than reticle sizes: Canon says that its 24x36mm chips are also made with multiple exposures. The point is that multiple exposures substantially increase sensor costs (or at least Canon says so) through lower yields, and I would expect that each extra exposure increases costs further, so reticle size also creates cost notches at "maximum double exposure size", "maximum triple exposure size" and so on. (My guess is that the smaller 33x44mm "medium format" sensor size is the largest possible with just two exposures of some fab. equipment.)

Quote
This leaves the issue of local defects on the chip ...
I suggest you now go and read up the above points via google.
No need to be condescending: there is nothing in your post that I did not already know. Of course there will be some downward trend in the cost of sensors of any given size due to improving "mu" value, but that is a relatively modest factor in chip cost reduction, compared to the method that is vastly more popular in the digital imaging industry: reducing the size of the photo-sites (and thus of the sensors) needed to achieve a given level of image quality.

P. S. It is perhaps noteworthy that the team of Sony and Nikon has not even bothered to go to sensors at the maximum size possible with a single exposure, which is reportedly something around Canon 1D size (reticle size about 26x33mm according to Canon, so sensor active area a bit smaller.) Nikon is a major maker of IC fab. equipment, so there should not be a problem of access to suitable tools.
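As a very rough illustration of those cost notches, here is a sketch that just counts how many reticle-sized exposure fields it takes to tile each die, using the roughly 26x33mm reticle figure quoted above; real stitched layouts are more constrained than this simple count suggests:

Code:
import math

def exposures_needed(die_w, die_h, reticle_w=26.0, reticle_h=33.0):
    """Minimum number of stitched exposures under a crude tiling model,
    trying both reticle orientations (dimensions in mm)."""
    def tiles(rw, rh):
        return math.ceil(die_w / rw) * math.ceil(die_h / rh)
    return min(tiles(reticle_w, reticle_h), tiles(reticle_h, reticle_w))

for name, w, h in [("24x36", 24, 36), ("33x44", 33, 44),
                   ("36x48", 36, 48), ("42.5x56 (645)", 42.5, 56)]:
    print(f"{name:14s} -> {exposures_needed(w, h)} exposure(s)")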
Title: larger sensors
Post by: eronald on December 28, 2006, 07:17:35 pm
BJL

I wasn't "condescending", I was seriously suggesting, and do so again, that you take a look at the yield curves - your note re. Sony underlines yet again that this is an issue of economics, not of technical fesability.  An apparently small increase in surface area can dramatically impact the yield and thus the economics of the process. The mu is built into the yield equations, but these are far from linear. Process shrink does not necessarily entail advanatges AFAIK because we are counting photons and thus well capacity is important ...

This discussion is going nowhere. Summary: I say sensors will get even bigger. You say no they won't. We sound like 5-year-olds.

Edmund


Quote
No need to be condescending: there is nothing in your post that I did not already know. Of course there will be some downward trend in the cost of sensors of any given size due to improving "mu" value, but that is a relatively modest factor in chip cost reduction, compared to the method that is vastly more popular in the digital imaging industry: reducing the size of the photo-sites (and thus of the sensors) needed to achieve a given level of image quality.

P. S. It is perhaps noteworthy that the team of Sony and Nikon has not even bothered to go to sensors at the maximum size possible with a single exposure, which is reportedly something around Canon 1D size (reticle size about 26x33mm according to Canon, so sensor active area a bit smaller.) Nikon is a major maker of IC fab. equipment, so there should not be a problem of access to suitable tools.
Title: larger sensors
Post by: Gigi on December 28, 2006, 09:10:14 pm
Quote
BJL


This discussion is going nowhere. Summary: I say sensors will get even bigger. You say no they won't. We sound like 5-year-olds.

Edmund

But cute ones at that
Title: larger sensors
Post by: Ray on December 28, 2006, 10:29:57 pm
Quote
This discussion is going nowhere. Summary: I say sensors will get even bigger. You say no they won't. We sound like 5-year-olds.

But 5-year-olds with rather high IQs.
Title: larger sensors
Post by: BJL on December 29, 2006, 05:14:26 pm
Quote
this is an issue of economics, not of technical feasibility.
I completely agree that it is matter of economics. (I am fairly sure that Sony could be making 24x36mm sensors and Nikon using them, if they saw an adequately profitable market for them.)

We disagree only on the tougher question of where the cost/benefit trade-offs will take us. My reading of recent trends is that the main efforts now and in the future are on improving performance at the various sensor sizes that have established themselves, up to but probably not beyond Kodak's current 37x49mm.

I can see some small chance of sensors going all the way to "645" (42.5x56mm), but almost none of going beyond that, since the medium format industry had mostly abandoned formats larger than 645 already in the late days of the film era.
Title: larger sensors
Post by: Ray on December 29, 2006, 08:17:40 pm
Quote
I can see some small chance of sensors going all the way to "645" (42.5x56mm), but almost none of going beyond that, since the medium format industry had mostly abandoned formats larger than 645 already in the late days of the film era.

There does seem likely to be an optimal middle ground, where a compromise between ultimate quality and the practicalities of convenience and cost will stabilise.

On the other hand, this balance can be tipped either way by new possibilities resulting from technological innovation. It wasn't long ago, was it, that most digital backs on MF cameras had to be tethered to a computer. The fact that this is no longer a requirement must have given a huge boost to the popularity of digital backs for the cropped MF format.

But MF still has DoF disadvantages, as witnessed in some of Bernard Languillier's recent images from Japan using his ZD. As you continue to increase the size of the sensor, the DoF limitations become greater, shift movements become necessary, which don't always have the desired effect, and the whole system becomes cumbersome and heavy and of course very expensive.

For such disadvantages, one wants a substantial increase in image quality (at least I do).

BJL seems to have settled on the Olympus 4/3rds format as being ideal for his purposes, producing sufficient quality at the print sizes he's interested in. I'll probably settle on full frame 35mm as being adequate for my purposes. Canon have managed to reduce the gap between pixels on their new 400D, so that the photodiodes are not smaller than those on the 30D, resulting apparently in no increase in noise compared with the 30D. A full frame 35mm sensor with the same pixel density as the 400D would be something like a 26 or 27mp sensor. I think that should be sufficient for me, but I'm not a professional photographer striving to get an edge in the market place, or produce the sharpest billboards ever   .
Title: larger sensors
Post by: Ray on December 29, 2006, 10:02:41 pm
Well, I just nipped over to the dpreview site to check my facts on the Canon 400D and of course didn't find any direct comparison between that camera and the 30D; different price category.

However, checking the two separate reviews (of the 30D and 400D), it seems that dynamic range for both cameras is the same, ie. 8.4EV up to and including ISO 800.

At ISO 1600, the 30D seems to have a slight advantage, dropping to 8EV as opposed to the 400D's 7.8EV.

I suppose John Sheehy would disagree with these figures.  
Title: larger sensors
Post by: John Camp on December 29, 2006, 10:28:44 pm
I don't have a PhD in physics, but I do have a degree in history, and I know that larger, "higher resolution," cameras were once ditched by almost everybody (in the 1940s and 50s) for 35mm machines because the smaller ones were faster, cheaper, lighter and "good enough."

After it was established, 35 held on because it had a huge base of both users and equipment, to the point that some digital shooters now demand "full frame" as though the 35mm frame size were anointed by God as the only right one; that's the power of an installed base.

The point being that much larger sensors may indeed be possible, but who will take the risk of creating a new camera system around them -- especially when, in terms of quality, you could get in any high-end magazine with nothing more than a Canon, Nikon or Leica? Big sensors (after a point) mean bigger lenses, bigger files, more weight, more processing power...for what? So your photograph will look better in People, which is printed on toilet paper? They'd be like that huge (24-inch perhaps?) Polaroid camera that used to tour around the US; or maybe there were two of them. Sure, you could do it, but why?

The above note, by the way, applies to "life time" developments. I don't doubt that a hundred and fifty years from now, things will be different -- but I wouldn't be surprised if the technology 50 years from now is more or less a refinement of what we are already using...say the difference between 40s film and 90s film.

JC
Title: larger sensors
Post by: Ray on December 30, 2006, 05:55:49 pm
Quote
After it was established, 35 held on because it had a huge base of both users and equipment, to the point that some digital shooters now demand "full frame" as though the 35mm frame size were anointed by God as the only right one; that's the power of an installed base.

John,
I understand what you are getting at, but I would suggest that the reasons why the demand for the 35mm frame size has held on are similar to the reasons why the format was created in the first instance. The film already had a huge base of users and equipment in the movie industry.

It was a brilliant idea at the time but had one serious drawback for both professionals and amateur enthusiasts, the image quality was not up to scratch for prints larger (or much larger) than 8x10". Nevertheless, because millions of happy shooters seemed quite satisfied with (or at least familiar with) the image quality of 35mm film, it was easy to introduce a cropped digital format that could compete quality-wise on an 8x10(12)" print and even slightly larger.

The lure of full frame 35mm is due to the fact that, with the existing 35mm lens base, everyone can enjoy the convenience, automation and portability they remember from the 35mm film days, but also get that increased image quality that used to be only available with expensive, more cumbersome and less automated MF systems.

Now, I know you can claim that cropped format cameras such as the 12mp D2X produce virtually the same image quality as the 5D, but they are pushing it. Ultimately, a FF 35mm sensor can hold more than twice the number of pixels of a D2X size sensor and over 4x the number of pixels of an Olympus 4/3rds sensor.

To put it another way, when the pixel count race has ended, you'll be able to make a print from FF 35mm that has more than twice the area of a print from a D2X size sensor, whilst keeping the same apparent sharpness from the same viewing distance, assuming further improvement in the quality of 35mm lenses.
Title: larger sensors
Post by: John Sheehy on December 30, 2006, 06:07:29 pm
Quote
Well, I just nipped over to the dpreview site to check my facts on the Canon 400D and of course didn't find any direct comparison between that camera and the 30D; different price category.

However, checking the two separate reviews (of the 30D and 400D), it seems that dynamic range for both cameras is the same, ie. 8.4EV up to and including ISO 800.

At ISO 1600, the 30D seems to have a slight advantage, dropping to 8EV as opposed to the 400D's 7.8EV.

I suppose John Sheehy would disagree with these figures. 

Firstly, dynamic range is not a monolithic concept.  There are different standards for dynamic range.  The one I concern myself with mainly is the one that compares maximum RAW signal to the black frame noise floor.  IOW, how far the 1:1 SNR is below the clipping point.  For the 30D, saturation is 3943 ADU above black, and the black noise is about 4.7 ADU.  For the XTi/400D, the figures are 3800 and 7.25.  3943/4.7 = 839; 3800/7.25 = 524.  839:1 is 9.71 stops, and 524:1 is 9.03 stops; a difference of 0.68 stops.
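The same arithmetic, spelled out (the saturation and black-noise figures in ADU are the ones quoted above):

Code:
import math

def dr_stops(saturation_adu, black_noise_adu):
    """Dynamic range in stops: how far the 1:1 SNR point sits below clipping."""
    return math.log2(saturation_adu / black_noise_adu)

print(f"30D:      {dr_stops(3943, 4.7):.2f} stops")   # ~9.71
print(f"400D/XTi: {dr_stops(3800, 7.25):.2f} stops")  # ~9.03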

What Phil seems to be measuring is how single RAW conversions from the two cameras compare, but you don't necessarily get the full DR with that approach; the very approach tends to equalize the results.  The standard conversion of the 400D is to set whitepoint 1/2 stop lower than the 30D, so with defaults, the 400D will get more highlights clipped by a converter.
Title: larger sensors
Post by: bjanes on December 30, 2006, 10:35:30 pm
Quote
Firstly, dynamic range is not a monolithic concept.  There are different standards for dynamic range.  The one I concern myself with mainly is the one that compares maximum RAW signal to the black frame noise floor.  IOW, how far the 1:1 SNR is below the clipping point.  For the 30D, saturation is 3943 ADU above black, and the black noise is about 4.7 ADU.  For the XTi/400D, the figures are 3800 and 7.25.  3943/4.7 = 839; 3800/7.25 = 524.  839:1 is 9.71 stops, and 524:1 is 9.03 stops; a difference of 0.68 stops.

What Phil seems to be measuring is how single RAW conversions from the two cameras compare, but you don't necessarily get the full DR with that approach; the very approach tends to equalize the results.  The standard conversion of the 400D is to set whitepoint 1/2 stop lower than the 30D, so with defaults, the 400D will get more highlights clipped by a converter.

Electronics engineers define dynamic range as the ratio of full well capacity to read noise, both expressed in electrons. Since the sensor response is linear, and the ADU number is proportional to electrons, this corresponds to what John is measuring.

Phil determines DR by photographing a Stouffer (or similar) step wedge, apparently exposed for a mid-gray tone. Apparently, he uses classical mid-tone based exposure, rather than exposing to the right. The quote below is taken from his D40 review:

"Shadow range is more complicated, in our test we stop measuring values below middle gray as soon as the luminance value drops below our defined 'black point' (about 2% luminance) or the signal-to-noise ratio drops below a predefined value (where shadow detail would be swamped by noise), whichever comes first."

He uses either the in-camera JPEG conversion or ACR conversion from a raw file rather than examining the raw values directly. In this situation, the black point would be affected by the tone curve in the first instance, and by whatever S/N cutoff is chosen for the point where shadow detail is swamped by noise in the second case. The 2% luminance value does not make sense to me since 100%/2% = 50:1 = 5.64 stops. I think Phil needs to refine his testing criteria. Perhaps John can comment.

Bill
Title: larger sensors
Post by: John Sheehy on December 30, 2006, 10:45:26 pm
Quote
I think Phil needs to refine his testing criteria. Perhaps John can comment.

I agree.

The "whichever comes first" part is the one that bothers me most, and of course, his methodology relies too heavily on a RAW converter to maintain highlights.  The 400D has 1/2 stop more than the older Canons, relative to average grey metering.
Title: larger sensors
Post by: bjanes on December 30, 2006, 11:00:09 pm
Quote
Now, I know you can claim that cropped format cameras such as the 12mp D2X produce virtually the same image quality as the 5D, but they are pushing it. Ultimately, a FF 35mm sensor can hold more than twice the number of pixels of a D2X size sensor and over 4x the number of pixels of an Olympus 4/3rds sensor.

To put it another way, when the pixel count race has ended, you'll be able to make a print from FF 35mm that has more than twice the area of a print from a D2X size sensor, whilst keeping the same apparent sharpness from the same viewing distance, assuming further improvement in the quality of 35mm lenses.

12 MP is 12 MP in terms of resolution whether one is using a full 35 mm frame or a cropped frame in the D2X. As Michael explained earlier, you would really have to double the pixel count to 24 MP to see a real difference in resolution. At base ISO the D2X does well, but at high ISO it has a distinct disadvantage to the 5D in terms of noise and dynamic range.

If you had a 24 MP camera, you would probably need to use a tripod to capture the full resolution of the camera, although image-stabilized lenses can extend the limits of hand holding. The necessity to use a tripod negates much of the appeal of 35 mm photography, at least to many amateurs.

Bill
Title: larger sensors
Post by: John Camp on December 31, 2006, 01:51:06 am
Quote
John,
I understand what you are getting at, but I would suggest that the reasons why the demand for the 35mm frame size has held on are similar to the reasons why the format was created in the first instance.

Ray, I don't disagree. I was making the minor point that many people regard the legacy 35mm size as somehow perfect, when, in fact, I find it somewhat awkwardly shaped, as do other folks. I would be happier if Nikon, say, got to 22mp with something more on the lines of a 4x5 or 6x7 aspect ratio, rather than 3x2, if that can be done using legacy gear (lenses). That is, I don't think there is anything holy about the **particular** size and shape of the 35mm frame.

My overall point was to argue that we are now getting to the place where practicality will begin to rule; that is, 98 percent of the professional/serious amateur market will be satisfied with the next generation of DSLRs (~22 mp) essentially forever, and will see no real benefit in upgrading until a camera no longer functions. We will be in a market like the film market in 1995. Another 1.9 percent will want somewhat larger MF chips, but those, too, are reaching the point where the cost/benefit equation will slow development of ever-larger chips.

I don't believe there is any technical reason that we couldn't have a 4x5 inch chip...and it might be bought by several people per year. How many companies really want a market comprising several people?  

JC
Title: larger sensors
Post by: ErikKaffehr on December 31, 2006, 04:19:14 am
Hi!

It's not only about chips and sizes. It's also about lenses. For denser chips we need better lenses. There must be a limit set by lenses; go beyond that and there will be diminishing returns from increasing pixel counts.

As things stand now we have essentially around four formats.

1) 4/3
2) APS-C
3) 135 Full frame
4) 645 reduced frame

2-4 use lenses designed for film cameras. As far as I understand, 135 full frame (as practised by Canon) has some serious issues with wide angle lenses. This problem is aggravated by the preference for zooms in 135 photography.

On medium format it is a bit easier. Sensors are still smaller than nominal format, so the outermost part of the image circle is not used. Also MF photographers don't seem to demand the extensive zooming capability normally accessible to the 135 folks.

There are many other things affecting image quality than noise and pixel size.

a) Depth of field
b) Precision of focusing
c) Camera shake
d) Vibration introduced by mirror
e) Vibration introduced by shutter
f) Precision of camera assembly, including heat expansion
g) Diffraction effects
h) Quality of optics

We need to control all of the above to achieve optimal image quality.

It may be that we need two kinds of camera systems:

For action: Big pixels, so we can use high ISO

For tripod based photography: Dense pixels, we can use low ISO so noise is not that much an issue

Best regards

Erik

 
Quote
Ray, I don't disagree. I was making the minor point that many people regard the legacy 35mm size as somehow perfect, when, in fact, I find it somewhat awkwardly shaped, as do other folks. I would be happier if Nikon, say, got to 22mp with something more on the lines of a 4x5 or 6x7 aspect ratio, rather than 3x2, if that can be done using legacy gear (lenses). That is, I don't think there is anything holy about the **particular** size and shape of the 35mm frame.

My overall point was to argue that we are now getting to the place where practicality will begin to rule; that is, 98 percent of the professional/serious amateur market will be satisfied with the next generation of DSLRs (~22 mp) essentially forever, and will see no real benefit in upgrading until a camera no longer functions. We will be in a market like the film market in 1995. Another 1.9 percent will want somewhat larger MF chips, but those, too, are reaching the point where the cost/benefit equation will slow development of ever-larger chips.

I don't believe there is any technical reason that we couldn't have a 4x5 inch chip...and it might be bought by several people per year. How many companies really want a market comprising several people?   

JC
Title: larger sensors
Post by: eronald on December 31, 2006, 06:09:53 am
For those interested, here is a  Translation Key to the jargon (http://www.astropix.com/HTML/I_ASTROP/HOW.HTM) in some of the above postings.

Now to mix it some more:
I once shot a D200 to 1Ds2 comparison, photographing a guy in a Paris café across the room, holding the cameras with elbows braced on a table. In this low light the shake and noise and focus problems cumulatively *wrecked* the D200 shots while the 1Ds2 was still running strong. I mean wrecked, not just deteriorated. Outdoors, little difference would have been visible.

Moral of the story, I would like the following added to the sophisticated discussion of ADUs and noise:

Camera shake *in prints* depends on the enlargement factor.
Noise *in prints* depends on the enlargement factor.

and then ...

Focus Aliasing depends on the resolution of the autofocus *actuators*, as much as on the AF sensor system. A cropped format with a legacy AF lens implies larger atomic focus steps.

Edmund
Title: larger sensors
Post by: david o on December 31, 2006, 06:41:49 am
Quote
Ray, I don't disagree. I was making the minor point that many people regard the legacy 35mm size as somehow perfect, when, in fact, I find it somewhat awkwardly shaped, as do other folks. I would be happier if Nikon, say, got to 22mp with something more on the lines of a 4x5 or 6x7 aspect ratio, rather than 3x2, if that can be done using legacy gear (lenses). That is, I don't think there is anything holy about the **particular** size and shape of the 35mm frame.

My overall point was to argue that we are now getting to the place where practicality will begin to rule; that is, 98 percent of the professional/serious amateur market will be satisfied with the next generation of DSLRs (~22 mp) essentially forever, and will see no real benefit in upgrading until a camera no longer functions. We will be in a market like the film market in 1995. Another 1.9 percent will want somewhat larger MF chips, but those, too, are reaching the point where the cost/benefit equation will slow development of ever-larger chips.

I don't believe there is any technical reason that we couldn't have a 4x5 inch chip...and it might be bought by several people per year. How many companies really want a market comprising several people?   

JC

100% agree on that, as I never really liked the 35mm proportions.
6x7 to me offers the best proportions; don't get me wrong, it could be smaller, but the same ratio.
Title: larger sensors
Post by: sjprg on December 31, 2006, 08:50:00 am
I came across an article about a 100MP Dalsa sensor for either the military or NASA a while back, so I suspect that larger sensors are coming when the economics can be justified.
Here is the Dalsa page. I'm not sure if the article is in here or not. Maybe I saw it on a NASA site.

http://www.dalsa.com/news/news.asp?itemID=165
Title: larger sensors
Post by: bjanes on December 31, 2006, 12:07:37 pm
Quote
For those interested, here is a  Translation Key to the jargon (http://www.astropix.com/HTML/I_ASTROP/HOW.HTM) in some of the above postings.

Edmund

Edmund,

The link to Jerry Lodriguss's paper is very useful, but he does perpetuate one misconception that has confused me in the past and may confuse some forum members as well. He states that since human vision is logarithmic, the digital file should also be log so as to comply with the nature of human vision.

Actually, the gamma correction is applied to the image for coding efficiency. With the resulting roughly logarithmic transformation, a given proportional change in intensity (say a factor of 1.01, as implied by the Weber-Fechner law) results in roughly the same increment in the recorded number at any level. In a linear scale, there is a much greater proportional increment between pixel values going from 1 to 2 than from 254 to 255. The gamma function is a power equation: y = x^(1/2.2). In a linear file, the brightest f/stop contains half the data, which is not efficient for coding the shadows.

When the image is displayed on the monitor, an inverse function is applied so that the resulting image brightness on the monitor corresponds more or less 1:1 to the values in the scene (y = x^2.2). In order to display the greater dynamic range of most scenes on a monitor with less DR, a tone curve is generally applied to the data in order to compress the shadows and highlights.

This is explained in Charles Poynton's Gamma FAQ: http://www.poynton.com/notes/colour_and_gamma/GammaFAQ.html#desktop

According to the Rec 709 transfer function there is a linear segment towards the shadows, so that the actual power is 2.5.
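For anyone who wants to see the curves, here is a minimal sketch of a plain gamma 2.2 encode/decode alongside the Rec. 709 transfer function with its linear segment near black (standard published constants; purely illustrative, not the code any particular camera or converter uses):

Code:
def gamma_encode(x, gamma=2.2):
    """Power-law encoding: y = x^(1/gamma), with x scene-linear in [0, 1]."""
    return x ** (1.0 / gamma)

def gamma_decode(y, gamma=2.2):
    """Inverse applied at display: x = y^gamma."""
    return y ** gamma

def rec709_oetf(x):
    """Rec. 709 transfer function: 4.5x below 0.018, 0.45-power law above."""
    return 4.5 * x if x < 0.018 else 1.099 * x ** 0.45 - 0.099

for x in (0.001, 0.01, 0.18, 0.5, 1.0):
    print(f"linear {x:5.3f} -> gamma 2.2 {gamma_encode(x):.3f}, "
          f"Rec. 709 {rec709_oetf(x):.3f}")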
Title: larger sensors
Post by: Ray on December 31, 2006, 07:38:18 pm
Quote
12 MP is 12 MP in terms of resolution whether one is using a full 35 mm frame or a cropped frame in the D2X. As Michael explained earlier, you would really have to double the pixel count to 24 MP to see a real difference in resolution.

But that's exactly what I've implied, Bill. At the end of the day, when the pixel count race is over and the minimum size photodiode for useful quality has been reached (and I believe Michael R has said this is around 5 microns), a FF 35mm sensor will hold more than double the number of pixels of a D2X size sensor and over 4x the number of pixels of an Olympus 4/3rds sensor.
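Rough pixel counts at a 5 micron pitch for the three formats being compared (a sketch; the sensor dimensions used here are approximate):

Code:
# Approximate pixel counts at a 5 micron pitch.
pitch_mm = 0.005
formats = {
    "FF 35mm":        (36.0, 24.0),
    "D2X (Nikon DX)": (23.7, 15.7),
    "Olympus 4/3":    (17.3, 13.0),
}
for name, (w, h) in formats.items():
    mp = (w / pitch_mm) * (h / pitch_mm) / 1e6
    print(f"{name:15s} ~{mp:4.1f} MP")   # ~34.6, ~14.9 and ~9.0 MP respectively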
Title: larger sensors
Post by: Ray on December 31, 2006, 08:22:34 pm
Quote
Ray, I don't disagree. I was making the minor point that many people regard the legacy 35mm size as somehow perfect, when, in fact, I find it somewhat awkwardly shaped, as do other folks. I would be happier if Nikon, say, got to 22mp with something more on the lines of a 4x5 or 6x7 aspect ratio, rather than 3x2, if that can be done using legacy gear (lenses). That is, I don't think there is anything holy about the **particular** size and shape of the 35mm frame.


John,
This is a complete red herring. Aspect ratios vary to suit the composition. They can vary from square to 6x17cm. If you are talking about aspect ratios that are in general more suitable for, say portraits, then 4/3rds or 6x7 would probably be more appropriate than the 35mm 3:2.

If we are talking about a general purpose aspect ratio for all types of subjects and compositions, then the arguments in favour of 35mm are at least as strong as the arguments in favour of any other aspect ratio.

Consider the effect a square aspect ratio would have using your widest 35mm lens. It simply wouldn't be as wide in either the horizontal or vertical plane.

The only practical solution to the subjective preferences for different aspect ratios is a circular sensor which matches the image circle of the lens as closely as possible without significant peripheral light fall-off. The end users, through use of zoom lenses, could then create any aspect ratio they thought appropriate for the composition, without feeling they were compromising either maximum field of view or maximum image quality.
Title: larger sensors
Post by: John Camp on December 31, 2006, 09:17:43 pm
Quote
If we are talking about a general purpose aspect ratio for all types of subjects and compositions, then the arguments in favour of 35mm are at least as strong as the arguments in favour of any other aspect ratio.

This was what I was talking about. I don't think there is any escape in the near future from legacy lenses built for 35mm film. Given the image circle projected by those lenses, would there be other workable aspect ratios other than 2:3? I personally would prefer a "more square" ratio, like 3:4, or even 4:5, but not square. I'm not sure, however, if that is even possible given, say, Nikon 35mm legacy lenses -- as far as I know, the internal workings may be geared especially for the 35mm film aspect ratio, at least for larger sizes of sensor. I also think that if you surveyed most working pros, and actually gave them a chance to think about it, you'd find that if you could get 22mp in either a 2:3 or 3:4, that most would opt for the latter. But that's just what I think.

JC
Title: larger sensors
Post by: Ray on December 31, 2006, 09:47:47 pm
Quote
I also think that if you surveyed most working pros, and actually gave them a chance to think about it, you'd find that if you could get 22mp in either a 2:3 or 3:4, that most would opt for the latter. But that's just what I think.

That might be true and the evidence to support that is the prevalence of aspect ratios varying from square to 6x8cm in MF film cameras, with 6x9cm being less common. There's something to be said for the fact that a square gives you a greater capture area than any rectangle with the same diagonal.

On the other hand, pros tend to use (and can generally afford) the best tools for the job. If the client wants a high resolution panorama shot, the pro is (was) likely to use a 6x17cm panorama camera. In the absence of such wide aspect ratios in digital cameras, the best option might be to use, for all purposes, the camera with the biggest and highest resolving sensor, and crop to taste.
Title: larger sensors
Post by: bjanes on December 31, 2006, 10:03:16 pm
Quote
But that's exactly what I've implied, Bill. At the end of the day, when the pixel count race is over and the minimum size photodiode for useful quality has been reached (and I believe Michael R has said this is around 5 microns), a FF 35mm sensor will hold more than double the number of pixels of a D2X size sensor and over 4x the number of pixels of an Olympus 4/3rds sensor.
[a href=\"index.php?act=findpost&pid=93075\"][{POST_SNAPBACK}][/a]

Yes, a 24 MP 35 mm full frame sensor could give excellent image quality at base ISO, but my argument was that 24 MP is more than can be made use of in hand held photography and many users would favor better high ISO performance and DR over the extra pixels. From what I understand, the Canon 1D M2 outsells the 1DsM2 by a large margin and most users of this type of camera are not cost constrained.

Bill
Title: larger sensors
Post by: Ray on January 01, 2007, 04:31:15 am
Quote
..but my argument was that 24 MP is more than can be made use of in hand held photography and many users would favor better high ISO performance and DR over the extra pixels. [a href=\"index.php?act=findpost&pid=93089\"][{POST_SNAPBACK}][/a]


I can't see it, Bill. It's certainly true that the bigger the enlargement, the faster the shutter speed required for a sharp print, but how much faster is an interesting question.

Supposing we start off from the 1/FL rule for an 8x12" print from 35mm. Let's say we're rather critical and demand 1/2FL. Let's assume also that from a purely resolution viewpoint, the 12mp 5D is equal to the best that 35mm film can produce.

A 24mp FF 35mm sensor will have 1.4x the resolution of the 5D. Does the rule then become 1/(2FL*1.4), ie. 1/2.8FL? Does that seem reasonable?

Consider a hand-held shot using a 100mm lens on a 24mp 35mm sensor. Without interpolation, at 360ppi, the 72mb file should produce a print approx. 11.5x17.25".

Our new rule gives us a shutter speed of 1/280th sec for good hand-held sharpness.

Let's suppose we are supercritical and demand nothing less than 1/3FL for a sharp 8x12" print and 1/(3FL*1.4) for an 11.5x17.25" print, because we're using the higher resolution of 360 ppi which is close to the limits of human perception. That gives us a shutter speed of 1/400th approx.

Let's further assume that the claimed 4 stop advantage of the latest Canon lenses with IS, such as the 70-200/4 IS, is a load of baloney, but 2 stops is very credible.

Our 1/400th then becomes 1/100th or back to 1/FL which seems to me a very usable shutter speed at ISO 100, and no problem at all at ISO 200-800.

Let's not create objections just for the sake of it   .
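Restating the arithmetic above as a small sketch (Python, purely illustrative; the 1/FL baseline, the 1.4x scaling for a doubled pixel count and the 2-stop IS benefit are working assumptions from the post, not measured values, and the function name is made up for the example):

Code:
import math

def min_handheld_speed(focal_mm, safety_factor, old_mp, new_mp, is_stops=0):
    # Scale the 1/(safety_factor * FL) rule by the linear resolution gain,
    # then relax it by any image-stabilisation benefit (in whole stops).
    linear_gain = math.sqrt(new_mp / old_mp)        # 24MP vs 12MP -> ~1.4x
    denominator = safety_factor * focal_mm * linear_gain
    return denominator / (2 ** is_stops)            # shutter speed is 1/denominator s

print(min_handheld_speed(100, 3, 12, 24))              # ~424 -> roughly 1/400s
print(min_handheld_speed(100, 3, 12, 24, is_stops=2))  # ~106 -> roughly 1/100s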
Title: larger sensors
Post by: John Sheehy on January 01, 2007, 09:07:15 am
Quote
I can't see it, Bill. It's certainly true that the bigger the enlargement, the faster the shutter speed required for a sharp print, but how much faster is an interesting question.[a href=\"index.php?act=findpost&pid=93103\"][{POST_SNAPBACK}][/a]

One thing to consider is that any camera motion blur will be "thinner" with higher MP counts.  Suppose you were shooting a star; a point of light.  The 6MP camera moves by 3 pixels during the exposure; if it were 24MP, it would move 5 or 6 pixels during the exposure.  The 24MP case would result in a streak of affected pixels the same percentage of the image dimensions, possibly slightly shorter, but it would also be thinner, affecting a much smaller percentage of the pixels in the image.  This might make it easier to ignore, psychologically, or make it easier for a deconvolution to correct it (especially if the blur is straight, which is likely in subject blur, as well, such as an unsuccessful pan).
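A toy version of that argument in Python, under John's assumption (which Jonathan disputes below) that the point image stays narrower than a pixel, so the trail is about one pixel thick at any resolution. The function name and the 0.2%-of-frame-width trail are illustrative assumptions only.

Code:
import math

def streak_fraction(megapixels, aspect=(3, 2), streak_frac_of_width=0.002):
    # Frame dimensions in pixels for the given MP count and aspect ratio.
    w = math.sqrt(megapixels * 1e6 * aspect[0] / aspect[1])
    h = w * aspect[1] / aspect[0]
    streak_px = streak_frac_of_width * w      # trail length in pixels
    return streak_px / (w * h)                # trail assumed ~1 pixel thick

for mp in (6, 24):
    print(mp, "MP:", f"{streak_fraction(mp):.1e}", "of all pixels touched")

The trail spans the same fraction of the frame either way, but touches a smaller fraction of the total pixels at 24MP.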
Title: larger sensors
Post by: John Sheehy on January 01, 2007, 09:16:48 am
Quote
Let's not create objections just for the sake of it   .
[a href=\"index.php?act=findpost&pid=93103\"][{POST_SNAPBACK}][/a]

IMO, time will tell that the idea that more and smaller pixels filling the same sensor space causes more image noise will join the Flat Earth in the Mythological Hall of Infamy.  More and smaller pixels only increases the noise of individual pixels, against their neighbors.  When each pixel and its neighbors become less significant to the big picture, their contribution to image noise diminishes.  When the pixels outresolve the lens, there is no need for antialiasing filters, there are no demosaicing artifacts, and you can resample to any smaller size without artifacts.  You can toss your bokeh-killing TCs into the trash, unless you want them as an optical viewfinder aid.
Title: larger sensors
Post by: Jonathan Wienke on January 01, 2007, 10:02:41 am
Quote
Consider the effect a square aspect ratio would have using your widest 35mm lens. It simply wouldn't be as wide in either the horizontal or the vertical plane.

Ray, you're making the unfounded assumption that making a square sensor is done by chopping off the ends of a rectangular sensor,  i.e. making a 24x24mm sensor out of a 24x36mm sensor. If you made a 36x36mm sensor and put it behind the lens, you'd get just as wide of coverage as you would with the 24x36mm sensor, and have the luxury of cropping to either a vertical or horizontal format in post from one RAW, or not cropping at all.
Title: larger sensors
Post by: BernardLanguillier on January 01, 2007, 10:10:36 am
Quote
Ray, you're making the unfounded assumption that making a square sensor is done by chopping off the ends of a rectangular sensor,  i.e. making a 24x24mm sensor out of a 24x36mm sensor. If you made a 36x36mm sensor and put it behind the lens, you'd get just as wide of coverage as you would with the 24x36mm sensor, and have the luxury of cropping to either a vertical or horizontal format in post from one RAW, or not cropping at all.
[a href=\"index.php?act=findpost&pid=93131\"][{POST_SNAPBACK}][/a]

Yes, but a 36x36 mm sensor requires an image circle slightly larger than that of a 24x36 mm, since the diagonal is longer.

Regards,
Bernard
Title: larger sensors
Post by: Jonathan Wienke on January 01, 2007, 10:11:35 am
Quote
One thing to consider is that any camera motion blur will be "thinner" with higher MP counts.  Suppose you were shooting a star; a point of light.  The 6MP camera moves by 3 pixels during the exposure; if it were 24MP, it would move 5 or 6 pixels during the exposure.  The 24MP case would result in a streak of affected pixels the same percentage of the image dimensions, possibly slightly shorter, but it would also be thinner, affecting a much smaller percentage of the pixels in the image.

Whoa, there, buddy, you need your morning coffee or something. If you're keeping composition constant (which you're assuming, given your statement about the blur trail being longer), then the width of the blur trail is going to increase by the exact same degree as the length, which is in direct proportion to the increase in sensor pixels. All you're doing is recording the same blur with more pixels, which tends to limit the effectiveness of putting more pixels on the subject.

Upping the MP count for a given composition makes motion blur and camera shake more noticeable and visually objectionable, not less, at least when viewing at 100% in PS.
Title: larger sensors
Post by: bjanes on January 01, 2007, 10:33:38 am
Quote
Supposing we start off from the 1/FL rule for an 8x12" print from 35mm. Let's say we're rather critical and demand 1/2FL. Let's assume also that from a purely resolution viewpoint, the 12mp 5D is equal to the best that 35mm film can produce.

Let's not create objections just for the sake of it   .
[a href=\"index.php?act=findpost&pid=93103\"][{POST_SNAPBACK}][/a]

According to the Leica expert Erwin Puts (http://www.imx.nl/photosite/leica/technics/faq.html#Anchor-What-47857), your rule of thumb has never been verified. Erwin states that a higher shutter speed is required for best results. And we are talking about an acceptably sharp image, not maximal resolution at 24 MP.

What we really need here are resolution figures measured in the field from your 24 MP camera with hand held shots from several expert photographers. Unfortunately, I do not believe that such data are available. Why does Michael use a tripod with his P45 back?

Bill
Title: larger sensors
Post by: Jonathan Wienke on January 01, 2007, 10:46:24 am
Quote
Yes, but a 36x36 mm sensor requires an image circle slightly larger than that of a 24x36 mm, since the diagonal is longer.

If you shoot with a 36x36mm sensor and crop to 2:3, the net outcome is exactly the same as shooting with a 24x36mm sensor. You're working with an image circle of 43.27mm either way.

If you crop 36x36mm to 4:5, your effective sensor size is 28.8x36mm, and your image circle is 46.10mm. The real-world penalty for the use of the extra 1.415mm of image circle outside the lens' design will vary from lens to lens, but isn't generally going to be catastrophic.

If you want a square composition, simply leave enough room around the edges when composing to allow cropping away corner ugliness. Using the center 30.59mm of the sensor when shooting a square composition will use the same 43.27mm image circle as a 24x36mm sensor.
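The image-circle figures in this post, checked with a couple of lines of Python (the function name is just for the example; the numbers are the same ones quoted above):

Code:
import math

def image_circle_mm(w_mm, h_mm):
    # The image circle a capture area needs is simply its diagonal.
    return math.hypot(w_mm, h_mm)

print(image_circle_mm(36, 24))        # 43.27mm: standard 24x36mm frame
print(image_circle_mm(36, 36))        # 50.91mm: full 36x36mm square
print(image_circle_mm(36, 28.8))      # 46.10mm: 36x36mm cropped to 4:5
print(image_circle_mm(30.59, 30.59))  # 43.26mm: square crop inside the 35mm circle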
Title: larger sensors
Post by: John Sheehy on January 01, 2007, 11:38:25 am
Quote
Whoa, there, buddy, you need your morning coffee or something.

I was about 1/2 way through cup #1 when I posted that.

Quote
If you're keeping composition constant (which you're assuming, given your statement about the blur trail being longer), then the width of the blur trail is going to increase by the exact same degree as the length, which is in direct proportion to the increase in sensor pixels. All you're doing is recording the same blur with more pixels, which tends to limit the effectiveness of putting more pixels on the subject.

3 pictures are worth 3000 words:

Motion blur and megapixels (http://jjd.pbase.com/jps_photo/image/72423525/original)

Quote
Upping the MP count for a given composition makes motion blur and camera shake more noticeable and visually objectionable, not less, at least when viewing at 100% in PS.
[a href=\"index.php?act=findpost&pid=93134\"][{POST_SNAPBACK}][/a]

Of course at 100% the higher MP will look worse in every way.  I don't see the relevance of this to the entire image.  Pixels are only relevant insomuch as they make their proportional contribution to the image.
Title: larger sensors
Post by: BJL on January 01, 2007, 12:41:10 pm
Quote
I once shot a D200 to 1Ds2 comparison, photographing a guy in a Paris café, across the room. Holding the cameras with elbows braced on a table. In this low light the shake and noise and focus problems cumulatively *wrecked* the D200 shots while the 1ds2 was still running strong.
[a href=\"index.php?act=findpost&pid=93007\"][{POST_SNAPBACK}][/a]
Edmund, I have a basic scientific question about your experiment: what lenses, aperture sizes, shutter speeds, focal lengths and ISO speeds did you use? If you used a larger aperture size with the 1Ds2 (as would be the case if you used equal aperture ratio and lenses covering the same FOV), all you are showing is the well known speed advantage of larger aperture sizes in low light situations. There is also of course the issue of comparing camera and sensors using different technologies and of very different costs. As retired(?) engineer and forum participant Howard Smith has often said, comparisons are best done with only one parameter varied, not many.
Title: larger sensors
Post by: Jonathan Wienke on January 01, 2007, 01:53:57 pm
Quote
I was about 1/2 way through cup #1 when I posted that.
3 pictures are worth 3000 words:

Motion blur and megapixels (http://jjd.pbase.com/jps_photo/image/72423525/original)
Of course at 100% the higher MP will look worse in every way.  I don't see the relevance of this to the entire image.  Pixels are only relevant insomuch as they make their proportional contribution to the image.

If you resample everything to the same pixel dimensions, or print unequal MP images to the same print size, the effect of MP on motion blur is irrelevant, as long as the motion blur has a greater negative effect on resolution than pixel count, i.e. motion blur is at least 1 pixel. Your sample images prove my point.

More megapixels don't make motion blur "thinner" or less noticeable; at best, they make no difference.
Title: larger sensors
Post by: bjanes on January 01, 2007, 03:52:22 pm
Quote
Whoa, there, buddy, you need your morning coffee or something. If you're keeping composition constant (which you're assuming, given your statement about the blur trail being longer), then the width of the blur trail is going to increase by the exact same degree as the length, which is in direct proportion to the increase in sensor pixels. All you're doing is recording the same blur with more pixels, which tends to limit the effectiveness of putting more pixels on the subject.

Upping the MP count for a given composition makes motion blur and camera shake more noticeable and visually objectionable, not less, at least when viewing at 100% in PS.
[a href=\"index.php?act=findpost&pid=93134\"][{POST_SNAPBACK}][/a]


After a brief Google search, I was able to locate a scientific paper relating camera motion and effective spatial resolution: Stanford University (http://scien.stanford.edu/jfsite/Papers/ImageCapture/ICIS06_CameraShake.pdf)

After studying the article, please refer to Figure 5 for a computer generated simulation of the 50% MTF in cycles/mm for sensors with the same die size but different pixel sizes. The analysis is for monochromatic light and a diffraction limited lens at f/2.8 and a SNR of 30dB.

In low light situations, the small pixel camera does worse, since it requires a longer exposure and has more camera shake than a camera with larger pixels. Under outdoor conditions, the MTF for the 7.4 um pixels approaches  100 cy/mm asymptotically, and the 3.5 um pixel size approaches 120 cy/mm. The resolution does not double as expected because of camera shake. The best that 1.7 um pixels can do is 200 cy/mm.

The paper also confirms that the rule of thumb of the reciprocal of focal length in millimeters as a guide for the exposure required for a sharp picture is a very rough approximation at best. There have been very few studies of hand held camera shake.

If anyone has additional data, please post.

Bill
Title: larger sensors
Post by: eronald on January 01, 2007, 04:22:48 pm
John,

Or as a focus aid -
My 1Ds2 with the Canon 85 is more focus-limited than resolution limited.

Edmund

Quote
You can toss your bokeh-killing TCs into the trash, unless you want them as an optical viewfinder aid.
[a href=\"index.php?act=findpost&pid=93121\"][{POST_SNAPBACK}][/a]
Title: larger sensors
Post by: eronald on January 01, 2007, 04:42:08 pm
BJL ( is that your name ?)

As for it being a scientific experiment, I never claimed that - I was speaking strictly as a photographer

However, simple geometry does indicate that when cropping a given sensor you will worsen the effects of camera shake, and equally the effects of noise as you thereby increase print magnification. The ratios are left for the mathematically inclined to work out - readers of this thread seem to be quite numerate.

Simple reasoning also indicates that neither factor will show up when tests are conducted in good light, where neither sensor noise nor shake is a factor ...

But your remark is interesting, maybe someone should concoct a "camera shaker" for more realistic but still scientific tests.

Edmund

Quote
Edmund, I have a basic scientific question about your experiment: what lenses, aperture sizes, shutter speeds, focal lengths and ISO speeds did you use? If you used a larger aperture size with the 1Ds2 (as would be the case if you used equal aperture ratio and lenses covering the same FOV), all you are showing is the well known speed advantage of larger aperture sizes in low light situations. There is also of course the issue of comparing camera and sensors using different technologies and of very different costs. As retired(?) engineer and forum participant Howard Smith has often said, comparisons are best done with only one parameter varied, not many.
[a href=\"index.php?act=findpost&pid=93157\"][{POST_SNAPBACK}][/a]
Title: larger sensors
Post by: Morgan_Moore on January 01, 2007, 05:38:54 pm
I can't see how sensor size affects camera shake if you have made a suitable change in focal length to get the same FOV.

Of course you can see the shake better with a higher res chip.

If you think about shooting from a train, it will still move the same % of the total image in the exposure time irrespective of chip size.

I think your experiment was duff.

PS: an earlier poster mentioned a large chip and the resultant lack of DOF as a disadvantage - only a disadvantage for those looking for a big DOF!

SMM
Title: larger sensors
Post by: Ray on January 01, 2007, 05:42:13 pm
Quote
If you shoot with a 36x36mm sensor and crop to 2:3, the net outcome is exactly the same as shooting with a 24x36mm sensor. You're working with an image circle of 43.27mm either way.
[a href=\"index.php?act=findpost&pid=93138\"][{POST_SNAPBACK}][/a]

Jonathan,
I'm beginning to wonder if it was you who missed out on your morning coffee when writing that, or perhaps the hangover was too painful   . You seem to have forgotten the most well-known axiom of Pythagoras: the square on the hypotenuse equals the sum of the squares of the other two sides.

The diagonal of a 36x36mm sensor is almost 51mm. The consequences of putting such a sensor in a 35mm body would be more vignetting and degradation of the image in the corners with existing 35mm lenses, not to mention the problems of mirror clearance.

Happy New Year   .
Title: larger sensors
Post by: Ray on January 01, 2007, 06:19:54 pm
Quote
According to the Leica expert Erwin Puts (http://www.imx.nl/photosite/leica/technics/faq.html#Anchor-What-47857) your rule of thumb has never been verified. Erwin states that a higher shutter speed is required for best results. And we are talking about an acceptably sharp image, not maximal resolution at 24 MP.
[a href=\"index.php?act=findpost&pid=93137\"][{POST_SNAPBACK}][/a]

Bill,
Of course a rule-of-thumb is a rule-of-thumb. I never thought the 1/FL rule to be more than a statistical average for an acceptably (reasonably) sharp image with the 35mm format. If you are a Parkinson's sufferer, then the rule doesn't apply. If you're shooting from a moving elephant's back or a rocking boat, then it also doesn't apply. If you are shooting with a cell phone or P&S camera, the rule doesn't apply and, if you are a novice who is completely oblivious to the requirement to hold a camera steady for a sharp photo, then the rule also doesn't apply.

You should have noticed that I tripled the shutter speed in my example, making a new rule of 1/3FL. Are you seriously suggesting that a shutter speed of 1/300th sec would not be adequate for a sharp hand held shot with a 100mm (non-IS) lens attached to a 5D?

If you accept that it would be adequate, then do you agree that using a higher resolving sensor will require an increase in shutter speed in proportion to the increase in resolution which, in the case of a doubling of pixel count on the same size sensor, amounts to a 1.4x increase?
Title: larger sensors
Post by: John Sheehy on January 01, 2007, 07:07:09 pm
Quote
If you resample everything to the same pixel dimensions, or print unequal MP images to the same print size, the effect of MP on motion blur is irrelevant, as long as the motion blur has a greater negative effect on resolution than pixel count, i.e. motion blur is at least 1 pixel. Your sample images prove my point.

I can't grasp what you are trying to say here.

My sample images prove that lower-MP counts in the same format size *exaggerate* motion blur, making it worse than it is in the analog world, which the higher-MP sensor approaches.

Quote
More megapixels don't make motion blur "thinner" or less noticeable; at best, they make no difference.
[a href=\"index.php?act=findpost&pid=93169\"][{POST_SNAPBACK}][/a]

How can you say that?  I have shown that more MPs makes a beneficial difference.

I could do it again with binning of real camera motion blur, if you like, if you don't think that bicubic should be used for the simulation.
Title: larger sensors
Post by: Ray on January 01, 2007, 07:29:34 pm
Quote
Yes, but a 36x36 mm sensor requires an image circle slightly larger than that of a 24x36 mm, since the diagonal is longer.

Regards,
Bernard
[a href=\"index.php?act=findpost&pid=93133\"][{POST_SNAPBACK}][/a]

Bernard,
You're absolutely right. When recently in Bangkok, on my way to Cambodia, I thought I would get a Sigma 12-24mm lens for the sake of that marginally extra width (compared with my Sigma 15-30mm). At those focal lengths, 3mm makes a substantial difference which is often needed when shooting massive structures in confined spaces, such as the temple ruins around Angkor Wat.

I tested a couple of copies of the lens in the store; was pleased with the noticeably wider angle of view, compared with my 15-30mm, but disappointed with the much worse performance at the edges and corners than my 15-30mm exhibited. I didn't buy the lens for this reason, although performance around the centre was pretty close to that of the 15-30mm.

With a 36mm square sensor, there would be many more lenses which would start revealing the poor performance similar to the Sigma 12-24 (at 12mm) in the corners.

To keep the status quo in the corners with existing lenses, a square sensor in a 35mm body would need a diagonal of around 30.6mm (still requiring a design change to accommodate a larger mirror) and the maximum size image when cropped to 3:2 proportions would be 30.6x20.4mm.

I think I'm right in saying my 15mm lens would effectively become a 17.6mm lens.

Correction: I wrote above that it "...would need a diagonal of around 30.6mm". I think I also need another cup of coffee. I mean, of course, that the sides of the square would be 30.6mm, which still presents a problem for mirror clearance.

As I said before, the arguments in favour of the 35mm 3:2 aspect ratio are at least as compelling as the arguments in favour of a closer-to-square format.
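Ray's 15mm-to-17.6mm figure above, comparing horizontal coverage only, as a one-line check (the function name is made up for the example; the 30.6mm-wide square crop against the 36mm-wide full frame is the assumption from his post):

Code:
def effective_focal_length(fl_mm, full_dim_mm, crop_dim_mm):
    # Focal length that gives the same horizontal field of view across the
    # full 36mm width as fl_mm gives across the cropped width.
    return fl_mm * full_dim_mm / crop_dim_mm

print(effective_focal_length(15, 36, 30.6))   # ~17.6mm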
Title: larger sensors
Post by: John Sheehy on January 01, 2007, 08:36:14 pm
Quote
If you're keeping composition constant (which you're assuming, given your statement about the blur trail being longer), then the width of the blur trail is going to increase by the exact same degree as the length, which is in direct proportion to the increase in sensor pixels.
[a href=\"index.php?act=findpost&pid=93134\"][{POST_SNAPBACK}][/a]
Not at all.  That is a false assumption.  Each point of light has no height or width on the analog focal plane other than what is caused by diffraction, but the Airy disk's analog size is not affected by the resolution of the sensor.  No matter what the resolution of the sensor is, the trail of a blurred point of light is as long as the motion, and as wide as the Airy disk (at sufficiently fast speeds, no disk actually forms, and the line is simply randomly modulated from side to side; it wiggles).  The analog Airy disk touches photosites as it passes over them, and the bigger these pixels are, the wider the exposed pixel trail will be, relative to frame size.
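For scale, the usual approximation for the Airy disk diameter is 2.44 times wavelength times f-number; the sketch below assumes green light at 550nm and an invented function name, purely for illustration.

Code:
def airy_disk_diameter_um(f_number, wavelength_nm=550):
    # First-minimum diameter of the Airy pattern: d = 2.44 * lambda * N.
    return 2.44 * (wavelength_nm / 1000.0) * f_number

for n in (2.8, 8, 16):
    print(f"f/{n}: {airy_disk_diameter_um(n):.1f} um")

At f/2.8 the disk is well under a 5-8 micron pixel; at f/16 it spans several of them.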
Title: larger sensors
Post by: Ray on January 02, 2007, 12:47:04 am
Quote
The paper also confirms that the rule of thumb of the reciprocal of focal length in millimeters as a guide for the exposure required for a sharp picture is a very rough approximation at best. There have been very few studies of hand held camera shake.

If anyone has additional data, please post.
[a href=\"index.php?act=findpost&pid=93184\"][{POST_SNAPBACK}][/a]

Bill,
I think most of us who have taken a few thousand shots over the years get a 'feel' for the shutter speed necessary for a hand-held tack sharp image. I agree that 1/35mmFL does not pass muster. But there's no reason to suppose that a 24mp 35mm sensor will have higher noise, on a pixel for pixel basis, than the current 20D or 30D. ISO 800 should therefore be very usable with insignificant loss of resolution due to image degradation or in-camera noise reduction, and on the same size enlargements, noise should be actually less than that from the 30D.

On the basis of the sunny f16 rule, in good lighting 1/100th sec exposure at ISO 100 (and f16) is often sufficient for full exposure to the right. That's a 400th at f8. Factor in the benefits of IS and clean images at high ISO, there should be little problem in finding a sufficiently fast shutter speed for tack sharp hand-held images from a 24mp sensor. Look on the positive side, old chap   .
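Ray's sunny-f/16 numbers as a sketch (the rule itself is only a rough starting point, and the function name is invented for the example):

Code:
def sunny_16_shutter_s(iso, f_number):
    # Sunny-16 starting point: 1/ISO seconds at f/16, scaled by the square
    # of the aperture ratio for other f-numbers.
    return (1.0 / iso) * (f_number / 16.0) ** 2

print(round(1 / sunny_16_shutter_s(100, 16)))   # 100  -> 1/100s at f/16, ISO 100
print(round(1 / sunny_16_shutter_s(100, 8)))    # 400  -> 1/400s at f/8, ISO 100
print(round(1 / sunny_16_shutter_s(800, 8)))    # 3200 -> 1/3200s at f/8, ISO 800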
Title: larger sensors
Post by: Jonathan Wienke on January 02, 2007, 01:02:43 am
Quote
Not at all.  That is a false assumption.  Each point of light has no height or width on the analog focal plane other than what is caused by diffraction, but the Airy disk's analog size is not affected by the resolution of the sensor.  No matter what the resolution of the sensor is, the trail of a blurred point of light is as long as the motion, and as wide as the Airy disk (at sufficiently fast speeds, no disk actually forms, and the line is simply randomly modulated from side to side; it wiggles).  The analog Airy disk touches photosites as it passes over them, and the bigger these pixels are, the wider the exposed pixel trail will be, relative to frame size.

And the bigger the photosites are, the fewer of them will be impacted by the Airy disk. If diffraction is significant enough that it is covering multiple pixels, putting a higher-resolution sensor under the lens means the Airy disk will cover more pixels than before. And if the Airy disk is smaller than a single pixel, then diffraction isn't really a relevant consideration, as the main factor limiting resolution is the pixel count of the sensor, not the lens. What exactly is your point?
Title: larger sensors
Post by: Jonathan Wienke on January 02, 2007, 01:14:00 am
Quote
The diagonal of a 36x36mm sensor is almost 51mm. The consequences of putting such a sensor in a 35mm body would be more vignetting and degradation of the image in the corners with existing 35mm lenses, not to mention the problems of mirror clearance.

And as I stated previously, cropping an image from a 36x36mm sensor to 2:3 aspect ratio is no different than an uncropped image from a 24x36mm sensor. I'm well aware that a 36x36mm sensor would go outside the designed image circle of 35mm-format lenses, but at least one would have the choice of how much of the image circle to use.

My original point was simply that going to a square sensor would not necessarily mean a reduction in FOV for a given focal length compared to a 2:3 sensor. Thank you for proving my point for me. I'm well aware of the Pythagorean Theorem, note that I did mention the square sensor dimensions (30.59mm) that would use the same image circle as a 24x36mm sensor.
Title: larger sensors
Post by: Jonathan Wienke on January 02, 2007, 02:19:12 am
Quote
My sample images prove that lower-MP counts in the same format size *exaggerate* motion blur, making it worse than it is in the analog world, which the higher-MP sensor approaches.

They do no such thing. You are looking at image blur that is the result of two factors, and incorrectly calling the combined result "motion blur". You have two resolution-reducing factors at work in your test shots: pixel-size blur, and motion blur. When you combine them together, you get an effect similar to combining lens and film MTF to get a system MTF. If you change film, system MTF will be changed, but lens MTF is not changed. In exactly the same way, changing sensor pixel size affects pixel-size blur, but that has no effect on the motion blur itself, only the combined mix of motion blur and pixel-size blur. By increasing sensor resolution, you reduce the blur caused by pixel size, which significantly reduces overall image blur when motion blur is less than or approximately equal to pixel-size blur. But when motion blur becomes significantly greater than pixel-size blur, changing pixel-size blur has a negligible effect on system blur. Your example images are roughly comparable to plotting system MTF where the lens MTF is 50% and the film MTF is varied from 25% to 75%. Varying film MTF is having a significant effect on system MTF, but your lens MTF (the motion blur) is not changing at all. And if you changed lens MTF to 5% (the equivalent of increasing motion blur), changing film MTF has a negligible effect on system MTF.

If you extend your comparisons, your assertion falls apart completely. At .125X, the pixel-size blur would be so great that the motion blur would be completely insignificant. And the difference between 4x and 8x and beyond would be negligible, because the motion blur is now the primary resolution limiter, and throwing more pixels into the mix won't change that at all. So to take advantage of the extra pixels, one must reduce motion blur to the same degree that pixel size is decreased.

In order for a digital image to appear sharp, pixel-size blur must be the primary limitation on resolution. When pixel count is increased, the per-image levels of motion blur, lens aberrations, etc. must be correspondingly reduced, or else one reaches a point of diminishing returns where adding more pixels becomes a complete waste. If you don't believe me, try extending your comparison by adding 4x, 8x, 16x, and 32x images to your lineup, and you'll discover exactly what I mean.
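The cascade Jonathan is describing, sketched with the usual rule of thumb that system MTF at a given frequency is roughly the product of the component MTFs (an approximation, not an exact law; the function name is invented for the example):

Code:
def system_mtf(*components):
    # Multiply the component MTFs (motion blur, pixel aperture, lens, etc.).
    out = 1.0
    for m in components:
        out *= m
    return out

# Motion-blur term held at 50% while the pixel term varies: big effect.
for pixel_mtf in (0.25, 0.50, 0.75):
    print(f"motion 0.50 x pixel {pixel_mtf:.2f} -> {system_mtf(0.50, pixel_mtf):.3f}")

# Motion-blur term at 5%: varying the pixel term barely matters.
for pixel_mtf in (0.25, 0.50, 0.75):
    print(f"motion 0.05 x pixel {pixel_mtf:.2f} -> {system_mtf(0.05, pixel_mtf):.3f}")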
Title: larger sensors
Post by: Ray on January 02, 2007, 02:22:04 am
Quote
And as I stated previously, cropping an image from a 36x36mm sensor to 2:3 aspect ratio is no different than an uncropped image from a 24x36mm sensor. I'm well aware that a 36x36mm sensor would go outside the designed image circle of 35mm-format lenses, but at least one would have the choice of how much of the image circle to use.
[a href=\"index.php?act=findpost&pid=93230\"][{POST_SNAPBACK}][/a]

Okay! I understand your point, Jonathan. But let's be realistic. Companies selling products cannot afford to go to the market on the basis that, 'We know in certain circumstances our images have lousy performance in the corners, but at least we're giving you more choice.'

Perhaps the same principle applies to autofocussing at f8. Why do the 20D and 5D not have this facility? Some folks, apparently, achieve it by taping over the pins. Can we expect Canon to provide an inferior autofocussing system at f8, and then explain in the manual that autofocussing at f8 is only accurate in exceptionally good light?

Can we expect Canon to design a 36x36mm sensor and then apologise for the fact that many of their lenses will exhibit unacceptable performance in the corners? I think not.
Title: larger sensors
Post by: Jonathan Wienke on January 02, 2007, 02:32:22 am
Quote
Can we expect Canon to design a 36x36mm sensor and then apologise for the fact that many of their lenses will exhibit unacceptable performance in the corners? I think not.

I'm not going to hold my breath. But that doesn't mean I wouldn't like to acquire such a camera if they did. I'd probably buy a G7 if they added RAW support, too. But they probably won't. Such is life. And why I bought an Olympus instead.
Title: larger sensors
Post by: Ray on January 02, 2007, 03:08:18 am
Quote
I'm not going to hold my breath. But that doesn't mean I wouldn't like to acquire such a camera if they did. [a href=\"index.php?act=findpost&pid=93239\"][{POST_SNAPBACK}][/a]

If they did, it would be a radical design change, not only of sensor, but of camera body proportions to accommodate the larger mirror. Their existing wide-angle lenses, which are not Canon's strong point, would be shown in an even worse light. Their TS-E 24mm would be ridiculously poor at the extremities of shift; customers would be screaming for better quality lenses and the executive director who made the decision to go for a 36x36mm sensor would be fired.

Need I say more   .
Title: larger sensors
Post by: eronald on January 02, 2007, 07:11:50 am
Ray,

 I am not quite ready to agree here. Let us look at this more closely -
 
Take a fixed point of light at image distance. Open and close the shutter. The shake is materialised by the track exposed on the sensor due to camera movement in the time the shutter was open.

My assertions:

1. What matters for the photographer is the measured metric length of the printed track on the enlargement (centimetres), or the ratio of that measured length to the size of the print, but not the number of pixels in there. So, let us choose some fixed printed size, eg. 8x10. Then we see that what matters to determine shake is the enlargement factor of this track to print size, not the absolute resolution.

2. Going to a crop-frame camera (with a reduction in lens focal length) will up the enlargement factor and increase the effects of shake (on fixed-sized prints). Large-format cameras should display less shake effects when fixed-size results are compared, eg. 8x10.

Now it's time for my morning coffee.

Edmund
Quote
Bill,
Of course
If you accept that it would be adequate, then do you agree that using a higher resolving sensor will require an increase in shutter speed in proportion to the increase in resolution which, in the case of a doubling of pixel count on the same size sensor, amounts to a 1.4x increase?
[a href=\"index.php?act=findpost&pid=93201\"][{POST_SNAPBACK}][/a]
Title: larger sensors
Post by: bjanes on January 02, 2007, 07:58:48 am
Quote
Bill,
I think most of us who have taken a few thousand shots over the years get a 'feel' for the shutter speed necessary for a hand-held tack sharp image. I agree that 1/35mmFL does not pass muster. But there's no reason to suppose that a 24mp 35mm sensor will have higher noise, on a pixel for pixel basis, than the current 20D or 30D. ISO 800 should therefore be very usable with insignificant loss of resolution due to image degradation or in-camera noise reduction, and on the same size enlargements, noise should be actually less than that from the 30D.

[a href=\"index.php?act=findpost&pid=93227\"][{POST_SNAPBACK}][/a]

What is tack sharp to you may not be so to others. Most serious landscape photographers use a tripod in nearly all cases and do not rely on hand holding. Why do you think this is? Each doubling of ISO halves the dynamic range and there is no such thing as a free lunch.
Title: larger sensors
Post by: John Sheehy on January 02, 2007, 08:42:50 am
Quote
What is tack sharp to you may not be so to others. Most serious landscape photographers use a tripod in nearly all cases and do not rely on hand holding. Why do you think this is? Each doubling of ISO halves the dynamic range and there is no such thing as a free lunch.
[a href=\"index.php?act=findpost&pid=93261\"][{POST_SNAPBACK}][/a]

Well, that depends on the camera (and your definition of DR).  In an ideal camera, where shot noise is the only noise (and there is no quantization), ISO/exposure follows the simple rule (one less stop of DR for each doubling of exposure index).  In real cameras, it can go that way, or it can be like the recent Canons, where ISO 1600 has as little as 1.5 stops less DR than ISO 100; at least that is the difference between 1:1 S:N ratios at ISOs 100 and 1600.

I would venture to say that many people who have cameras that have less absolute noise at the higher ISO don't understand the implications, and under-expose at the lowest ISO rather than ETTR at a higher ISO, which would give less noise.  If lighting permits, of course, ETTR is better yet at the lower ISOs, but some cameras have compromised highlights at their lowest ISO due to the manufacturer trying to force ISO 100 or 50 when the camera is really only capable of 120 or 70.
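The "one stop of DR per ISO doubling" behaviour of an idealised, shot-noise-only sensor, sketched below with an assumed 60,000 e- full well (the figure and the function name are illustrative assumptions only). Real cameras whose read noise falls at high ISO give up much less, which is John's point about the recent Canons.

Code:
import math

def ideal_dr_stops(full_well_e, iso, base_iso=100, noise_floor_e=1.0):
    # Raising ISO scales the usable full well down proportionally;
    # the floor here is a fixed 1 e- (pure shot-noise idealisation).
    usable_well = full_well_e * base_iso / iso
    return math.log2(usable_well / noise_floor_e)

for iso in (100, 200, 400, 800, 1600):
    print(iso, f"{ideal_dr_stops(60000, iso):.1f} stops")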
Title: larger sensors
Post by: John Sheehy on January 02, 2007, 09:02:49 am
Quote
And the bigger the photosites are, the fewer of them will be impacted by the Airy disk.

But those fewer pixels occupy a greater total percentage of sensor area.

Quote
If diffraction is significant enough that it is covering multiple pixels,

My argument does not depend on the Airy disk.  The Airy disk is my concession about a point of light possibly having width on the focal plane.  You originally said that the width of the path of a point of light increases in proportion to the length, in pixels, in a higher-MP sensor of the same size.

Quote
putting a higher-resolution sensor under the lens means the Airy disk will cover more pixels than before.

Again, at a smaller percentage of total sensor (image) area!

Quote
And if the Airy disk is smaller than a single pixel, then diffraction isn't really a relevant consideration, as the main factor limiting resolution is the pixel count of the sensor, not the lens. What exactly is your point?
[a href=\"index.php?act=findpost&pid=93228\"][{POST_SNAPBACK}][/a]

My point is that given the same camera shake, with the same sensor frame size and same lens, the higher-MP camera will have its image less damaged by the motion.  They don't call increased MP "more resolution" for nothing.  More MPs means more ability to resolve (avoid confusion).

Again, diffraction is a concession about width of a streak of light from a point source; not a contingency for my argument.  A true width-less line will touch more pixels, true, but they are smaller pixels, and represent less of the total area of the frame.
Title: larger sensors
Post by: eronald on January 02, 2007, 09:03:28 am
Look, I know little (comparatively) about the scientific aspects of digital imaging, but I do take pictures in bursts of a few hundred, handheld, in bad light, when I'm at the Paris fashion shows.
But the only camera I've ever had real shake problems with is the Leica M8 in daylight - go figure - where I'm routinely doubling or more the speed I'd use on a full-frame SLR.

Edmund
Title: larger sensors
Post by: John Sheehy on January 02, 2007, 09:26:01 am
Quote
They do no such thing. You are looking at image blur that is the result of two factors, and incorrectly calling the combined result "motion blur".

Yes, the analog blur before being binned by photosites is always there, but the spatial sampling is what makes or breaks it.  You will never get a break on the effects of motion blur by having fewer and larger pixels; they will always make things somewhere from slightly to horrendously worse.  The notion that more and smaller pixels requires more steady technique is nonsense; it is only true if you're going to make a MP-equivalent crop of the low-MP version in the high-MP camera, and expect them to compete at the same viewing size.

Quote
You have two resolution-reducing factors at work in your test shots: pixel-size blur, and motion blur. When you combine them together, you get an effect similar to combining lens and film MTF to get a system MTF. If you change film, system MTF will be changed, but lens MTF is not changed. In exactly the same way, changing sensor pixel size affects pixel-size blur, but that has no effect on the motion blur itself, only the combined mix of motion blur and pixel-size blur.

We never see the motion blur by itself.  We always see it through the eyes of pixels.

Quote
If you extend your comparisons, your assertion falls apart completely. At .125X, the pixel-size blur would be so great that the motion blur would be completely insignificant.

Nothing falls apart.  My point is that higher resolution can salvage motion blur in some cases, and it *NEVER* hurts, at the image level.

In my examples, all the arcs were clearly resolved at all intermediate resolutions.  Perhaps, in retrospect, I should have displayed them.

Quote
And the difference between 4x and 8x and beyond would be negligible, because the motion blur is now the primary resolution limiter, and throwing more pixels into the mix won't change that at all. So to take advantage of the extra pixels, one must reduce motion blur to the same degree that pixel size is decreased.

In order for a digital image to appear sharp, pixel-size blur must be the primary limitation on resolution. When pixel count is increased, the per-image levels of motion blur, lens aberrations, etc. must be correspondingly reduced, or else one reaches a point of diminishing returns where adding more pixels becomes a complete waste. If you don't believe me, try extending your comparison by adding 4x, 8x, 16x, and 32x images to your lineup, and you'll discover exactly what I mean.
[a href=\"index.php?act=findpost&pid=93237\"][{POST_SNAPBACK}][/a]

You have asked me what my point is, and I have answered.  I still don't know what your point is.  Your point seems to be that in some cases, the higher-MP doesn't help.  So what?  There are always common denominators.  We could have the lens set to f/81, and all the extra resolution will be mostly wasted.  What relevance does that have to using a sharp lens at f/8?
Title: larger sensors
Post by: Jonathan Wienke on January 02, 2007, 12:27:40 pm
Quote
My point is that higher resolution can salvage motion blur in some cases, and it *NEVER* hurts, at the image level.

I agree with the second point, but your first is completely incorrect. Sensor resolution NEVER alters the degree of blur imparted to the image by camera shake, lens aberration, etc. When you have 2 approximately equal blur-inducing factors at work in the same image, reducing the degree of one factor can increase overall image resolution, but that does NOT mean that you're actually reducing both blur factors.

Quote
In my examples, all the arcs were clearly resolved at all intermediate resolutions.  Perhaps, in retrospect, I should have displayed them.
You have asked me what my point is, and I have answered.  I still don't know what your point is.  Your point seems to be that in some cases, the higher-MP doesn't help.

My point is that you are making several fraudulent claims:

1: Higher-resolution cameras are less susceptible to motion blur than lower-resolution cameras, all else equal.

2: Motion blur has less of an effect on an image as sensor resolution increases.

Both of these are demonstrably false. Let's conduct a little thought experiment, and perhaps you'll finally understand your error. Imagine a tripod-mounted camera set up next to the finish line of a race track with a 100-degree FOV. It is aimed perpendicular to the track at the finish line, and is triggered by a motion sensor so that as a car crosses the finish line (which is centered in the FOV), the shutter fires. The shutter speed is chosen such that the car travels through 1 degree of the camera's FOV during exposure.

We start out with a 1000-pixel-wide sensor. The motion blur of the car is 10 pixels long, and if one makes an 8x10 print, the blur is 0.1 inches long on the paper.
[attachment=1452:attachment]

Now we substitute a 2000-pixel-wide sensor. The motion blur is now 20 pixels long. When we make our 8x10 print, the blur is still 0.1 inches long. While the stationary background of the image is noticeably clearer due to the increased sensor resolution, the car itself is not resolved with significantly more detail than in the print made from the 1000-pixel sensor. Why? Because the motion blur is the primary resolution-limiting factor, not the pixel count of the sensor.
[attachment=1453:attachment]

If you compare my two sample images, you'll note that the car is virtually identical between them. The length of the motion blur is identical, the difficulty of reading the lettering on the car is identical; overall the differences are very subtle. The only real improvement to be found is in the horizontal lines of the car (top/bottom of the window, grille, etc.), which are not affected by the motion blur. Other than that, the additional sensor resolution has not helped resolve the car any better, because the motion blur is a much more significant factor than the pixel-size blur in both images.
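Jonathan's race-car numbers, reproduced directly in a small sketch (the function name and the 10-inch print width are assumptions for the example): blur is expressed as a fraction of the horizontal field of view, then as pixels and as inches on a fixed-width print.

Code:
def blur_trail(sensor_px_wide, fov_deg, blur_deg, print_width_in=10):
    # Length of the blur trail in sensor pixels and on a print of fixed width.
    frac = blur_deg / fov_deg
    return frac * sensor_px_wide, frac * print_width_in

print(blur_trail(1000, 100, 1))   # (10.0 px, 0.1 in)
print(blur_trail(2000, 100, 1))   # (20.0 px, 0.1 in)

The print-space length of the blur is fixed by the motion and the FOV; only the number of pixels spanning it changes.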

Quote
The notion that more and smaller pixels requires more steady technique is nonsense; it is only true if you're going to make a MP-equivalent crop of the low-MP version in the high-MP camera, and expect them to compete at the same viewing size.

It's far from nonsense, it's inescapable physics. The whole point of increasing sensor resolution is to capture more overall image detail. When camera shake, motion blur, focus errors, or lens aberrations degrade resolution to a greater degree than pixel-size blur, the advantage of adding additional sensor pixels is compromised or for all practical intents eliminated. The smaller your sensor pixels are, the easier it is for other blurring factors to overwhelm pixel-size blur and degrade image quality to something far less than it could otherwise be.

That is why people who compared the 1Ds and 1Ds-MkII generally found less of an image quality advantage by upgrading to the 1Ds-MkII than might be expected solely from the difference in pixel count and the generational improvement of the 1Ds-II's sensor. The 11MP 1Ds is already quite demanding on most available lenses, and a lens that struggles to satisfy the 1Ds has an even more difficult time meeting the demands of the 1Ds-MkII. When aberrations, whatever their cause, are > 1 pixel in size, there is very little benefit to throwing more pixels at the problem. The more significant the aberrations are, the less significant the benefit derived from the extra pixels will be. And the smaller the pixels are, the easier it is for the aberrations to negate the benefits of additional pixels.

If your assertions were correct, then handheld MFDBs would be in common use for shooting action sports. Perhaps you should consider why smaller, lower-resolution formats are most commonly used for such tasks.
Title: larger sensors
Post by: BJL on January 02, 2007, 12:52:28 pm
Quote
BJL ( is that your name ?)

However, simple geometry does indicate that when cropping a given sensor you will worsen the effects of camera shake, and equally the effects of noise as you thereby increase print magnification.
[a href=\"index.php?act=findpost&pid=93190\"][{POST_SNAPBACK}][/a]
Firstly, as to my name: I am perhaps paranoid about broadcasting my real name too much on the SPAM infested internet, but since you sort of asked, my first name is Brenton.

About shake, simple geometry suggests to me that the effect of shake is measured by the ratio of the angular movement during the exposure time to the angular FOV. So of course if you crop to a narrower FOV and use the same exposure time, you expect shake effects to be more visible. But I would not expect any increase in visible camera shake blurring from using a smaller format to record an image covering the same angular FOV (e.g. using a focal length and sensor that are smaller in the same proportion). That is why I asked about details like the focal lengths used with the two cameras.

Unless of course one allows for the greater moment of inertia (due to greater size and weight) of the 1DsMkII --- but weight can easily be added to a camera.
Title: larger sensors
Post by: eronald on January 02, 2007, 03:22:47 pm
With due respect, Jonathan, I believe you are choosing a bad example with the 1Ds successors.

My opinion is that the sensor, postprocessing and focus of the 1DsII was botched, to the extent that it's much worse than the 1Ds. A better comparison would be something like the 20D and 60D. I put my money where my mouth is: I don't take the 1DsII to fashion shows in spite of its nominally higher ISOs because shooting them side by side showed me that the 1Ds is more usable in practice.

Edmund

Quote
It's far from nonsense, it's inescapable physics. The whole point of increasing sensor resolution is to capture more overall image detail. When camera shake, motion blur, focus errors, or lens aberrations degrade resolution to a greater degree than pixel-size blur, the advantage of adding additional sensor pixels is compromised or for all practical intents eliminated. The smaller your sensor pixels are, the easier it is for other blurring factors to overwhelm pixel-size blur and degrade image quality to something far less than it could otherwise be.

That is why people who compared the 1Ds and 1Ds-MkII generally found less of an image quality advantage by upgrading to the 1Ds-MkII than might be expected solely from the difference in pixel count and the generational improvement of the 1Ds-II's sensor. The 11MP 1Ds is already quite demanding on most available lenses, and a lens that struggles to satisfy the 1Ds has an even more difficult time meeting the demands of the 1Ds-MkII. When aberrations, whatever their cause, are > 1 pixel in size, there is very little benefit to throwing more pixels at the problem. The more significant the aberrations are, the less significant the benefit derived from the extra pixels will be. And the smaller the pixels are, the easier it is for the aberrations to negate the benefits of additional pixels.

If your assertions were correct, then handheld MFDBs would be in common use for shooting action sports. Perhaps you should consider why smaller, lower-resolution formats are most commonly used for such tasks.
[a href=\"index.php?act=findpost&pid=93290\"][{POST_SNAPBACK}][/a]
Title: larger sensors
Post by: Ray on January 02, 2007, 05:35:59 pm
Quote
Going to a crop-frame camera (with a reduction in lens focal length) will up the enlargement factor and increase the effects of shake (on fixed-sized prints). Large-format cameras should display less shake effects when fixed-size results are compared, eg. 8x10.
Edmund
[a href=\"index.php?act=findpost&pid=93257\"][{POST_SNAPBACK}][/a]

Edmund,
This is what I also believe to be broadly true. The question has often been asked on this forum, 'Does the 1/FL rule apply to cropped format cameras such as the D60, 10D etc?' The answer has always been (from Michael R as well), no. The rule becomes 1/1.6FL or 1/35mmFL where 35mmFL is the 'effective' focal length in 35mm terms.

However, there is an assumption in this answer that needs to be spelled out, and a few minor discrepancies resulting from different pixel densities in cameras being compared. The faster than 1/FL shutter speed with the smaller format is only required if the intention is to enlarge the smaller format image to the same size as the larger format image.

I would imagine if different size prints are compared that represent the native resolution of both cameras (ie. neither uprezzing nor downrezzing has taken place for printing purposes), then (for same FoV scenes) in situations where a 1/80th shutter speed would be appropriate for a 1Ds2 with 80mm lens, a 1/50th shutter speed would be appropriate for a D60 with 50mm lens.

I should add, for the benefit of Bill Janes, that the question as to whether or not the 1/FL rule is adequate for sharp images is quite irrelevant. I use it purely for illustrative purposes. One has to have a reference point. Make it 1/4FL if you like.

The question for me is, having established an adequate shutter speed for a tack sharp image with a given lens and format, how much faster does that shutter speed need to be if we increase the pixel count, and print size in proportion, but keep the format and lens the same?

My view is, we should increase the shutter speed in proportion to the increased resolving power of the sensor, but not for equal size prints, but for prints that express the native resolution of both sensors, at a ppi sufficient to convey the maximum detail to the paper, that can be detected by anyone with normal vision from a 'reading' distance.
Title: larger sensors
Post by: eronald on January 02, 2007, 05:57:48 pm
Establish a trace length on the enlarged print that corresponds to the max allowable shake. Then backtrack from there?

Edmund

Quote
The question for me is, having established an adequate shutter speed for a tack sharp image with a given lens and format, how much faster does that shutter speed need to be if we increase the pixel count, and print size in proportion, but keep the format and lens the same?

My view is, we should increase the shutter speed in proportion to the increased resolving power of the sensor, but not for equal size prints, but for prints that express the native resolution of both sensors, at a ppi sufficient to convey the maximum detail to the paper, that can be detected by anyone with normal vision from a 'reading' distance.
[a href=\"index.php?act=findpost&pid=93352\"][{POST_SNAPBACK}][/a]
Title: larger sensors
Post by: bjanes on January 02, 2007, 06:05:22 pm
Quote
Edmund,

I should add, for the benefit of Bill Janes, that the question as to whether or not the 1/FL rule is adequate for sharp images is quite irrelevant. I use it purely for illustrative purposes. One has to have a reference point. Make it 1/4FL if you like.

The question for me is, having established an adequate shutter speed for a tack sharp image with a given lens and format, how much faster does that shutter speed need to be if we increase the pixel count, and print size in proportion, but keep the format and lens the same?

My view is, we should increase the shutter speed in proportion to the increased resolving power of the sensor, but not for equal size prints, but for prints that express the native resolution of both sensors, at a ppi sufficient to convey the maximum detail to the paper, that can be detected by anyone with normal vision from a 'reading' distance.
[a href=\"index.php?act=findpost&pid=93352\"][{POST_SNAPBACK}][/a]

I would agree with Ray here. However, as pointed out in the Stanford article, the ability to increase the shutter speed may require higher ISO and this may be limited by small pixels. This is usually not a problem with daylight, but it is with available light exposure.

BTW, the linear blur discussed in the article can be reduced with Photoshop's smart sharpen filter, but the random walk type of blur with longer exposure times is more problematic.

There is a trade-off between pixel size, dynamic range and noise, especially at high ISO. Ray's 25 MP 5 um pixel full frame camera would be great for landscapes but not so good for available light work. Presently, according to most surveys, most 35 mm type digital users are more interested in low noise, high DR, and good high ISO performance. Canon will introduce Ray's camera when they can control the noise, IMHO. I still maintain that the use of a tripod would be necessary to take advantage of the increased resolution, but a tripod would not be necessary to have the same detail as with a lower MP camera.

[a href=\"http://www.clarkvision.com/imagedetail//does.pixel.size.matter2/]Roger Clark[/url] has posted an addendum to his post on pixel size. It demonstrates some of these topics with pictures of a night scene.

Bill
Title: larger sensors
Post by: Ray on January 02, 2007, 09:52:44 pm
Quote
There is a trade-off between pixel size, dynamic range and noise, especially at high ISO. Ray's 25 MP 5 um pixel full frame camera would be great for landscapes but not so good for available light work. [a href=\"index.php?act=findpost&pid=93362\"][{POST_SNAPBACK}][/a]

I don't see it, Bill. It's almost as though you are saying all technological development with regard to improved image quality has come to an end and that more pixels just means more read-noise.

We know with the 400D (if Canon is to be believed) that they reduced the gap between microlenses, which results in a greater amount of light reaching the individual photodiodes than would otherwise take place. It's not clear if the 400D microlenses are the same size as the 30D microlenses, as a result of reducing that gap, or just closer to that size than they otherwise would be.

Nor is it clear if the actual photodiodes themselves are the same size or smaller. As you know, pixel pitch is always considerably larger than photodiode size on a CMOS sensor. The first Canon DSLR, the D30, had a pixel pitch of around 10 microns, but a photodiode size of only 5.25 microns. The rest of the space was presumably taken up with on-chip processors.

Unless you are really 'in the know', or a research scientist at one of Canon's laboratories, it cannot be clear what improvements are potentially there to be made. When Canon announced they had reduced the microlens gap in the 400D, I was surprised because I had assumed the gap was already as small as it could be.

For all I know, when Canon introduce their 24, 25 or 22mp FF 35mm sensor, they might also claim an improved dynamic range over previous models, at ISO 50, due to increasing the actual size of the photodiode and reducing the size of the on-chip processors, or sticking the on-chip processors on the other side of the chip, or creating a separate chip for all, or some of the processing.

Of course, it almost goes without saying, if you want increased dynamic range in a system that is mostly limited by photonic noise, it has to be through increased exposure. You can't have increased dynamic range as well as faster shutter speeds. No matter how many pixels are on the sensor, the sensor as a whole receives the same amount of light for a given exposure at a given f stop.

I'll have to edit this in case someone tries to argue that a 200mm lens at f8 lets more light pass for a given exposure than a 50mm lens at f8   . I am of course referring to a situation of equal size sensors, ie. equal formats.
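
To put a rough number on that (a back-of-the-envelope sketch with an assumed flux, not measured data): however the frame is divided into pixels, the total photon count for a given exposure and f-stop is fixed by the sensor area.

# Total light for a given exposure and f-stop is set by sensor area,
# not by how many pixels the area is split into (illustrative numbers only).
flux = 1.0e4                        # photons per square micron (assumed)
sensor_area_um2 = 36_000 * 24_000   # a 36x48mm-class frame? no: a 36x24mm frame in square microns
total_photons = flux * sensor_area_um2

for megapixels in (12, 25, 50):
    per_pixel = total_photons / (megapixels * 1e6)
    print(megapixels, "MP ->", per_pixel, "photons per pixel")
# Each pixel gets fewer photons as the count rises, but the sum over the
# whole sensor never changes.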
Title: larger sensors
Post by: bjanes on January 02, 2007, 11:12:18 pm
Quote
I don't see it, Bill. It's almost as though you are saying all technological development with regard to improved image quality has come to an end and that more pixels just means more read-noise.

[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=93390\")

The physics of CMOS and CCD is very well understood. Improvements will be made, but physical limits are being approached. Current Canon sensors have read noise of about 3-4 electrons and microlens technology is relatively advanced. The Poisson distribution still applies to photon sampling. Fill factors are relatively high. There could be a two fold or greater improvement in quantum efficiency. Finally, the improvements made for small pixels will also apply to large pixels. Pixel size does matter.

Large and small pixels have similar read noise, but the effect on the image is much greater with small pixels because the large pixel accumulates many more electrons (i.e. the large pixel has greater gain).

Why don't you read Roger Clark's article?

[a href=\"http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary/]http://www.clarkvision.com/imagedetail/dig...rmance.summary/[/url]
Title: larger sensors
Post by: Ray on January 02, 2007, 11:44:30 pm
Quote
The physics of CMOS and CCD is very well understood.

It is indeed, Bill. But by whom? That's the question? Certainly not by me, and without intending to offend you, probably not by you, or Roger Clark.

Since I first started reading LL, and other similar forums, I've come across a litany of so-called experts claiming that fewer big pixels are better than more small pixels. (There must be a grammatically neat way of expressing that   ).

They've mostly proved to be wrong, with the passage of time. A 10D pixel, a 20D pixel and a 1Ds2 pixel is better than a 1Ds, D60 or D30 pixel, from the point of view of all the things that count with regard to image quality. Is this not true?
Title: larger sensors
Post by: Ray on January 03, 2007, 01:40:13 am
Quote
Current Canon sensors have read noise of about 3-4 electrons and microlens technology is relatively advanced. [a href=\"index.php?act=findpost&pid=93399\"][{POST_SNAPBACK}][/a]

A quick calculation tells me a 35mm sensor with a pixel pitch of 5.25 microns contains around 30mp. Microlenses may not be required, Bill.

How does this grab you? No microlens. No AA filter. On one side of the chip nothing but wall-to-wall photodiodes 5 microns in diameter. On the reverse side of the chip, nothing but analog amplifiers, A/D converters and processors of various types and function. Somewhere else, the most powerful computer and processing algorithms to date, in a Canon camera.
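
For anyone checking that quick calculation, the arithmetic is simply (my own sketch, assuming the full 36x24mm frame):

# 36x24mm frame at a 5.25 micron pixel pitch
cols = 36_000 / 5.25      # about 6857 pixels across
rows = 24_000 / 5.25      # about 4571 pixels down
print(cols * rows / 1e6)  # about 31 megapixels, i.e. "around 30mp"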
Title: larger sensors
Post by: eronald on January 03, 2007, 01:48:34 am
Note the way the pixels work is changing - on the recent Canons, pixel values are the delta between the start and end of exposure.

As to who knows about this ? I guess any of us engineers could go to an imaging systems design conference and get the latest info. All the companies are eager to publish some details of the chips they put out, once the design is finished, if only as a recruiting tool.

Edmund


Quote
It is indeed, Bill. But by whom? That's the question? Certainly not by me, and without intending to offend you, probably not by you, or Roger Clark.

Since I first started reading LL, and other similar forums, I've come across a litany of so-called experts claiming that fewer big pixels are better than more small pixels. (There must be a grammatically neat way of expressing that   ).

They've mostly proved to be wrong, with the passage of time. A 10D pixel, a 20D pixel and a 1Ds2 pixel is better than a 1Ds, D60 or D30 pixel, from the point of view of all the things that count with regard to image quality. Is this not true?
[a href=\"index.php?act=findpost&pid=93403\"][{POST_SNAPBACK}][/a]
Title: larger sensors
Post by: BJL on January 03, 2007, 01:12:09 pm
Quote
Large and small pixels have similar read noise, but the effect on the image is much greater with small pixels because the large pixel accumulates many more electrons (i.e. the large pixel has greater gain).
[a href=\"index.php?act=findpost&pid=93399\"][{POST_SNAPBACK}][/a]
You assume that sensor dynamic range will always be limited by a maximum recordable signal (photo-electron count), limited in turn by maximum well capacity, with this limit higher with photo-sites of larger area. Maybe so, but this is far from certain for all possible sensor technologies, which might be able to make maximum photo-electron counts irrelevant beyond a quite modest photo-site size.

One current possibility is technologies like Fujifilm's SuperCCD SR, expanding highlight headroom beyond the standard well capacity limits (of about 1000e per square micron). Yes, larger sensors can expand it even more, but the law of diminishing returns sets in once a given photo-site size gives more than enough DR for almost any situation.

Another possibility is that a future sensor could read and reset electron wells multiple times during an exposure, eliminating any maximum well capacity. Perhaps some version of progressive scan as used for video, but at far higher than normal video frame-rates.

Another idea, better for keeping shadow noise levels down, is already implemented in sensors for video surveillance cameras. Roughly, a sensor could allow highlight photo-sites to fill up, measure the time taken for such photo-sites to fill, and extrapolate to the photo-electron count that would have occurred over the full exposure time.
If this full well count is at least as high as the signal given by mid-tones in the Canon 5D at ISO 100, about 5,000 electrons, the S/N ratio at such a photo-site is dominated by photon shot noise and is about 70:1 or better. This is easily enough to avoid visible noise, as indicated by the lack of noise problems in mid-tones from the 5D at ISO 100. At a rough estimate, 5,000e capacity is possible with pixel pitch of under 2.7 microns. (The 5.4 micron photo-sites of the Olympus E-500 sensor have capacity 25,000e, so 2.7 microns should be capable of 1/4 that.) That pixel size would be enough for over 50MP in DX format, and probably enough to put the resolution limits entirely on SLR lenses.
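
A rough check of those figures (my own arithmetic only, assuming well capacity scales with photo-site area and a DX frame of roughly 23.7x15.7mm):

from math import sqrt

print(sqrt(5_000))                            # ~70.7, the quoted ~70:1 S/N at 5,000e
print(25_000 * (2.7 / 5.4) ** 2)              # 6,250e: the E-500 well scaled down by area
print((23_700 / 2.7) * (15_700 / 2.7) / 1e6)  # ~51 MP at 2.7 micron pitch in DX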

I believe that the surveillance video sensors repeatedly check each photo-site to see if it is close to full at 1/2, 1/4, 1/8 etc. of the full exposure time: near full photo-sites are read and A/D converted right at the photo-site, adjusting for the reduced exposure time by bit shifts.
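
A toy version of that scheme, as I read the description (my own sketch, not any actual sensor's logic, and ignoring shot noise and quantisation): a photo-site that would overflow is read at 1/2, 1/4, 1/8... of the exposure and the count is scaled back up by a bit shift.

FULL_WELL = 25_000             # electrons (assumed)
NEAR_FULL = 0.9 * FULL_WELL

def effective_count(rate, exposure):
    """rate: electrons per second the scene would deliver to this photo-site.
    Returns the extrapolated count for the full exposure time."""
    shift, t = 0, exposure
    while rate * t > NEAR_FULL:        # would overflow: halve the read time
        t /= 2
        shift += 1
    return (rate * t) * (2 ** shift)   # bit-shift back to the full exposure

# A highlight several times brighter than the well could hold is still recorded:
print(effective_count(rate=200_000, exposure=1.0))   # 200000.0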


Such approaches could essentially allow arbitrarily low ISO speeds for good shadow handling. That leaves shutter speed limits determined mainly by maximum usable aperture diameter, which is indirectly related to format size through optical limitations on how low aperture ratios can be.
Title: larger sensors
Post by: Morgan_Moore on January 03, 2007, 01:12:40 pm
A car is traveling at one meter per second

An exposure is made of one second

The car has one meter of blur

Irrelevant of capture method / size of sensor / length of thread etc

With a higher resolution you will be able to appreciate the blur in greater detail

Tell me I am wrong ?    
Title: larger sensors
Post by: bjanes on January 03, 2007, 04:25:35 pm
Quote
It is indeed, Bill. But by whom? That's the question? Certainly not by me, and without intending to offend you, probably not by you, or Roger Clark.

Since I first started reading LL, and other similar forums, I've come across a litany of so-called experts claiming that fewer big pixels are better than more small pixels. (There must be a grammatically neat way of expressing that   ).

They've mostly proved to be wrong, with the passage of time. A 10D pixel, a 20D pixel and a 1Ds2 pixel is better than a 1Ds, D60 or D30 pixel, from the point of view of all the things that count with regard to image quality. Is this not true?
[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=93403\")

No offense taken, Ray, since I do not claim to be an expert. At least I have the sense to listen to those who know more than I do. Roger Clark is an imaging expert, having received his PhD in planetary science from MIT and having worked on NASA imaging probes. In addition, he has published 179 peer reviewed scientific papers.

[a href=\"http://www.clarkvision.com/rnc/index.html]Bio Roger N Clark[/url]

Also, the illustrious host of this site has written a good essay on this subject. He is not an engineer, but he does have the ear of many experts from Thomas Knoll to the engineers at the Dalsa chip FAB, which he recently visited.

Michael's Sense & Sensor (http://www.luminous-landscape.com/essays/sensor-design.shtml)

In the other corner, we have Ray.      

Bill
Title: larger sensors
Post by: Ray on January 03, 2007, 06:17:18 pm
Quote
In the other corner, we have Ray.      
[a href=\"index.php?act=findpost&pid=93523\"][{POST_SNAPBACK}][/a]

I understand what you're implying, Bill. Accepting that someone is right because he has a stack of publications behind his name, a few awards and perhaps a PhD, is very easy. It's something that appeals to our inbred respect for authority.

However, as an Australian, I don't have much regard for authority   . There's a tradition here of throwing off the shackles of our imperial masters.

It seems I take a more philosophical view of these matters than you. The most brilliant experts in their field can often be wrong (and invariably are proved to be wrong, in the fullness of time, on at least a few issues). Even Einstein appears to have been wrong on a few counts.

We're talking here about cutting edge matters in a very specialised branch of physics. Developments at the coal face are no doubt the subject of NDAs and neither Roger Clark nor Michael may be privy to them, for all I know. At least I wouldn't too readily assume that they are.

On matters of what's possible through the application of the scientific method, I'm an optimist. I get the impression, Bill, that you are more concerned with why things can't be done.
Title: larger sensors
Post by: howiesmith on January 03, 2007, 06:31:38 pm
Quote
On matters of what's possible through the application of the scientific method, I'm an optimist.

[a href=\"index.php?act=findpost&pid=93534\"][{POST_SNAPBACK}][/a]

I have found the difference between optimists and pessimists is pessimists have better data.

Ray, are you saying you have a good chance of being right because you are ignorant?
Title: larger sensors
Post by: Ray on January 03, 2007, 07:10:26 pm
Quote
Ray, are you saying you have a good chance of being right because you are ignorant?
[a href=\"index.php?act=findpost&pid=93536\"][{POST_SNAPBACK}][/a]

No, I'm saying (or at least implying) that we are all ultimately ignorant, but some more than others. What is perhaps more important is knowing that you don't know rather than believing that you do know.

Of course it can get a bit complicated because it's possible to believe that you don't know when in fact you do know. You just didn't know that you knew.
Title: larger sensors
Post by: howiesmith on January 03, 2007, 07:39:52 pm
Quote
No, I'm saying (or at least implying) that we are all ultimately ignorant, but some more than others. What is perhaps more important is knowing that you don't know rather than believing that you do know.

Of course it can get a bit complicated because it's possible to believe that you don't know when in fact you do know. You just didn't know that you knew.
[a href=\"index.php?act=findpost&pid=93540\"][{POST_SNAPBACK}][/a]

So, are you saying that the less one knows (more ignorant), and the less one knows he knows (the more ignorant he knows he is), the better his chances of being right?

There is plenty I don't know, and plenty I do know.  The best engineers I ever met were the ones that knew enough to tell the difference between "smoke and mirrors" and what might be true.  And were educated enough to determine the difference.

FYI, Einstein was sometimes wrong, and he knew it.  He was just right enough, within the assumptions he made and stated.  Newton was sometimes wrong but plenty close enough for day to day stuff.  Knowing what can be neglected and still get useable results is important.

"Accepting that someone is right because he has a stack of publications behind his name, a few awards and perhaps a PhD, is very easy."  This can be dangerous, but a safer bet than a hipshot from an uneducated Aussie, even with an inquiring mind.  I should point out I have never met nor worked with a real Aussie, so I'm just speculating.
Title: larger sensors
Post by: Ray on January 03, 2007, 07:52:36 pm
Quote
So, are you saying that the less one knows (more ignorant), and the less one knows he knows (the more ignorant he knows he is), the better his chances of being right?
[a href=\"index.php?act=findpost&pid=93545\"][{POST_SNAPBACK}][/a]

No. I'm saying that human pride and egotism can lead a person into believing he knows more than he actually does know and that the ability (or talent) to know that you don't know (rather than kid yourself that you do know, for all sorts of face-saving, status seeking, ego boosting reasons etc etc) is very much underrated.
Title: larger sensors
Post by: howiesmith on January 03, 2007, 07:58:55 pm
Quote
No. I'm saying that human pride and egotism can lead a person into believing he knows more than he actually does know and that the ability (or talent) to know that you don't know (rather than kid yourself that you do know, for all sorts of face-saving, status seeking, ego boosting reasons etc etc) is very much underrated.
[a href=\"index.php?act=findpost&pid=93547\"][{POST_SNAPBACK}][/a]

So how does Ray know the difference between smoke and mirrors and a humble but well published PhD?
Title: larger sensors
Post by: Ray on January 03, 2007, 08:21:11 pm
Quote
So how does Ray know the difference between smoke and mirrors and a humble but well published PhD?
[a href=\"index.php?act=findpost&pid=93548\"][{POST_SNAPBACK}][/a]

Ultimately, Howard, it's up to each of us to decide what makes sense, what's meaningful, what's useful, what's right or wrong. How we decide such matters is determined by our entire world experience, our education, our genetics, our environment and everything that has happened to us from the time, and including the time, that we were in the womb.

But to get back to what started this little diversion, both you and Bill seem to be implying that someone like Roger Clark has written a PhD thesis on the current state of CMOS developments in Canon's laboratories and that I, from a position of great relative ignorance, am disputing his conclusions.

Well, of course that's not true, is it. That's not what has happened. As I see it, Bill is using the credentials and comments of a qualified physicist who has a PhD gained some years ago and which is not necessarily directly relevant to cutting edge developments in CMOS imaging, in order to support what I consider to be a negative point of view with regard to future possibilities.
Title: larger sensors
Post by: bjanes on January 03, 2007, 09:16:20 pm
Quote
However, as an Australian, I don't have much regard for authority   . There's a tradition here of throwing off the shackles of our imperial masters.
[a href=\"index.php?act=findpost&pid=93534\"][{POST_SNAPBACK}][/a]

It is us Americans that have fought two wars with the imperial masters, whereas (according to the US State Department notes) you Aussies still accept the British monarch as sovereign and still have the Union Jack on your national flag. In this historical context, we are more independent minded than you.  

Contrarians are occasionally correct, but if someone offered me an opportunity to invest in his perpetual motion machine, I would respectfully decline.

Bill

BTW, from what part of Australia do you come? Also, most of us have a high regard for the British, the above difficulties notwithstanding.
Title: larger sensors
Post by: Ray on January 03, 2007, 09:50:27 pm
Quote
It is us Americans that have fought two wars with the imperial masters, whereas (according to the US State Department notes) you Aussies still accept the British monarch as sovereign and still have the Union Jack on your national flag. In this historical context, we are more independent minded than you. 

That's true, Bill. We didn't have to fight a war to gain our independence, but the psychological shackles are still there. It's interesting, perhaps comforting, that all 3 of us, the Americans, the British and the Australians, with their common heritage, are in Iraq. But very discomforting that this escapade seems a complete debacle. (Sorry! Completely off-track).

Quote
Contrarians are occasionally correct, but if someone offered me an opportunity to invest in his perpetual motion machine, I would respectfully decline.

Are you referring to the full-blooded aboriginal pictured on our $50 bill, David Unaipon, who had an obsession with perpetual motion? I see you've done your research well   .

Quote
BTW, from what part of Australia do you come? Also, most of us have a high regard for the British, the above difficulties notwithstanding.

I am British. I'm Australian as well, having emigrated here about 30 years ago.
Title: larger sensors
Post by: John Sheehy on January 03, 2007, 10:17:23 pm
Quote
A car is traveling at one meter per second

An exposure is made of one second

The car has one meter of blur

Irrelevant of capture method / size of sensor / length of thread etc

With a higher resolution you will be able to appreciate the blur in greater detail

Tell me I am wrong ?   
[a href=\"index.php?act=findpost&pid=93502\"][{POST_SNAPBACK}][/a]

Simply put, with a higher resolution, your captured blur will more closely resemble the analog blur.  A lower resolution would confuse things further, either a tiny amount or a lot.
Title: larger sensors
Post by: Ray on January 03, 2007, 11:00:27 pm
Quote
Simply put, with a higher resolution, your captured blur will more closely resemble the analog blur.  A lower resolution would confuse things further, either a tiny amount or a lot.
[a href=\"index.php?act=findpost&pid=93568\"][{POST_SNAPBACK}][/a]

I agree with John that there's no advantage in lower resolution. Four small pixels covering the same area as one large pixel, could ideally produce the same (or close to) dynamic range, at the same resolution or print size, as the one large pixel. But the four smaller pixels have the advantage, lens quality permitting, of producing higher resolution, with good light conditions.
Title: larger sensors
Post by: bjanes on January 04, 2007, 07:05:00 am
Quote
Of course, it almost goes without saying, if you want increased dynamic range in a system that is mostly limited by photonic noise, it has to be through increased exposure. You can't have increased dynamic range as well as faster shutter speeds. No matter how many pixels are on the sensor, the sensor as a whole receives the same amount of light for a given exposure at a given f stop.

[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=93390\")

No, this is a common misconception and is explained at length by Roger Clark on his website:

[a href=\"http://www.clarkvision.com/photoinfo/f-ratio_myth/index.html]http://www.clarkvision.com/photoinfo/f-ratio_myth/index.html[/url]

Other things being equal, for a given f/stop and exposure time, the number of photons per second per square micron arriving in the focal plane will be the same, but the  camera with larger pixels will collect more photons because of its larger area.
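
In code form, the same statement (my own sketch, with an assumed flux and the 8.2/2.3 micron pitches from the earlier 1D Mark II / S70 comparison):

flux = 1_000.0                         # photons per square micron, fixed by f/stop and exposure time
large_pitch, small_pitch = 8.2, 2.3    # microns
print(flux * large_pitch ** 2)         # ~67,000 photons collected by the large pixel
print(flux * small_pitch ** 2)         # ~5,300 photons collected by the small pixel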

Bill
Title: larger sensors
Post by: John Sheehy on January 04, 2007, 09:36:01 am
Quote
Why don't you read Roger Clark's article?
[a href=\"index.php?act=findpost&pid=93399\"][{POST_SNAPBACK}][/a]

I don't think anyone is contesting that the statistics for shot noise favor bigger pixels, at the pixel level.  What your interpretation of Roger's data doesn't consider is the fact that the pixel does not determine image quality.  *All* the pixels in the image, collectively, do.  Noise at a microscopic level, no matter how intense, averages out to no noise, if you're not looking through a microscope.  Noise statistics, as you are considering them, are only measurements in the z axis, and ignore the x and y axes.
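
A quick simulation of that point (my own sketch, shot noise only): averaging blocks of small pixels, which is effectively what the eye or a downsample does at normal viewing sizes, pulls the per-pixel noise down by roughly the square root of the number averaged.

import random

random.seed(1)
N_BLOCKS, BLOCK = 10_000, 9          # average 9 small pixels per "big" pixel
MEAN = 100                           # photons per small pixel (assumed)

def std(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

small = [random.gauss(MEAN, MEAN ** 0.5) for _ in range(N_BLOCKS * BLOCK)]
binned = [sum(small[i * BLOCK:(i + 1) * BLOCK]) / BLOCK for i in range(N_BLOCKS)]

print(std(small))    # about 10 (sqrt of 100): noisy at the pixel level
print(std(binned))   # about 3.3: roughly sqrt(9) lower after averaging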
Title: larger sensors
Post by: bjanes on January 04, 2007, 11:02:19 am
Quote
I don't think anyone is contesting that the statistics for shot noise favor bigger pixels, at the pixel level.  What your interpretation of Roger's data doesn't consider is the fact that the pixel does not determine image quality.  *All* the pixels in the image, collectively, do.  Noise at a microscopic level, no matter how intense, averages out to no noise, if you're not looking through a microscope.  Noise statistics, as you are considering them, are only measurements in the z axis, and ignore the x and y axes.
[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=93621\")

By your reasoning the 8MP EOS 1D Mark II (8.2 um pixels) and the 7.1MP S70 (2.3 um) should have similar noise characteristics. They do if you are shooting at ISO 100 with the S70 and ISO 1600 with the 1DM2 as Roger (http://www.clarkvision.com/imagedetail//does.pixel.size.matter2/) shows.

Of course, if you took a 1DM2 sensor and increased the pixel count such that the pixel size were 2.3 um, the noise characteristics might be similar at the same print size. However, the primary reason for more MP is to have the ability to print at larger sizes.

Bill
Title: larger sensors
Post by: Ray on January 04, 2007, 05:08:13 pm
Quote
By your reasoning the 8MP EOS 1D Mark II (8.2 um pixels) and the 7.1MP S70 (2.3 um) should have similar noise characteristics. They do if you are shooting at ISO 100 with the S70 and ISO 1600 with the 1DM2 as Roger (http://www.clarkvision.com/imagedetail//does.pixel.size.matter2/) shows.

Of course, if you took a 1DM2 sensor and increased the pixel count such that the pixel size were 2.3 um, the noise characteristics might be similar at the same print size. However, the primary reason for more MP is to have the ability to print at larger sizes.

Bill
[a href=\"index.php?act=findpost&pid=93645\"][{POST_SNAPBACK}][/a]
 

If you were to take the very same 2.3 um pixels of the S70 and spread them over a 1D2 sensor, you might well have similar noise characteristics at same size prints, but any professional level camera that attempted such a high density, large sensor would contain a lot more advanced technology than the S70. It might then be reasonable to predict that noise levels in much larger prints would also be less than (or at least equal to) that from the 8 um camera.
Title: larger sensors
Post by: John Sheehy on January 04, 2007, 05:25:47 pm
Quote
If you were to take the very same 2.3 um pixels of the S70 and spread them over a 1D2 sensor, you might well have similar noise characteristics at same size prints, but any professional level camera that attempted such a high density, large sensor would contain a lot more advanced technology than the S70. It might then be reasonable to predict that noise levels in much larger prints would also be less than (or at least equal to) that from the 8 um camera.
[a href=\"index.php?act=findpost&pid=93738\"][{POST_SNAPBACK}][/a]
For shot noise, there can't be any improvement unless quantum efficiency improves.  For read noise, however, all you have to do is have it increase less than the linear resolution does, and it should be less powerful in the image.  IOW, if you squeeze 9x as many pixels into the sensor, as long as the read noise does not increase by 3x, there should be less read noise at the image level.
Title: larger sensors
Post by: Ray on January 04, 2007, 06:01:59 pm
Quote
No, this is a common misconception and is explained at length by Roger Clark on his website:

http://www.clarkvision.com/photoinfo/f-ratio_myth/index.html (http://www.clarkvision.com/photoinfo/f-ratio_myth/index.html)

Other things being equal, for a given f/stop and exposure time, the number of photons per second per square micron arriving in the focal plane will be the same, but the  camera with larger pixels will collect more photons because of its larger area.

Bill
[a href=\"index.php?act=findpost&pid=93609\"][{POST_SNAPBACK}][/a]


I knew it! I take it you didn't see my edit, Bill. I added the last comment for the very specific purpose of avoiding this red herring. I thought I'd made it clear that  I was talking about same size sensors. Clearly a small sensor, with lens at f8, does not receive as much light as a large sensor, with lens at f8 using the same shutter speed. How could it? You don't need to refer me to Roger Clark to get that point.

Here's what I wrote a couple of pages back.

Quote
Of course, it almost goes without saying, if you want increased dynamic range in a system that is mostly limited by photonic noise, it has to be through increased exposure. You can't have increased dynamic range as well as faster shutter speeds. No matter how many pixels are on the sensor, the sensor as a whole receives the same amount of light for a given exposure at a given f stop.

I'll have to edit this in case someone tries to argue that a 200mm lens at f8 lets more light pass for a given exposure than a 50mm lens at f8. I am of course referring to a situation of equal size sensors, ie. equal formats.


This post has been edited by Ray: Yesterday, 12:11 AM
Title: larger sensors
Post by: Ray on January 04, 2007, 06:31:17 pm
Quote
For shot noise, there can't be any improvement unless quantum efficiency improves.  For read noise, however, all you have to do is have it increase less than the linear resolution does, and it should be less powerful in the image.  IOW, if you squeeze 9x as many pixels into the sensor, as long as the read noise does not increase by 3x, there should be less read noise at the image level.
[a href=\"index.php?act=findpost&pid=93742\"][{POST_SNAPBACK}][/a]

That makes sense. There are probably lots of ways of making incremental improvements, whether it be in relation to quantum efficiency or read noise, which are currently being explored in the laboratories.

I wouldn't pretend to understand what's possible and what's not. What I might see as an insurmountable obstacle might be no obstacle at all. For example, my simplistic way of viewing pixels (or photodiodes) as 3-dimensional buckets that hold electrons instead of water, might lead me to suppose that 4 small pixels covering the same area as 1 large pixel could not hold the same charge unless each small pixel had the same depth. If the small pixels do have the same depth as the single large pixel, is this an advantage or disadvantage? Perhaps there's an advantage in having a greater wall area in total, that's exposed to photons.
Title: larger sensors
Post by: bjanes on January 04, 2007, 06:56:38 pm
Quote
For shot noise, there can't be any improvement unless quantum efficiency improves.  For read noise, however, all you have to do is have it increase less than the linear resolution does, and it should be less powerful in the image.  IOW, if you squeeze 9x as many pixels into the sensor, as long as the read noise does not increase by 3x, there should be less read noise at the image level.
[a href=\"index.php?act=findpost&pid=93742\"][{POST_SNAPBACK}][/a]

Well, current Canon sensors have a read noise of about 3-4 electrons (and it does not vary that much with pixel size), and there is not a whole lot of room for improvement. Because of the lower gain associated with small pixels, the effect of read noise is much more prominent with small pixels. IOW, if the gain with a large pixel is 9 electrons per ADU, a noise of one electron will only cause 1/9 ADU of noise. However, if your small pixel has a gain of 1 electron per ADU, 1 electron of noise would cause a change of 1 ADU in the noise. Quantum efficiency could double or treble, but how would you increase the pixel density nine fold without significantly decreasing the gain. The capacitance of silicon is limited and well depth can increase only so much.
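
To make that arithmetic explicit (my own sketch, using "gain" in the electrons-per-ADU sense, with the 3-4 electron read noise quoted above):

read_noise_e = 4.0      # electrons RMS, assumed the same for both pixels
large_gain = 9.0        # electrons per ADU (large pixel)
small_gain = 1.0        # electrons per ADU (small pixel)

print(read_noise_e / large_gain)   # ~0.44 ADU: well below one raw level
print(read_noise_e / small_gain)   # 4 ADU: several raw levels of read noise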

Bill
Title: larger sensors
Post by: howiesmith on January 04, 2007, 07:52:54 pm
Quote
Ultimately, Howard, it's up to each of us to decide what makes sense, what's meaningful, what's useful, what's right or wrong. How we decide such matters is determined by our entire world experience, our education, our genetics, our environment and everything that has happened to us from the time, and including the time, that we were in the womb.

[a href=\"index.php?act=findpost&pid=93551\"][{POST_SNAPBACK}][/a]

There is an interesting thing about truth Ray.  It doesn't matter one iota whether you believe it or not.  What is true is true, regardless whether it makes sense to you, is meaningful to you, or useful to you.  Or anyone for that matter.  There are likely many truths today that are being overlooked because they lack meaning or use now.  At Newton's time, relativity existed and was as true as it is today.  It just wasn't useful or meaningful.  (Who really cared then that as you approach the speed of light, your mass increases?)  How you decide such matters is determined by your entire experience, education, genetics, environment and everything that has happened to you.  However, that does not alter the truth - just your perception of it.

Scientific method allows very little room for being optimistic or pessimistic.  The data must support your conclusion(s).  That is the purpose of peer review - to assure you are not too optimistic about what you are testing.  I would guess you are not a peer of any of those doing cutting edge physics.  Am I right?
Title: larger sensors
Post by: Ray on January 04, 2007, 10:08:36 pm
Quote
There is an interesting thing about truth Ray.  It doesn't matter one iota whether you believe it or not.  What is true is true, regardless whether it makes sense to you, is meaningful to you, or useful to you.  Or anyone for that matter. 

Well, that could be a very long philosophical debate, Howie. You are describing what you apparently think is a truth about truth. My understanding is, the belief that something is true can have a profound effect on any individual, whether or not it is true in reality. It's sometimes called the placebo effect. Such beliefs can be beneficial or harmful.

I don't think it is reasonable to expect people to concern themselves with matters they consider meaningless and useless to themselves and others. But, if the point you are making is that, what one person thinks is a meaningless or useless activity, another person might not, then of course I agree.

Quote
At Newton's time, relativity existed and was as true as it is today.  It just wasn't useful or meaningful.

I don't agree. Newton would have found Einstein's theories of relativity very meaningful and very useful. He just wasn't able to think of them because the groundwork in other areas of science and mathematics had not been laid.

Quote
Scientific method allows very little room for being optimistic or pessimistic.

Little room for being optimistic or pessimistic about what? You're not making much sense. If a scientist is pessimistic about the efficacy of the scientific method, then maybe he/she should be doing something else, or improve the method if he/she can.

Quote
I would guess you are not a peer of any of those doing cutting edge physics. 

That's right. Are you?
Title: larger sensors
Post by: howiesmith on January 05, 2007, 04:26:57 am
Quote
Well, that could be a very long philosophical debate, Howie. You are describing what you apparently think is a truth about truth. My understanding is, the belief that something is true can have a profound effect on any individual, whether or not it is true in reality. It's sometimes called the placebo effect. Such beliefs can be beneficial or harmful.

I don't think it is reasonable to expect people to concern themselves with matters they consider meaningless and useless to themselves and others. But, if the point you are making is that, what one person thinks is a meaningless or useless activity, another person might not, then of course I agree.

I don't agree. Newton would have found Einstein's theories of relativity very meaningful and very useful. He just wasn't able to think of them because the groundwork in other areas of science and mathematics had not been laid.

Little room for being optimistic or pessimistic about what? You're not making much sense. If a scientist is pessimistic about the efficacy of the scientific method, then maybe he/she should be doing something else, or improve the method if he/she can.

That's right. Are you?
[a href=\"index.php?act=findpost&pid=93783\"][{POST_SNAPBACK}][/a]

I have never claimed the belief that something is true could not have a profound effect on an individual, whether or not it is true in reality.  If you take a sugar pill (placebo), believing it is aspirin, and it cures your headache, that untruth has had a profound effect, enough to cure a headache.  That does not change the reality (truth) that the pill was in fact sugar and not aspirin, no matter how firmly you believed, thought or wished it were aspirin.

I guess we will just disagree about whether Newton would have found relativity very meaningful and very useful.  I think he may have found relativity interesting.  Even today, a very many people do not find relativity meaningful or useful.  But relativity applies to savages as well as physicists.  Relativity didn't spring to life from Einstein's pen.  He merely wrote down his ideas.  He did not change the reality (truth) of how the universe works, just did a better job of describing it.

I was not expressing an opinion about being optimistic or pessimistic about scientific method, merely that the scientific method does not allow a person to insert much optimism or pessimism into the process.  Just interpret the data.  A person can be only as optimistic about a result of the scientific method as the data will support.  Scientific methods might be used to support the placebo effect discussed above.  Expanding those results to conclude sugar is as effective as aspirin for curing headaches may be a bit too optimistic.

I have performed peer reviews but I am retired now, and no longer involved in that.
Title: larger sensors
Post by: John Sheehy on January 05, 2007, 08:21:46 am
Quote
For example, my simplistic way of viewing pixels (or photodiodes) as 3-dimensional buckets that hold electrons instead of water, might lead me to suppose that 4 small pixels covering the same area as 1 large pixel could not hold the same charge unless each small pixel had the same depth. If the small pixels do have the same depth as the single large pixel, is this an advantage or disadvantage? Perhaps there's an advantage in having a greater wall area in total, that's exposed to photons.
[a href=\"index.php?act=findpost&pid=93760\"][{POST_SNAPBACK}][/a]
I was under the impression that the wells are actually so thin that even the ones in 10MP P&S cameras were wider than they are high, but I don't know that for a fact.  It would be nice to have an accurate chart of various sensors, their pixel dimensions, fill factors, QEs, and electron capacities, to see what is and isn't independent in current technology.
Title: larger sensors
Post by: Ray on January 05, 2007, 08:48:33 am
Quote
I guess we will just disagree about whether Newton would have found relativity very meaningful and very useful.  I think he may have found relativity interesting.

Howie,
I think he would have found it profoundly interesting. A revelation, in fact; particularly the notion that the universe is expanding. The one great enigma in Newton's cosmology, where every body exerts a gravitational force on every other body, is that there was no satisfactory explanation as to why these celestial objects did not eventually crash in on each other. Just a small perturbation would upset the clockwork balance to cause everything to come tumbling down. But it didn't and doesn't. I imagine this would have worried Newton deeply. He was basically stumped and without an explanation.
Title: larger sensors
Post by: Ray on January 05, 2007, 09:58:03 am
Quote
I was under the impression that the wells are actually so thin that even the ones in 10MP P&S cameras were wider than they are high, but I don't know that for a fact.  It would be nice to have an accurate chart of various sensors, their pixel dimensions, fill factors, QEs, and electron capacities, to see what is and isn't independent in current technology.
[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=93847\")

John,
Having just done a search on the anatomy of a CMOS imaging device, I came across the following site which does appear to show in its diagrams that the collection area is much wider than it is deep. The article gives a fairly thorough treatment of the processes, but I notice at the foot of the article that it was last modified in July 2004.

The fact that the photodiodes (and photon collection areas) are considerably smaller than the pixel pitch is a limiting factor on quantum efficiency. I expect we could look forward to some improvement there.

[a href=\"http://micro.magnet.fsu.edu/primer/digitalimaging/cmosimagesensors.html]http://micro.magnet.fsu.edu/primer/digital...agesensors.html[/url]
Title: larger sensors
Post by: BJL on January 05, 2007, 02:12:54 pm
How about 33x26mm? At least in the "1D" family, as opposed to the higher priced larger format 1Ds family.

Those weird numbers come from what Canon has indicated is the largest sensor size that can currently be made with standard single exposure fabrication, and so at distinctly lower cost than the multiple exposures that Canon says it needs to use to make 36x24mm format. 36x36 is too long in either direction, needing at least four exposures with a 33x26 single exposure limit, though Kodak keeps making that format.

The extra 2mm of height would probably require a new mirror and viewfinder assembly (or the radical change to EVF) but the frame fits the 35mm format image circle, so current lenses should work fine (except for a possible slight crop with some super-telephotos, if any actually has a tight 36x24 rectangular anti-flare baffle.)

Or if the extra 2mm of height costs too much in redesign, why not 33x24mm, pushing the twin current limits of camera components designed for 36x24 and fab. equipment limited to 33x26?

When choosing a sensor size that imposes a crop on lenses designed for 36x24, as in the 1D models, I see no good reason to impose an additional vertical crop just for the sake of staying with 3:2 shape. If 33x26 is the current fab. limit, 3:2 shape imposes 33x22, but you can get that just as well by cropping from 33x24 or 33x26, and the latter shapes offer a larger frame when less elongated print shapes are desired, like 10x8, 11x8.5, A4, A3, A2 ... (33x26 is almost the same shape as 10x8, 33x24 is close to the ISO A paper shapes and 7x5.)
Title: larger sensors
Post by: BJL on January 05, 2007, 02:29:03 pm
Quote
the Canon 1D M2 outsells the 1DsM2 by a large margin and most users of this type of camera are not cost constrained.
[a href=\"index.php?act=findpost&pid=93089\"][{POST_SNAPBACK}][/a]
Firstly, the 1D MkII has a far higher frame rate, and this rather than high ISO performance might be the major factor why news and sports photographers prefer it. Of course, frame rate is related to pixel count due to constraints on read-out and processing speed, but these limits will likely recede with technological progress.

Also I disagree that price constraints are not a factor: even a large news organization feels the cost difference between a large collection of $3,500 bodies and a large collection of $7,000 bodies.


And I will repeat my skepticism that moderate pixel count increases in the same sensor format produces significant worsening of visible noise levels in even-handed viewing comparisons. I have not seen a demonstration that a sensor of the same size and technology with more, smaller photo-sites gives significantly worse visible noise when one uses the same ISO, same degree of enlargement, same viewing distance. Lab. measurements of per pixel noise and 100% on-screen viewing effectively compare on the basis of a higher degree of enlargement from the higher resolution sensor, and/or by cropping the image from the higher resolution sensor to the pixel count of the lower resolution sensor. You would get an increase in visible noise by using a greater degree of enlargement on the same file!
Title: larger sensors
Post by: bjanes on January 05, 2007, 02:42:45 pm
Quote
And I will repeat my skepticism that moderate pixel count increases in the same sensor format produces significant worsening of visible noise levels in even-handed viewing comparisons. I have not seen a demonstration that a sensor of the same size and technology with more, smaller photo-sites gives significantly worse visible noise when one uses the same ISO, same degree of enlargement, same viewing distance. Lab. measurements of per pixel noise and 100% on-screen viewing effectively compare on the basis of a higher degree of enlargement from the higher resolution sensor, and/or by cropping the image from the higher resolution sensor to the pixel count of the lower resolution sensor. You would get an increase in visible noise by using a greater degree of enlargement on the same file!
[a href=\"index.php?act=findpost&pid=93913\"][{POST_SNAPBACK}][/a]

You must use Canon cameras. In the Nikon line all you have to do is look at the D70 vs the D200. The D70 has lower noise and better high ISO performance. I have observed this personally and comparisons are summarized here: Clarkvision (http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary/)
Title: larger sensors
Post by: BJL on January 05, 2007, 05:25:50 pm
Quote
You must use Canon cameras. In the Nikon line all you have to do is look at the D70 vs the D200. [a href=\"index.php?act=findpost&pid=93916\"][{POST_SNAPBACK}][/a]
As far as I can tell, Clark is doing per pixel S/N ratio comparisons, which I explicitly rejected as not directly relevant to image quality in prints at equal degree of enlargement.

As an extreme example, black and white film uses "pixels" in the form of silver halide crystals with atrociously low S/N ratio and DR, as each outputs either pure black or pure white. The same with B&W prints, which are an ugly scattering of pure black dots on a pure white background if viewed microscopically, "at 100% pixels".

But printing billions of these chemical pixels at high "PPI" densities and viewing from an appropriate distance produces far better performance due to the "dithering" or "half toning" effect.


(By the way, I use two Olympus cameras, a C-2040 and an E-1, and a Canon film camera.)
Title: larger sensors
Post by: John Sheehy on January 05, 2007, 05:59:20 pm
Quote
Well, current Canon sensors have a read noise of about 3-4 electrons (and it does not vary that much with pixel size), and there is not a whole lot of room for improvement.

I think 1 electron read noise would be a great improvement.  0.1, even greater.

Someday, technology may count photon hits with a digital counter, and there won't be any read noise at all.  The sensors in current digital cameras aren't digital; they are the analog sensors of digital cameras.  Only the ADC stage, processing, and output, are digital.

Quote
Because of the lower gain associated with small pixels,

Smaller pixels have greater gain, AOTBE, for the same ISO.  If you're thinking of ADU/electrons as "gain"; that really stretches the definition of gain.  Gain, AFAIU, is a black box.  I don't know what voltages the ADC is looking for, so I don't know the actual gain.  What I do know is that for the same camera, the gain is proportional to the ISO setting, unless the camera design uses arithmetic to achieve some ISOs.

Quote
the effect of read noise is much more prominent with small pixels. IOW, if the gain with a large pixel is 1 electron per ADU, a noise of one electron will only cause 1 ADU of noise.  However, if your small pixel has a gain of 9 electrons per ADU,

That must be backward.  How could a small pixel have about 9*4095 electrons at saturation, and a large pixel have about 1*4095 electrons at saturation at the same ISO?

Quote
1 electron of noise would cause a change of 9 ADUs in the noise.

OK, I now see you had a typo in the previous sentence.

Quote
Quantum efficiency could double or treble, but how would you increase the pixel density nine fold without significantly decreasing the gain. The capacitance of silicon is limited and well depth can increase only so much.
[a href=\"index.php?act=findpost&pid=93763\"][{POST_SNAPBACK}][/a]

Read noise is the issue, and it is not proportional to absolute gain.  Read noise is *NOT* the amplification of existing noise in the sensor wells; it is noise *GENERATED* in the reading of the sensor.  That's why it is different as enumerated in electrons, at different ISOs in Roger's experiments, and in mine.

From some of the statements you have made, you seem to think that read noise is a quantum event, like the captured electrons.  When someone says"the read noise at ISO 100 is 30.1 electrons", this doesn't mean each pixel is off by some integer number of electrons, the standard deviation of which is 30.1 electrons for the entire image.  It means that the read noise was measured in ADUs, and then on assumed information about the relationship between ADUs and electrons for that ISO, the ADUs are translated into units of "electrons".  This figure has nothing at all to really do with sensor electrons.

P&S cameras already have nine-fold the pixel density, compared to DSLRs, and they handle this small pixel thing very well, and would probably be even better with more expensive readout circuitry for a super-MP DSLR.  Look at how a Canon 10D and a Sony F707 compare with the same focal length lens (45mm), from the same distance:

(http://www.pbase.com/jps_photo/image/69924817/original.jpg)


A 60MP DSLR could have better microlenses and readout circuitry than a P&S, I would imagine.
Title: larger sensors
Post by: bjanes on January 06, 2007, 01:44:52 pm
Quote
Smaller pixels have greater gain, AOTBE, for the same ISO. 
[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=93965\")

No, John, you have it backwards. For CCD and CMOS imaging chips, gain (http://www.photomet.com/library_enc_gain.shtml) is reported in terms of electrons/ADU.  A gain of 8 means that the camera system digitizes the CCD signal so that each ADU corresponds to 8 photoelectrons. This is the inverse of amplifier gain, and can be confusing.

A pixel with half the linear size of the above example might have a gain of 2 electrons/ADU. If both chips had a read noise of 4 electrons, the noise would be 0.5 ADU with the larger chip and 2 ADU for the smaller pixel.

I did have some typos in my previous message, which I have corrected, but the thrust of my assertion that a given read noise in electrons has a greater effect with a small pixel is still true.

Bill
Title: larger sensors
Post by: bjanes on January 06, 2007, 06:29:45 pm
Quote
I think 1 electron read noise would be a great improvement.  0.1, even greater.

OK, I now see you had a typo in the previous sentence.
Read noise is the issue, and it is not proportional to absolute gain.  Read noise is *NOT* the amplification of existing noise in the sensor wells; it is noise *GENERATED* in the reading of the sensor.  That's why it is different as enumerated in electrons, at different ISOs in Roger's experiments, and in mine.

From some of the statements you have made, you seem to think that read noise is a quantum event, like the captured electrons.  When someone says"the read noise at ISO 100 is 30.1 electrons", this doesn't mean each pixel is off by some integer number of electrons, the standard deviation of which is 30.1 electrons for the entire image.  It means that the read noise was measured in ADUs, and then on assumed information about the relationship between ADUs and electrons for that ISO, the ADUs are translated into units of "electrons".  This figure has nothing at all to really do with sensor electrons and one can calculate the gain in electrons per ADU.

[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=93965\")

Read noise, like other forms of noise, can be expressed in terms of ADUs or electrons and it is expressed per pixel, not for the entire picture as you seem to imply. Interested readers can refer to this reference (http://www.qsimaging.com/ccd_noise.html), a portion of which is quoted below:

"CCD manufacturers measure and report CCD noise as a number of electrons RMS (Root Mean Square).  You’ll typically see it presented like this, 15eˉ RMS, meaning that with this CCD, you should expect to see about 15 electrons of noise per pixel.  More precisely, 15eˉ RMS is the standard deviation around the mean pixel value."

Nowhere did I state that read noise was some type of constant or offset and not related to ISO. The 3-4 electron value to which I referred is for the ISO at unity gain. At base ISO under normal photographic conditions, noise is shot limited, not read limited. The read noise of 30 electrons you quoted at ISO 100 might be related to a full well value of 80,000 electrons, and is not significant except in the deepest shadows. With a full well of 80,000 electrons, the shadows 10 f/stops below clipping would still have 78 electrons.

Read noise is normally determined by subtracting two bias frames as described in the above reference or in Roger's essay. The bias frames could include the whole picture, but one normally measures a representative cropped area from the center, perhaps an area of 256 by 256 pixels. The results can be shown as a histogram or expressed as a SD. If the sensor is completely uniform, one could measure a single pixel 65,536 times. One can easily convert from ADUs to electrons, since the number of electrons is the square of the signal to noise ratio as Roger explains.
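
For the curious, the recipe being described (two bias frames plus two flat frames, the usual photon-transfer method) can be sketched roughly as follows; this is my own outline, not Roger's code, and it expects flattened lists of raw values from a uniform patch:

def mean(xs): return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def gain_and_read_noise(flat1, flat2, bias1, bias2):
    # Differencing two frames cancels fixed pattern noise; the variance of
    # the difference is twice the per-frame temporal variance.
    flat_diff = [a - b for a, b in zip(flat1, flat2)]
    bias_diff = [a - b for a, b in zip(bias1, bias2)]
    shot_var_adu = (var(flat_diff) - var(bias_diff)) / 2.0
    signal_adu = mean(flat1 + flat2) - mean(bias1 + bias2)
    gain = signal_adu / shot_var_adu             # electrons per ADU (Poisson statistics)
    read_noise_adu = (var(bias_diff) / 2.0) ** 0.5
    return gain, gain * read_noise_adu           # read noise converted to electrons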

The main source of Read Noise (http://www.photomet.com/library_enc_signal.shtml) is from the on chip pre-amplifier.

Contrary to what you imply, the number of electrons collected is not constant for a given luminance, but follows a Poisson distribution.

Bill
Title: larger sensors
Post by: Ray on January 06, 2007, 08:19:45 pm
Bill,
A lot of readers are probably not going to understand the significance of these differences of opinion between you and John, but probably do appreciate the fact that read noise, and noise in general, as a proportion of the total signal, will be greater for the smaller pixel, all else being equal.

However, all else is rarely equal. Developments in one area can compensate for deficiencies in another area. I see no reason to suppose there is no further scope for improvement with regard to (1) actual pixel size, as opposed to pixel pitch, (2) read noise as a proportion of the signal.

For example, is there any ultimate technological impossibility of having all the photon-collecting receptors on one side of the sensor, and all the signal processors for each photodiode on the other side of the sensor, directly behind, just a few microns away? Maybe such a chip would be too expensive to manufacture. There might be lots of reasons why such a solution would not be practical. I wouldn't know.

It would be interesting to see a direct comparison between a 1Ds and a 400D, cropping the 1Ds image to 10mp and taking both shots from an appropriately different distance to keep the FoV identical and/or using lenses with equal MTF50 responses at resolutions corresponding to the resolution limits of the 2 sensors. (That is, the lens used with the 1Ds should have a MTF50 response at, say 50 lp/mm and the lens used with the 400D should have 50%MTF at around 70 lp/mm .)

We could then get a clearer idea as to how a modern 'small' pixel compares with a slightly older 'bigger' pixel.  
Title: larger sensors
Post by: John Sheehy on January 07, 2007, 08:31:26 am
Quote
No, John, you have it backwards. For CCD and CMOS imaging chips, gain (http://www.photomet.com/library_enc_gain.shtml) is reported in terms of electrons/ADU.  A gain of 8 means that the camera system digitizes the CCD signal so that each ADU corresponds to 8 photoelectrons. This is the inverse of amplifier gain, and can be confusing.
If that terminology is used like that somewhere, then it *is* confusing, and a person with an interest in meaningful terminology should take no part in propagating such terminology.  Gain at a loss and with no "gain" - what a concept.

Quote
A pixel with half the linear size of the above example might have a gain of 2 electrons/ADU. If both chips had a read noise of 4 electrons, the noise would be 0.5 ADU with the larger chip and 2 ADU for the smaller pixel.
What are the actual results? That's what matters.  Does squeezing 9x as many pixels into the same area more than triple read noise levels?  Squeezing 4x more than doubles it?  Your examples are all hypothetical, and don't answer the question.

Quote
I did have some typos in my previous message, which I have corrected, but the thrust of my assertion that a given read noise in electrons has a greater effect with a small pixel is still true.[a href=\"index.php?act=findpost&pid=94163\"][{POST_SNAPBACK}][/a]
The read noise *IS* the effect.  It exists relative to absolute signal, and it exists relative to the DR of the digitization.  What are its measurements?
Title: larger sensors
Post by: bjanes on January 07, 2007, 09:29:01 am
Quote
If that terminology is used like that somewhere, then it *is* confusing, and a person with an interest in meaningful terminology should take no part in propagating such terminology.  Gain at a loss and with no "gain" - what a concept.
[{POST_SNAPBACK}][/a] (http://index.php?act=findpost&pid=94292\")
All I can say here is that I did not originate the terminology. However, communication is enhanced when you use accepted terminology rather than making up your own definitions. When in Rome, do as the Romans do.
Quote
What are the actual results? That's what matters.  Does squeezing 9x as many pixels into the same area more than triple read noise levels?  Squeezing 4x more than doubles it?  Your examples are all hypothetical, and don't answer the question.
[a href=\"index.php?act=findpost&pid=94292\"][{POST_SNAPBACK}][/a]
Look at Roger's Figure 3 (http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary/). You will see that read noise is not correlated well with pixel size and in current Canon cameras is about 3-4 electrons with small and large pixels. If you squeeze more pixels into a given area, read noise remains more or less constant per pixel, but the effect of this noise is worse with the resultant small pixels because of the effect of gain when converting from electrons to ADU. One way to circumvent this problem is via binning where a group of pixels is read out as one superpixel, and where the read noise for the superpixel is the same as for the individual pixel. This ability is usually present only on scientific systems. For interested readers, more information is given here. (http://www.microscopyu.com/tutorials/java/digitalimaging/signaltonoise/index.html)
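
A toy comparison of the two routes (my own sketch with assumed numbers): on-chip binning pays the read noise once per superpixel, whereas summing already-read pixels in software pays it once per contributing pixel.

from math import sqrt

read_noise_e = 4.0     # electrons RMS per read (assumed)
n = 9                  # pixels combined into one 3x3 superpixel

print(read_noise_e)             # 4.0e: hardware binning, charge summed then read once
print(read_noise_e * sqrt(n))   # 12.0e: nine independent reads add in quadrature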
 
Some readers may find the signal/noise calculator on the Nikon site to be helpful. Integration time is exposure and one can adapt the concept to demonstrate statistics for the zones in the image. The question is answered to my satisfaction by the above references.

Bill
Title: larger sensors
Post by: John Sheehy on January 08, 2007, 01:21:14 am
Quote
All I can say here is that I did not originate the terminology. However, communication is enhanced when you use accepted terminology rather than making up your own definitions. When in Rome, do as the Romans do.

Burn it?

I have never heard anyone but you use terminology like that.  YOU are the propagator, AFAIC.

Quote
Look at Roger's Figure 3 (http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary/). You will see that read noise is not correlated well with pixel size and in current Canon cameras is about 3-4 electrons with small and large pixels.

I don't see any charts or ways to derive this information on that page.  The real information of value, for my interests, is not given.

That chart doesn't tell what ISO this is determined for.  I'd be more interested in ADUs than electrons, anyway, but whatever the unit, give the range.  Blackframe read noise at every ISO, at the very least, for all cameras concerned, and how many effective RAW levels from black to saturation, how many electrons for the same, etc, etc.  I can't work with Roger's data.  From what I see, he ignores a lot of important things.  He seems to believe (and bases calculations on  the assumption) that all cameras use 4096 RAW levels, for instance, which is not true, and whitepoint is different, too.

Quote
If you squeeze more pixels into a given area, read noise remains more or less constant per pixel, but the effect of this noise is worse with the resultant small pixels because of the effect of gain when converting from electrons to ADU.

Get me the read noise in a useable form.  Roger's "electrons" are without a useful context, IMO.

Quote
One way to circumvent this problem is via binning where a group of pixels is readout as one superpixel, and where the read noise for the superpixel is the same as for the individual pixel. This ability is usually present only on scientific systems.[a href=\"index.php?act=findpost&pid=94302\"][{POST_SNAPBACK}][/a]

Your eyes do something similar, automatically, when you view an ultra-hires image.  There is no need to bin.  You only see increased noise from smaller pixels when you zoom in to the same *PIXEL* resolution.  When you let your eyes do it, you keep the detail.
Title: larger sensors
Post by: eronald on January 08, 2007, 07:02:38 am
I don't know why, but Roger's article looks a bit out of date. Cherry pick the terminology, ignore the content ...

Edmund
Title: larger sensors
Post by: bjanes on January 08, 2007, 07:55:15 am
Quote
I have never heard anyone but you use terminology like that.  YOU are the propagator, AFAIC.
[a href=\"index.php?act=findpost&pid=94463\"][{POST_SNAPBACK}][/a]

Apparently you don't read the references that are given to you.

[a href=\"http://www.clarkvision.com/imagedetail/evaluation-1d2/index.html]Clark[/url]. Refer to Table 1a and the accompanying text. (gain is in electrons/ADU)

University Paper (http://spiff.rit.edu/classes/phys559/lectures/gain/gain.html) gain = #electrons per pixel/ # counts per pixel (ADU)

Roper Scientific paper (http://www.photomet.com/library_enc_gain.shtml) gain = electrons / ADU

Quote
That chart doesn't tell what ISO this is determined for.  I'd be more interested in ADUs than electrons, anyway, but whatever the unit, give the range.  Blackframe read noise at every ISO, at the very least, for all cameras concerned, and how many effective RAW levels from black to saturation, how many electrons for the same, etc, etc.  I can't work with Roger's data.  From what I see, he ignores a lot of important things.  He seems to believe (and bases calculations on  the assumption) that all cameras use 4096 RAW levels, for instance, which is not true, and whitepoint is different, too.
[a href=\"index.php?act=findpost&pid=94463\"][{POST_SNAPBACK}][/a]

If you look at any spec sheet for a sensor you will see a single read noise in electrons RMS. Since read noise is dependent on readout rate, that value is also given, but for most digital cameras used in routine photography the rate is fixed. The read noise is for unity gain.

[a href=\"http://www.kodak.com/ezpres/business/ccd/global/plugins/acrobat/en/productsummary/FullFrame/KAF-39000ProductSummaryRev.2.0.pdf]Kodak KAF 3900[/url]
Quote
Your eyes do similar, automatically, when you view an ultra-hires image.  There is no need to bin.  You only see increased noise from smaller pixels when you zoom in to the same *PIXEL* resolution.  When you let your eyes do it, you keep the detail.
[a href=\"index.php?act=findpost&pid=94463\"][{POST_SNAPBACK}][/a]

You have completely ignored the principle behind binning. The pixels have to be binned into one super pixel prior to readout. In that case, the read noise for the super pixel is the same as the individual pixel. Your eyes do not do this automatically.

[a href=\"http://www.photomet.com/library_enc_binning.shtml]Roper Scientific[/url] explains these principles.

I formerly thought that you were quite knowledgeable, but you are beginning to lose credibility with me due to your outrageous statements not backed up by any references or data.
Title: larger sensors
Post by: bjanes on January 08, 2007, 08:07:20 am
Quote
I don't know why, but Roger's article looks a bit out of date. Cherry pick the terminology, ignore the content ...

Edmund
[a href=\"index.php?act=findpost&pid=94488\"][{POST_SNAPBACK}][/a]

Edmund,

Who is doing the cherry picking and ignoring content here? Your post is not clear. BTW,
Roger has just updated his sensor analysis as of December 2006, and like an academic, he supplies references.

Bill
Title: larger sensors
Post by: John Sheehy on January 08, 2007, 09:35:56 am
Quote
Apparently you don't read the references that are given to you.
Sometimes I do and sometimes I don't, but my experience has been that most of your references do not address what I thought we were talking about, and if you think they do, then I don't think you understand them or what we were talking about (and on top of that, they may just be wrong).  A URL only shows what someone else thinks.

What we were talking about here is the fact that statistical noise is not *VISIBLE* noise.  I, and others, said that more and smaller pixels in the same size sensor do not necessarily result in a noisier image.  You seemed to disagree with it, but never did anything relevant to discredit the idea.

Quote
You have completely ignored the principle behind binning.  The pixels have to be binned into one super pixel prior to readout. In that case, the read noise for the super pixel is the same as the individual pixel. Your eyes do not do this automatically.
Not exactly, but I purposely used the word "similar", in reference to the reduction of visible noise.

Frankly,  I would think that any such system will work better in theory than practice.  Do you have an example of a sensor that does this and has truly lower read noise relative to signal than would be had with software binning?  Downsampling and software binning reduce read noise quite a bit, too.  I think it is generally better to leave the image in its higher-res state.  Besides having more visible detail, this allows future overlapped binning at the original resolution minus n-1 pixels in each dimension.  The binnable neighbors for a CFA camera will be very far from each other, as well.

Quote
I formerly thought that you were quite knowledgeable, but you are beginning to lose credibility with my due to your outrageous statements not backed up by any references or data.
[a href=\"index.php?act=findpost&pid=94492\"][{POST_SNAPBACK}][/a]
What outrageous statements?  I remember saying that noise is relevant to image viewing not just in statistical intensity, but in frequency content relative to the image.  Then, you start throwing URLs at me that completely ignore the spatial aspects of noise, like Bugs Bunny throwing banana peels behind him while being pursued, or someone tossing smokebombs.  Now, you've gone on tangents like some poorly-thought-out use of the word "gain", as if that had any bearing on anything being discussed.  Now, hardware binning.  Let's get back to what we were talking about; the idea that microscopic noise is virtually lower-noise.  Can you find some way to disprove that?
Title: larger sensors
Post by: Ray on January 08, 2007, 09:55:37 am
Quote
The pixels have to be binned into one super pixel prior to readout. In that case, the read noise for the super pixel is the same as the individual pixel. Your eyes do not do this automatically.

Roper Scientific (http://www.photomet.com/library_enc_binning.shtml) explains these principles.
[a href=\"index.php?act=findpost&pid=94492\"][{POST_SNAPBACK}][/a]

Bill,
I read Roper Scientific's articles years ago when they were one of the few sources on the net explaining the basic principles of imaging devices. I think you have to read between the lines sometimes. This is what they actually wrote on the issue of binning.

Quote
However, in binning mode, read noise is added to each superpixel, which has the combined signal from multiple pixels. In the ideal case, this produces SNR improvement equal to the binning factors (4x in the above example).

I recall BJL commented on this a while back. In practice, the read-noise of the superpixel is somewhat greater than the read-noise of a single small pixel, but of course not as great as the sum of the read-noise of all the individual pixels before they were binned.
Title: larger sensors
Post by: John Sheehy on January 08, 2007, 10:25:43 am
Quote
I recall BJL commented on this a while back. In practice, the read-noise of the superpixel is somewhat greater than the read-noise of a single small pixel, but of course not as great as the sum of the read-noise of all the individual pixels before they were binned.
[a href=\"index.php?act=findpost&pid=94511\"][{POST_SNAPBACK}][/a]
...and you don't even get the full sum in a software binning (or even a downsample).  For a software binning, noise is reduced to 1/n, where n is the linear binning factor (2, where a 2x2 tile is binned into one).
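A quick simulation of that 1/n claim, using a purely synthetic black frame (the 4 ADU read noise is an assumed figure, not measured from any camera):

import numpy as np

rng = np.random.default_rng(0)
sigma = 4.0                                   # assumed per-pixel read noise, in ADU
frame = rng.normal(0.0, sigma, (1000, 1000))  # synthetic black frame, read noise only

# software 2x2 binning: average each 2x2 tile into one output pixel
binned = frame.reshape(500, 2, 500, 2).mean(axis=(1, 3))

print(frame.std())    # about 4.0 ADU
print(binned.std())   # about 2.0 ADU, i.e. 1/n with n = 2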
Title: larger sensors
Post by: jwoolf on January 08, 2007, 11:02:57 am
Rainer,

There is a new, exciting technology from Seitz and Dalsa which is just coming to market.  The D3 scanning back uses a 60mm linear array that is 7,500 pixels high.  It is 100 times faster and more sensitive than standard scan back technology.  It is full frame medium format, approximately 6cm x 7cm.  About 70 megapixels!!!!

Here is a link to the info:

http://www.roundshot.ch/xml_1/internet/de/...8/d925/f931.cfm (http://www.roundshot.ch/xml_1/internet/de/application/d438/d925/f931.cfm)

John Woolf
Digital Systems Manager
Museum of Fine Arts
Boston/USA
Title: larger sensors
Post by: bjanes on January 08, 2007, 11:10:11 am
Quote
Bill,
I read Roper Scientific's articles years ago when they were one of the few sources on the net explaining the basic principles of imaging devices. I think you have to read between the lines sometimes. This is what they actually wrote on the issue of binning.
I recall BJL commented on this a while back. In practice, the read-noise of the superpixel is somewhat greater than the read-noise of a single small pixel, but of course not as great as the sum of the read-noise of all the individual pixels before they were binned.
[a href=\"index.php?act=findpost&pid=94511\"][{POST_SNAPBACK}][/a]

Ray,

If BJL has some additional information then I would like to see it and his references. As far as the Roper Scientific article goes, I don't think that the physics of CCDs has changed since then, and as you frequently point out, technology is improving.  At any rate, binning cannot be done with full advantage with respect to read noise after the fact, so far as I know, since the read has already occurred. Of course, the full well is effectively increased.

[a href=\"http://www.microscopyu.com/tutorials/java/digitalimaging/signaltonoise/index.html]Nikon Java Calculator[/url] also gives an interactive calculator where the parameters under discussion can be shown in real time.

Bill
Title: larger sensors
Post by: bjanes on January 08, 2007, 11:15:27 am
Quote
Sometimes I do and sometimes I don't, but my experience has been that most of your references do not address what I thought we were talking about, and if you think they do, then I don't think you understand them or what we were talking about (and on top of that, they may just be wrong).  A URL only shows what someone else thinks.

What we were talking about here is the fact that statistical noise is not *VISIBLE* noise.  I, and others, said that more and smaller pixels in the same size sensor do not necessarily result in a noisier image.  You seemed to disagree with it, but never did anything relevant to discredit the idea.

Not exactly, but I purposely used the word "similar", in reference to the reduction of visible noise.

Frankly,  I would think that any such system will work better in theory than practice.  Do you have an example of a sensor that does this and has truly lower read noise relative to signal than would be had with software binning?  Downsampling and software binning reduce read noise quite a bit, too.  I think it is generally better to leave the image in its higher-res state.  Besides having more visible detail, this allows future overlapped binning at the original resolution minus n-1 pixels in each dimension.  The binnable neighbors for a CFA camera will be very far from each other, as well.

What outrageous statements?  I remember saying that noise is relevant to image viewing not just in statistical intensity, but in frequency content relative to the image.  Then, you start throwing URLs at me that completely ignore the spatial aspects of noise, like Bugs Bunny throwing banana peels behind him while being pursued, or someone tossing smokebombs.  Now, you've gone on tangents like some poorly-thought-out use of the word "gain", as if that had any bearing on anything being discussed.  Now, hardware binning.  Let's get back to what we were talking about; the idea that microscopic noise is virtually lower-noise.  Can you find some way to disprove that?
[a href=\"index.php?act=findpost&pid=94506\"][{POST_SNAPBACK}][/a]
 
As to what we were talking about, only you seem to know.

So far, you are all bluster but no facts, only quotations from yourself. I don't find this discussion useful at this point.

Bill
Title: larger sensors
Post by: eronald on January 08, 2007, 11:43:20 am
Quote
Edmund,

Who is doing the cherry picking and ignoring content here. Your post is not clear. BTW,
Roger has just updated his sensor analysis as of December 2006, and like an academic, he supplies references.

Bill
[a href=\"index.php?act=findpost&pid=94493\"][{POST_SNAPBACK}][/a]

I meant one should use Roger's paper to set the terminology, but that I am not quite so sanguine about the usefulness of the content to the present audience, vis-a-vis its applicability to CMOS sensors such as those used by Canon and possibly soon to be used by Dalsa and Kodak.

Edmund
Title: larger sensors
Post by: Ray on January 08, 2007, 06:04:26 pm
Quote
I meant one should use Roger's paper to set the terminology, but that I am not quite so sanguine about the usefulness of the content to the present audience, vis-a-vis its applicability to CMOS sensors such as those used by Canon and possibly soon to be used by Dalsa and Kodak.

Edmund
[a href=\"index.php?act=findpost&pid=94536\"][{POST_SNAPBACK}][/a]

That's a good point. So many explanations on the net, on these matters, refer to CCD sensors. We make an erroneous assumption if we think everything applies equally to CMOS sensors. Just what principles are common to both types of sensors is not clear to me, but I wouldn't be surprised if the advantages of binning are greater with a CCD design than a CMOS design where each photodiode has its own, personal, analog preamplifier.
Title: larger sensors
Post by: John Sheehy on January 08, 2007, 06:11:07 pm
Quote

As to what we were talking about, only you seem to know.

I said quite clearly what the issue was - that smaller pixels, with higher statistical noise, do not necessarily mean that the image itself is noisier, if there are more of them (they fill the same sensor space).  To this you objected, with links to Roger's page, where he fails to discuss the significance of a pixel in an entire image.  I am very interested in pixel statistics, but I also realize that the quality of pixels does not necessarily affect the image the same way.  Your link to Roger's page had nothing of value to contribute to the argument; *EVERYONE* here involved in this discussion is aware that smaller pixels have more shot noise, and usually more read noise, relative to signal, in current cameras.  That is not the issue.  The issue is what implications the greater per-pixel noise has for the entire image, which has smaller, but more numerous pixels.
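Here is a small sketch of that point, shot noise only (read noise is left out, and the photon counts are assumed values): a uniform patch captured with big pixels, versus the same patch captured with four times as many smaller pixels and then viewed at the big-pixel scale.

import numpy as np

rng = np.random.default_rng(1)
photons_big = 10000                                # assumed photons per big pixel over a uniform patch

big = rng.poisson(photons_big, (200, 200))         # big-pixel capture
small = rng.poisson(photons_big / 4, (400, 400))   # 2x2 smaller pixels, 1/4 the photons each
small_at_big_scale = small.reshape(200, 2, 200, 2).sum(axis=(1, 3))

print(big.std() / big.mean())                      # relative shot noise, big pixels
print(small.std() / small.mean())                  # about 2x worse per pixel
print(small_at_big_scale.std() / small_at_big_scale.mean())   # about the same as the big pixels

Per pixel the small pixels are noisier; at equal output scale the two captures are essentially the same, which is the image-level point being argued here.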

Quote
So far, you are all bluster but no facts, only quotations from yourself. I don't find this discussion useful at this point.
[a href=\"index.php?act=findpost&pid=94530\"][{POST_SNAPBACK}][/a]

Be specific.  What did I say, and why do you find it unlikely?  IN YOUR OWN WORDS, not in Roger's.

To this point, you have still totally evaded discussion of the pixel's role in the image as a whole, concerning noise.
Title: larger sensors
Post by: John Sheehy on January 08, 2007, 06:14:56 pm
Quote
That's a good point. So many explanations on the net, on these matters, refer to CCD sensors. We make an erroneous assumption if we think everything applies equally to CMOS sensors. Just what principles are common to both types of sensors is not clear to me, but I wouldn't be surprised if the advantages of binning are greater with a CCD design than a CMOS design where each photodiode has its own, personal, analog preamplifier.
[a href=\"index.php?act=findpost&pid=94638\"][{POST_SNAPBACK}][/a]

Not only that, but the camera adds more complications to the sensor.  We don't use sensors; we use cameras, which use sensors.  Any camera will have more read noise than occurs on-chip, unless the image is digitized right on the sensor chip; then total read noise could be part of the sensor spec.
Title: larger sensors
Post by: bjanes on January 08, 2007, 06:39:32 pm
Quote
I meant one should use Roger's paper to set the terminology, but that I am not quite so sanguine about the usefulness of the content to the present audience viz. its applicability to CMOS sensors such as those used by Canon and possibly soon to be used by Dalsa and Kodak.

Edmund
[a href=\"index.php?act=findpost&pid=94536\"][{POST_SNAPBACK}][/a]

Kodak and Dalsa have been making CMOS sensors for quite some time. In fact, DALSA founder and CEO Dr. Savvas Chamberlain was a pioneer in developing both technologies. The Dalsa web site has a good comparison of the two technologies and a long list of references.

[a href=\"http://vfm.dalsa.com/products/CCD_vs_CMOS.asp]CCD vs CMOS[/url]

You are correct that CMOS and CCD are not interchangeable. Roger uses Canon and most of his tests are on Canon CMOS sensors, so his references are applicable there. Roger also participates in NASA imaging projects and probably knows more about the subject than anyone participating in this thread, even though John criticizes his work and has apparently not published his own research. Roger makes no mention of binning, which is usually used with high-end scientific CCD sensors, and I do not even know if binning is available with CMOS.

I am admittedly an amateur enthusiast and would appreciate any authoritative information on this subject that anyone can contribute, but I will listen most attentively to recognized authorities in the area.

Bill
Title: larger sensors
Post by: Ray on January 08, 2007, 07:41:46 pm
Just for fun, I sometimes like to speculate on what might be possible as processing chips become faster and more powerful and buffer sizes grow.  

The point has often been made, by Michael as well as by BJL a few pages ago in this thread, that the behaviour of silver halide particles in B&W film photography is pretty close to the concept of a true, all-digital sensor.

These noise issues, which are getting really contentious in this thread   , are largely due to the fact that sensors are fundamentally analog devices with a lot of digital processing attached.

Would it ever be possible, I wonder, to build a truly digital sensor? In such a sensor, we'd only be concerned with whether or not a photon collector had received sufficient light to be 'turned on'. For color photography, such a sensor would probably be a Foveon type and the resulting image would consist of 'real' pixels, each consisting of a red, green or blue element that was either switched 'on' or 'off'.

In such a design, it might even be possible to deal with photonic shot noise. For example, if a stray 'red' pixel element, way smaller than the resolution limits of the lens, is not switched 'on' in a cluster of red pixels that are switched on, and that cluster is within the resolution limits of the lens, some analyzing algorithm could work out that such a pixel did not receive enough photons to be switched on (due to photonic shot noise), and switch that pixel on, thus reducing the effects of photonic shot noise. The reverse would also take place, ie. a cluster of photodiodes all switched off, bar one or two 'red' pixel elements that received a slightly greater number of photons than their neighbours, would be switched off.

Needless to say, the numbers involved in such a design would be astronomical and the processing power required would be enormous. Initially, we might have to return to the tethered system.

By my calculations, a full frame 6cm x 4.5cm sensor (which even BJL thinks might become a reality   ) would hold around 2 gigapixels at a 2 micron pixel pitch (that is, using the sloppy definition of the term where a 3.3mp Foveon sensor is often described as having 10mp).
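(A back-of-envelope check of that figure, assuming the full nominal 60 x 45 mm frame and counting the three Foveon layers separately:)

pitch_mm = 0.002                            # 2 micron pixel pitch
sites = (60 / pitch_mm) * (45 / pitch_mm)   # about 675 million photosites
print(sites, sites * 3)                     # about 2.0 billion "pixels" by the sloppy 3x count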
Title: larger sensors
Post by: Ray on January 08, 2007, 08:40:00 pm
This might well be a completely 'screwy' idea. It's partly tongue-in-cheek. Considering that one analog pixel can have 16.7 million meaningful (?) values, 2 billion different values for the entire image seems woefully inadequate.

However, I vaguely recall reading research that had analyzed a number of real-world images and found that the actual number of different pixel values, even in a hi-res image, is nowhere near the 16.7m mark. We're talking about numbers in the thousands rather than millions.
Title: larger sensors
Post by: Ray on January 10, 2007, 07:46:25 pm
Well, I didn't intend to kill off the entire thread. I'm surprised that none of you 'techies' have attempted to shoot down the idea in flames.

Whilst taking my evening exercise yesterday, to stave off the boredom and keep my mind active, I tried to work out how many combinations of on/off states there are in a pixel of 3 primary colors. I realised with some dismay that my maths is so poor, I had difficulty in working this out, whilst slowly jogging along the road. Is it 6 or possibly 9?

When I returned from my exercise, I got out pen and paper and arrived at a figure of 8. (Is this correct?)

Of course, our full frame 6x4.5cm sensor with 2 gigapixel elements (667m real pixels) is way beyond the MTF50 resolution limit of MF lenses. The processing, in-camera or out-camera, would group such pixels into a cluster of say 9, which would give us around 74 megapixels of 6 micron pixel pitch, each one being pixel sharp.

According to my maths, the number of possible values of a cluster of 9 'real' Foveon type pixels, with each individual pixel having a possible number of 8 different values, is given by 8 to the power of 9, ie 134 million, somewhat better than the 16.7 million we currently use when printing.

Apart from the number crunching difficulty, I can't see any major flaws in such a design. We've already got 2 micron pixels in P&S cameras. To make the processing task more realistic, we could consider the number of 2 micron detectors that would fit on a full frame Foveon type 35mm sensor. That's around 630m, which equates to 210m real pixels with red, green and blue elements. Group 9 of those into a cluster and we get a 23mp FF 35mm sensor, which is close to the next generation of 35mm DSLRs.
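(Checking the arithmetic above, using the nominal 36 x 24 mm frame; the 630m/210m figures presumably assume a slightly smaller active area:)

states_per_site = 2 ** 3                   # on/off for R, G and B gives 8 states per site
print(states_per_site ** 9)                # 9 sites per cluster: 134,217,728 combinations

sites_35mm = (36 / 0.002) * (24 / 0.002)   # about 216 million 2-micron Foveon-type sites
print(sites_35mm * 3)                      # about 648 million individual detectors
print(sites_35mm / 9 / 1e6)                # about 24 MP after grouping 3 x 3 sites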

However, such a truly digital sensor, from the ground up, would be far better than a 23mp analog Foveon sensor, which struggles with noise, dynamic range, cross-talk and aliasing etc. In such an all-digital design, all that's required for total pixel sharpness is that the light falling on any individual detector should be greater than the noise. Noise 49%, signal 51% results in a perfect, noise-free rendition.

Should I be making my way to the patents office?  
Title: larger sensors
Post by: John Sheehy on January 11, 2007, 09:04:36 am
Quote
In such a design, it might even be possible to deal with photonic shot noise.

Capturing more photons is the only way to "deal with" shot noise.  Read noise is what needs to be dealt with.

Quote
For example, if a stray "red' pixel element, way smaller than the resolution limits of the lens, is not switched 'on' in a cluster of red pixels that are switched on and that cluster is within the resolution limits of the lens, some analyzing algorithm could work out that such a pixel did not receive enough photons to be swithed on (due to photonic shot noise), and switch that pixel on, thus reducing the effects of photonic shot noise. The reverse would also take place, ie. a cluster of photodiodes all switched off, bar one or 2 'red' pixel elements that received a slightly greater number of photons than their neighnours, would be switch off.[a href=\"index.php?act=findpost&pid=94661\"][{POST_SNAPBACK}][/a]


The result would be a deterioration of the original capture.  There's nothing wrong with the original photonic capture that needs to be fixed.  Any isolated photon, or lack thereof, is statistically more likely to be accurate than a "fixed" pixel is.  Also, if you think about it, any regular pattern that is slightly broken would ask for fixing, too.  Your scenario only occurs at the clipping point, and black, in fact.  You need a mixture of photons and holes in a great variety to have any tonality.  Shot noise is not a noise in the way noise is normally thought of; it is not some unwanted outside invasion; it is a symptom of the limited nature of the signal itself.
Title: larger sensors
Post by: Ray on January 11, 2007, 09:36:13 am
Quote
Your scenario only occurs at the clipping point, and black, in fact.  You need a mixture of photons and holes in a great variety to have any tonality.  Shot noise is not a noise in the way noise is normally thought of; it is not some unwanted outside invasion; it is a symptom of the limited nature of the signal itself.
[a href=\"index.php?act=findpost&pid=95083\"][{POST_SNAPBACK}][/a]

But that's 'analog think', John   . In my all-digital system, there will be lots of 'cliff edges'; instances where noise is equal to the signal. There are no half measures. A pixel element is either switched on for perfect, noise-free sharpness, or it's black. You don't get a microscopic black speck on a red flower petal that an ordinary camera lens can pick up. An algorithm should be able to work out, 'Hey, that black speck shouldn't be there', and turn the pixel on.

On the other hand, if there was a cluster of black specks, the algorithm would let them be.
Title: larger sensors
Post by: Ray on January 11, 2007, 10:17:16 am
Quote
Any isolated photon or lack thereof is statistically more likely to be accurate than a "fixed" pixel is.  Also, if think about it, any regular pattern that is slightly broken would ask for fixing, too.  Your scenario only occurs at the clipping point, and black, in fact.  You need a mixture of photons and holes in a great variety to have any tonality.  Shot noise is not a noise in the way noise is normally thought of; it is not some unwanted outside invasion; it is a symptom of the limited nature of the signal itself.
[a href=\"index.php?act=findpost&pid=95083\"][{POST_SNAPBACK}][/a]

I'm not referring to the finished, processed pixel that appears on the monitor after downloading the RAW image, but to the 27 (or so) pixel elements (each 2 microns) that comprise the one 6 micron (or so) pixel. The tonality of each processed (finished) pixel is achieved through the combination of those 27 on/off values. Any isolated, single, pixel element that's switched off in a group of 'ons' is clearly due to noise and could be fixed.

Patterns would be treated similarly if they consisted of single pixel elements.

I'm sure there's a huge flaw in my reasoning but I just can't see it yet   .
Title: larger sensors
Post by: BJL on January 11, 2007, 10:27:20 am
Quote
If BJL has some additional information then I would like to see it and his references.
[a href=\"index.php?act=findpost&pid=94529\"][{POST_SNAPBACK}][/a]
Here is one fairly recent reference, from December 2005, on a Dalsa 28MP color sensor with binning (http://www.dalsa.com/pi/documents/2005_DALSA_IEDM_Presentation.pdf).
Dalsa, and I, are referring to real binning on the sensor, not "software binning", which sounds like another name for downsampling. The electrons from groups of nearby photosites are merged as soon as they get to the edge of the sensor, before the fast read-out of a line along the edge of the sensor.

What Dalsa indicates on page 9 is that with its 4:1 binning the S/N ratio is improved at low light levels (where read noise is the dominant noise source) by almost a factor of four, with the improvement declining as light level increases (so that photon shot noise is the main noise source) to a bit above 2.

[Warning about the summary on page 38: when electronic engineers talk about S/N ratios in sensors, a factor of two is 6dB, not 3dB.]

These numbers correspond to what one gets if read-noise (in electrons RMS) is not increased, since 4:1 binning will quadruple signal in electrons, which in turn will double photon shot noise, as it is proportional to the square root of signal.
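A small sketch of those two regimes (the 15-electron read noise and the signal levels below are assumed values, not Dalsa figures):

import math

def snr(signal_e, read_noise_e):
    # total noise = sqrt(shot^2 + read^2); shot noise in electrons is sqrt(signal)
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

read_noise = 15.0                          # assumed electrons RMS per read

for signal in (25.0, 10000.0):             # a read-noise-limited and a shot-noise-limited case
    single = snr(signal, read_noise)
    binned = snr(4 * signal, read_noise)   # 4:1 binning: four times the signal, one read
    print(signal, binned / single)         # roughly 3.5x improvement in the dark case, roughly 2x bright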

This Dalsa claim also implies that dark current noise is rather low compared to read-noise, as otherwise binning would surely about double dark current noise (assuming independence of the dark noise at the four binned photosites, which remember are not even quite adjacent to each other). If dark current noise dominates at low light levels, S/N ratio would only about double at low light levels with 4:1 binning.

This fits with what I have heard lately: dark current noise is only significant in long exposures, of order of a second or longer, not at "hand-holdable" shutter speeds. At least for CCD sensors; maybe good CMOS sensors have read-noise low enough to be comparable to dark current noise.
Title: larger sensors
Post by: John Sheehy on January 11, 2007, 01:42:41 pm
Quote
But that's 'analog think', John   . In my all-digital system, there will be lots of 'cliff edges'; instances where noise is equal to the signal. There are no half measures. A pixel element is either switched on for perfect, noise-free sharpness, or it's black. You don't get a microscopic black speck on a red flower petal that an ordinary camera lens can pick up. An algorithm should be able to work out, 'Hey, that black speck shouldn't be there', and turn the pixel on.

If your pixels are so small or insensitive that some don't get a photon in a red flower petal that is illuminated, there will be no black speck; it will just "not" contribute at all to local luminance, as it probably shouldn't.  Tiny pixels like that won't be intended mainly for 100% view; they will contribute to the overall local luminance.  An area that is all red, except for one black pixel, is clipped.  The highlights should only have a majority of pixels turned on in any given area; never all.  THAT is clipping.

Quote
On the other hand, if there was a cluster of black specks, the algorithm would let them be.
[a href=\"index.php?act=findpost&pid=95088\"][{POST_SNAPBACK}][/a]

If you give it enough thought, I think you will realize that there is no value in trying to outsmart shot noise.  It will only lead to more noise.  Shot noise is actually the very fabric of light.  You can't figure out a better truth than what it is telling you; if you want less shot noise, relative to signal, get more signal.  Don't fabricate it.
Title: larger sensors
Post by: bjanes on January 11, 2007, 05:19:05 pm
Quote
Here is one fairly recent reference, from December 2005 on a Dalsa 28MP color sensor with binning (http://www.dalsa.com/pi/documents/2005_DALSA_IEDM_Presentation.pdf)
Dalsa, and I, are referring to real binning on the sensor, not "software binning", which sounds like another name for downsampling. The electrons from groups of nearby photosites are merged as soon as they get to the edge of the sensor, before the fast read-out of a line along the edge of the sensor.
What Dalsa indicates on page 9 is that with its 4:1 binning the S/N ratio is improved at low light levels (where read noise is the dominant noise source) by almost a factor of four, with the improvement declining as light level increases (so that photon shot noise is the main noise source) to a bit above 2.

[Warning about the summary on page 38: when electronic engineers talk about S/N ratios in sensors, a factor of two is 6dB, not 3dB.]

These numbers correspond to what one gets if read-noise (in electrons RMS) is not increased, since 4:1 binning will quadruple signal in electrons, which in turn will double photon shot noise, as it is proportional to the square root of signal.

[a href=\"index.php?act=findpost&pid=95099\"][{POST_SNAPBACK}][/a]


Thanks for the info, BJL. The reference does show that John's software binning is not as effective as hardware binning at low levels of illumination. Since the S:N is 4x improved with 4:1 binning, the output of the superpixel is quadrupled as expected, but the read noise for the superpixel is hardly more than for that of a single pixel. However, at higher levels of illumination, the S:N advantage drops to 2:1 when shot noise predominates. In the latter instance, the read noise of the superpixel increases. This is what one might expect from the known effects of ISO on read noise: when more electrons are read with a lower ISO, the read noise increases. Therefore, at higher levels of illumination, hardware binning has no advantage.

Since a 7 MP image has enough image detail for an excellent 8 by 10 inch print, the Dalsa chip has a very nice feature there.

The reference is also interesting, since it shows how binning can be accomplished with a Bayer array. That is novel.

Bill
Title: larger sensors
Post by: bjanes on January 11, 2007, 05:27:30 pm
Quote
Capturing more photons is the only way to "deal with" shot noise.  Read noise is what needs to be dealt with.

[a href=\"index.php?act=findpost&pid=95083\"][{POST_SNAPBACK}][/a]

Yes, indeed, even though it may seem counterintuitive, the higher the shot noise the better since the S:N improves. An ISO 100 capture has more shot noise than an ISO 3200 capture.

Bill
Title: larger sensors
Post by: John Sheehy on January 11, 2007, 05:55:20 pm
Quote
Yes, indeed, even though it may seem counterintuitive, the higher the shot noise the better since the S:N improves. An ISO 100 capture has more shot noise than an ISO 3200 capture.
[a href=\"index.php?act=findpost&pid=95170\"][{POST_SNAPBACK}][/a]

This is why I often use the qualifiers "absolute" and "relative".  Shot noise is higher, in an absolute sense, with a stronger signal.  However, it is smaller, *relative* to the signal.

If you use the camera's metered exposure, the ISO 3200 image will have less absolute shot noise than the ISO 100 image, but the noise will be more visible at 3200, especially in the highlights, because it is stronger relative to the signal.
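Putting assumed numbers on that (a metered ISO 100 exposure collects 5 stops, i.e. 32x, more photons than a metered ISO 3200 exposure of the same scene; the 500-photon figure is arbitrary):

import math

photons_iso3200 = 500.0                    # assumed photon count for some patch at ISO 3200
photons_iso100 = photons_iso3200 * 32      # same patch, metered exposure at ISO 100

for label, n in (("ISO 3200", photons_iso3200), ("ISO 100", photons_iso100)):
    shot = math.sqrt(n)                    # absolute shot noise, in photons
    print(label, shot, shot / n)           # larger absolute noise at ISO 100, smaller relative noise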
Title: larger sensors
Post by: BJL on January 11, 2007, 07:35:02 pm
Quote
The reference is also interesting, since it shows how binning can be accomplished with a Bayer array. That is novel.
[a href=\"index.php?act=findpost&pid=95168\"][{POST_SNAPBACK}][/a]
Yes, Bayer array binning seems to be the latest trend: Kodak also does it in a new 10MP 4/3" format interline CCD sensor, the KAI-10100. That one also does 2:1 binning, giving 5MP. (This is probably the sensor in the Olympus E-400).
Title: larger sensors
Post by: John Sheehy on January 11, 2007, 09:23:26 pm
Quote
Here is one fairly recent reference, from December 2005 on a Dalsa 28MP color sensor with binning (http://www.dalsa.com/pi/documents/2005_DALSA_IEDM_Presentation.pdf)
Dalsa, and I, are referring to real binning on the sensor, not "software binning", which sounds like another name for downsampling.

It's different from downsampling in some ways.  It takes the image to a deeper bit depth (which would take an extra processing step with downsampling), and has no filtering, other than the loss of the high frequencies of the original in the process.  I don't see how software 2x2 binning would be any different from hardware 2x2 binning, other than the potential 1-stop decrease in blackframe read noise.  That would be the single benefit, AFAICT (other than write speed, storage concerns, etc).   It would seem to me that enabling this mode is something you'd only want to do in special circumstances, and I'd certainly hope that the binning was not the only way to provide the higher ISOs; you will still get more detailed highlights and midtones without the binning.

I wonder how close to 0.25x the read noise really gets.

Quote
The electrons from groups of nearby photosites are merged as soon as they get to the edge of the sensor, before the fast read-out of a line along the edge of the sensor.

What Dalsa indicates on page 9 is that with its 4:1 binning the S/N ratio is improved at low light levels (where read noise is the dominant noise source) by almost a factor of four, with the improvement declining as light level increases (so that photon shot noise is the main noise source) to a bit above 2.

About the same as software binning.

Quote
This Dalsa claim also implies that dark current noise is rather low compared to read-noise, as otherwise binning would surely about double dark current noise (assuming independence of the dark noise at the four binned photosites, which remember are not even quite adjacent to each other). If dark current noise dominates at low light levels, S/N ratio would only about double at low light levels with 4:1 binning.

This fits with what I have heard lately: dark current noise is only significant in long exposures, of order of a second or longer, not at "hand-holdable" shutter speeds. At least for CCD sensors; maybe good CMOS sensors have read-noise low enough to be comparable to dark current noise.
[a href=\"index.php?act=findpost&pid=95099\"][{POST_SNAPBACK}][/a]

Here's my 20D with 100% crops of the RAW greyscale at 6 different shutter speeds.  Black frames, windowed at 128 to 160 ADUs, with 2.2 gamma applied (effectively ISO 1600 pushed to ISO 102,400):

(http://www.pbase.com/jps_photo/image/72945525.jpg)

Clearly, only the 30s is significantly more noisy than the 1/1000.  I did 1/8000 too, but there were 7, and 6 are easier to format.  The 1/8000 was like the 1/1000.  The Std dev is 9.0 at 30s, 5.2 at 4 seconds, and 4.7 at 1/2 through 1/8000.  The max value is 4095 at 30s, 3179 at 4s, 653 at 1/2, and 263 at 1/15, with no significant reduction with shorter "exposures".
Title: larger sensors
Post by: Ray on January 11, 2007, 09:45:42 pm
Quote
If your pixels are so small or insensitive that some don't get a photon in a red flower petal that is illuminated, there will be no black speck; it will just "not" contribute at all to local luminance, as it probably shouldn't. 

John,
That's quite right! An actual, totally 'black speck', as seen on the monitor, could only occur if all 27 sub-pixel elements were switched off. (Remember, I'm talking about a 6 micron Foveon type pixel consisting of nine 2-micron Foveon sub-pixels: 27 photon collectors in total for each pixel seen on the monitor.)

The luminance range of the 6 micron Foveon pixel stretches from 'all 27 sub-pixels off' (black) to 'all 27 sub-pixels on' (white). The possible number of values in between these two extremes is given by 8 to the power of 9, ie 134m.

Clearly, it doesn't make any visible difference if a single sub-pixel is switched off when it should be switched on. The tonality of the 6 micron pixel would be altered so slightly, one wouldn't notice. But a few random sub-pixels, within the group of 27, that are in the wrong state, could make a visual difference.

Quote
If you give it enough thought, I think you will realize that there is no value in trying to outsmart shot noise.  It will only lead to more noise.  Shot noise is actually the very fabric of light.  You can't figure out a better truth than what it is telling you; if you want less shot noise, relative to signal, get more signal.  Don't fabricate it.

Maybe you are right. However, I haven't stated that shot noise will be distinguished from other types of noise in such a system. The purpose is to get as accurate a signal as possible. It makes no difference to the final result what the source of the noise is. Noise is noise whatever the source, ie. inaccuracy.

In these examples of isolated sub-pixels which are in the wrong state, it seems to me, if I've understood the nature of shot noise, that shot noise will often be a contributing factor. Let's look at what I imagine happens to a sub-pixel in the 'cliff edge' situation. Noise (from all sources) is 50.1%; signal is 49.9%. The sub-pixel is switched off because the signal threshold for switching the sub-pixel on has not been reached. We don't actually know that the signal is 49.9%. It doesn't really matter. The reality is, the sub-pixel is in a state of 'off' when it should be 'on'. How do we know it should be 'on'? Because there's no reason for it to be 'off' (except noise) if it's surrounded by a cluster of sub-pixels which are on.

Anyway, maybe this is a just a red herring and there's no need for an algorithm to make such decisions. My imaginary system is not founded on such a procedure. It simply occurred to me that maybe this could be a method of tackling photonic noise. If there are too many errors due to insufficient light, then maybe nothing can be done except increase exposure.

So let's ignore this imaginary noise reduction system, which would take an enormous amount of processing power anyway, and concentrate on the fundamental principle of a 6 micron Foveon type pixel that gets its tonality from the on/off states of 27 sub-pixel photon collectors.

Any flaws in that idea?  
Title: larger sensors
Post by: Ray on January 11, 2007, 11:23:15 pm
Continuing with the shot noise concept, let's try and flesh this out a bit more.

We have a photon detector that has received 50% of its signal from noise and 50% from the photographed target, through the lens. The color is red. The total signal strength is just below the threshold for switching the photon detector 'on'.

The neighbouring 'red' photon detector has received the same amount of other-than-photonic noise but a higher degree of photonic noise, so the signal through the lens is, say, 52% and total (non-photonic) noise 48%. The total signal strength, however, is greater by a factor that pushes it beyond the 'switch on' threshold.

We have 2 adjacent photon detectors that have received a borderline signal strength. Whatever the signal strength in our all-digital system, at the most fundamental level there's only right or wrong, on or off.

It doesn't matter if a particular 'borderline' detector has been switched on due to a random increase in non-photonic noise, or a random increase in photonic noise. The question is, 'which state is more accurate, on or off?"
Title: larger sensors
Post by: bjanes on January 12, 2007, 07:42:06 am
Quote
Dalsa, and I, are referring to real binning on the sensor, not "software binning", which sounds like another name for downsampling. The electrons from groups of nearby photosites are merged as soon as they get to the edge of the sensor, before the fast read-out of a line along the edge of the sensor.

What Dalsa indicates on page 9 is that with its 4:1 binning the S/N ratio is improved at low light levels (where read noise is the dominant noise source) by almost a factor of four, with the improvement declining as light level increases (so that photon shot noise is the main noise source) to a bit above 2.
Quote
About the same as software binning.
[a href=\"index.php?act=findpost&pid=95205\"][{POST_SNAPBACK}][/a]
[a href=\"index.php?act=findpost&pid=95099\"][{POST_SNAPBACK}][/a]

That is not how I understand hardware binning. In the case with a small electron count, the four pixels are binned into one superpixel, but the read noise is the same as for one of the smaller unbinned pixels. In the case of software binning you have four reads, with their accompanying noise, combined in the resulting downsized pixel.
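In other words (with an assumed 10-electron read noise per read; not a measured figure for any real sensor):

import math

read_noise_e = 10.0                        # assumed electrons RMS per read

# hardware binning: four charge packets are summed on-chip and read once
hw_read_noise = read_noise_e

# software binning: four independent reads are summed, so their read noises add in quadrature
sw_read_noise = math.sqrt(4) * read_noise_e

print(hw_read_noise, sw_read_noise)        # 10 vs 20 electrons on the same summed signal

Relative to the summed signal, that factor of two is the "1-stop" difference mentioned earlier in the thread.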

Bill
Title: larger sensors
Post by: John Sheehy on January 12, 2007, 08:06:21 am
Quote
That is not how I understand hardware binning. In the case with a small electron count, the four pixels are binned into one superpixel, but the read noise is the same as for one of the smaller unbinned pixels. In the case of software binning you have four reads with their accompaning noise combined in the resulting downsized pixel.

[a href=\"index.php?act=findpost&pid=95250\"][{POST_SNAPBACK}][/a]

Yes, but I replied to the shot noise figure.
Title: larger sensors
Post by: bjanes on January 12, 2007, 08:47:45 am
Quote
Yes, but I replied to the shot noise figure.
[a href=\"index.php?act=findpost&pid=95251\"][{POST_SNAPBACK}][/a]
Quote
Capturing more photons is the only way to "deal with" shot noise.  Read noise is what needs to be dealt with.
[a href=\"index.php?act=findpost&pid=95083\"][{POST_SNAPBACK}][/a]

John,

At times you stress read noise as shown above, but when it is convenient you ignore it. If you are in a low light situation where read noise is predominant, it is not wise to ignore it. Hardware binning is one solution and it is widely used in scientific applications: Nikon on Binning (http://www.microscopyu.com/tutorials/java/digitalimaging/signaltonoise/index.html)

Bill
Title: larger sensors
Post by: John Sheehy on January 12, 2007, 09:21:03 am
Quote
At times you stress read noise as shown above, but when it is convenient you ignore it. If you are in a low light situation where read noise is predominant, it is not wise to ignore it. Hardware binning is one solution and it is widely used in scientific applications: Nikon on Binning (http://www.microscopyu.com/tutorials/java/digitalimaging/signaltonoise/index.html)
[a href=\"index.php?act=findpost&pid=95259\"][{POST_SNAPBACK}][/a]

Bill, you are really being obnoxious now.  It is pretty obvious that you're trying to make me look stupid.  I've been polite up to now.

I didn't ignore the read noise issue; I didn't comment on it IN THAT SENTENCE.  My point is that other than read noise potential, software binning is just as good as hardware binning.

Hardware binning is not without compromise.  You lose detail.  Hardware binning is only without compromise when you don't want the detail.
Title: larger sensors
Post by: John Sheehy on January 12, 2007, 09:27:53 am
Quote
At times you stress read noise as shown above, but when it is convenient you ignore it. If you are in a low light situation where read noise is predominant, it is not wise to ignore it. Hardware binning is one solution and it is widely used in scientific applications: Nikon on Binning (http://www.microscopyu.com/tutorials/java/digitalimaging/signaltonoise/index.html)
[a href=\"index.php?act=findpost&pid=95259\"][{POST_SNAPBACK}][/a]

Yes, that's a nice applet, but remember, it is theoretical.  It would be interesting to see some real-world data.  It doesn't include all noises.  All cameras have read noise that is directly proportional to signal strength, and never achieve the theoretical S/N for the extreme highlights.  My XTi never goes above 100:1, for instance.
Title: larger sensors
Post by: BJL on January 12, 2007, 09:48:15 am
Quote
I don't see how software 2x2 binning would be any different than hardware 2x2 binning, other than the potential 1-stop decrease in blackframe read noise.
[a href=\"index.php?act=findpost&pid=95205\"][{POST_SNAPBACK}][/a]
Indeed, less shadow noise is the main image quality benefit: that is, I believe, why binning is so often used in situations like astronomy. The Kodak KAI-10100 color binning sensor has so far been officially announced only in a special astro-photography camera.
However, maybe for everyday photography as opposed to technical work, the realm where read-noise is significant is or will soon be at such low light levels that S/N ratio will always be unacceptable anyway due to photon shot noise. Then the best solution is probably setting the black point higher than this signal level, eliminating the noise entirely, and never mind binning.

Another benefit of binning over down-sampling is a faster frame rate: the Dalsa source I mention above says three times faster readout for 4:1 binning. This is because a major speed bottleneck of CCD read-out is reading a line of pixels along the edge of the sensor, and with binning, this is done with only the reduced number of super-pixels. By the way, reducing read-rate in pixels per second can reduce read-noise, so binning and still using the same frame rate could further reduce shadow noise.

Putting these two together, a sensor binning say from 16MP to 8MP (2:1) or 4MP (4:1) gains almost all the advantages of using a sensor of lower pixel count to start with: less shadow noise at a given exposure level and higher frame rates. This could eliminate the last main arguments against pushing pixel counts up to the maximum resolution level set by lenses or the needs of the user (so long as pixels stay big enough for good performance at lower ISO).

Even with on-sensor binning, down-sampling could still have its place too, like getting intermediate pixel count reductions of less than a factor of 2 or 4.
Title: larger sensors
Post by: bjanes on January 12, 2007, 10:11:31 am
Quote
Another benefit of binning over down-sampling is a faster frame rate: the Dalsa source I mention above says three times faster readout for 4:1 binning. This is because a major speed bottleneck of CCD read-out is reading a line of pixels along the edge of the sensor, and with binning, this is done with only the reduced number of super-pixels. By the way, reducing read-rate in pixels per second can reduce read-noise, so binning and still using the same frame rate could further reduce shadow noise.

Putting these two together, a sensor binning say from 16MP to 8MP (2:1) or 4MP (4:1) gains almost all the advantages of using a sensor of lower pixel count to start with: less shadow noise at a given exposure level and higher frame rates. This could eliminate the last main arguments against pushing pixel counts up to the maximum resolution level set by lenses or the needs of the user (so long as pixels stay big enough for good performance at lower ISO).

Even with on-sensor binning, down-sampling could still have its place too, like getting intermediate pixel count reductions of less than a factor of 2 or 4.
[a href=\"index.php?act=findpost&pid=95270\"][{POST_SNAPBACK}][/a]

As BJL has pointed out before, one should not assume that CCDs and CMOS sensors work alike. From a theoretical standpoint, I'm not sure that hardware binning would be of much use with CMOS (http://www.dalsa.com/markets/ccd_vs_cmos.asp), where the output from the pixel is already in the form of voltage rather than an electron packet as in the case of CCD. Since John works mainly with CMOS, perhaps he is right after all for his camera.

Bill
Title: larger sensors
Post by: Ray on January 12, 2007, 05:32:01 pm
I shall certainly be glad when my all-digital system hits the market. Won't have to worry about these issues   . With any signal above the noise floor (including shot noise) we'll get a perfect, pixel-sharp rendition; perfect within the resolution limits of the system, that is.

In fact, I imagine the first production models will receive a lot of criticism, just as the first audio CDs did. The defects in existing lenses will become much more apparent and the experts will have to explain that previously such defects were masked by read noise, shot noise, AA filters and so on, but some people will insist that they prefer the old mushy results they'd been accustomed to   .
Title: larger sensors
Post by: Ray on January 12, 2007, 07:08:43 pm
One of the things that has always worried me about our current analog/digital cameras is the sheer waste of lens resolution that occurs. For a camera such as the Canon 5D, for example, a lens ideally needs to have a strong MTF performance up to 50 lp/mm. Beyond that resolution, MTF can be crap as far as the sensor is concerned.

In fact, if it were possible to design a lens with a steep MTF fall-off beyond 50 lp/mm, results would be better with a cameras like the 5D because it could dispense with its AA filter.

This factor has provided much fuel to the debate of film versus digital. We know that B&W films such as T-Max 100 have an MTF response as high as 60% at 100 lp/mm. We know, with an appropriately sturdy tripod and MLU, and with a bit of luck with film flatness, that we can capture 100 lp/mm with a good 35mm lens and a contrasty scene. Not even the next generation of FF 35mm DSLRs will be able to achieve this. However, don't think for one moment I am recommending a return to film. I am merely pointing out that there is more resolving power in 35mm lenses than can be exploited with our current analog sensors.

The reason for this, I believe, is due to noise. A pixel from a current digital camera, whatever the strength of the signal and however good the lighting, will always contain a portion of noise. The higher the signal strength, the smaller the noise becomes as a proportion of the signal. At some point the noise becomes insignificant and of no practical concern, but it's still there, embedded within the signal (at least some of it. The stuff that hasn't been removed with black frame subtraction etc).

If a pixel is small and unable to collect many photons, the noise will be quite significant even in good lighting. Even when the well is full, the embedded noise will likely be noticeable. If we were to pack 2 micron photon detectors on a 35mm sensor, the resolving power of the sensor would be enormous (about 250 lp/mm).

Unfortunately, even the best lenses, like the discontinued Canon 200/1.8 at f4, would deliver a pretty weak signal at 250 lp/mm, so let's be realistic and not set our sights above 100 lp/mm. At 100 lp/mm the signal is still going to be pretty low. Our 2 micron analog photon detectors would pick it up, but in many cases (too many) the signal would be hardly greater than the noise. Who would be interested in lots of pixels that consisted of, say 45% noise?

Now back to my all-digital system. 45% noise? No problem. Switch the pixel on for perfect clarity   .
Title: larger sensors
Post by: BJL on January 15, 2007, 12:53:16 pm
Quote
From a theoretical standpoint, I'm not sure that hardware binning would be of much use with CMOS (http://www.dalsa.com/markets/ccd_vs_cmos.asp) where the output from the pixel is already in the form of voltage rather than an electron packet in the case of CCD.
[a href=\"index.php?act=findpost&pid=95275\"][{POST_SNAPBACK}][/a]
So long as the signal is still analog, it is susceptible to additional read-noise from analog processes like charge-to-voltage conversion, pre-amplification and A/D conversion, so true (hardware) binning could still be useful on a CMOS sensor. However if CMOS sensors can amplify the signal significantly right at the photo-site, the effect of subsequent noise could be reduced to insignificant levels.

A more extreme possibility is A/D conversion right at the photo-site. This is apparently used in some special sensors used in some surveillance cameras, or at least proposed for that use. (These are the same ones that eliminate DR limitations of highlight headroom completely, by reading out highlight pixels earlier.)
Title: larger sensors
Post by: bjanes on January 15, 2007, 03:15:55 pm
Quote
So long as the signal is still analog, it is susceptible to additional read-noise from analog processes like charge-to-voltage conversion, pre-amplification and A/D conversion, so true (hardware) binning could still be useful on a CMOS sensor. However if CMOS sensors can amplify the signal significantly right at the photo-site, the effect of subsequent noise could be reduced to insignificant levels.

A more extreme possibility is A/D conversion right at the photo-site. This is apparently used is some special sensors used in some surveillance cameras, or at least proposed for that use. (These are the same ones that eliminate DR limitations of highlight headroom completely, by reading out highlight pixels earlier.)
[a href=\"index.php?act=findpost&pid=95841\"][{POST_SNAPBACK}][/a]

As I understand CMOS as explained in the Dalsa reference, the output of the  CMOS pixel is already in the form of analog voltage, the pre-amplification and charge to voltage conversion having been done by the circuitry on each pixel site. The A/D conversion involves converting the voltage to a pixel value. Read noise would not be involved at this stage, the conversion having been done on the pixel site.

Bill
Title: larger sensors
Post by: John Sheehy on January 15, 2007, 05:14:15 pm
Quote
Since John works mainly with CMOS, perhaps he is right afterall for his camera.
[a href=\"index.php?act=findpost&pid=95275\"][{POST_SNAPBACK}][/a]

Right about what?

Definition of gain?

Image noise vs pixel noise?

Hardware and software binning being pretty much the same for everything but read noises on the chip (there are other read noises)?

I'm not even sure we're in the same conversation sometimes.  You seem to read far too much innuendo into what I write.

If I say something like "hardware binning for reduced read noise over software binning is not without compromise", it doesn't mean I'm shooting the idea down, with my thumbs pointing at the floor, and giving a raspberry.  It means that there is a compromise.  I'd certainly want the hardware binning if I were recording a movie, or if the image was going to be reduced anyway.
Title: larger sensors
Post by: John Sheehy on January 15, 2007, 05:21:26 pm
Quote
However, maybe for everyday photography as opposed to technical work, the realm where read-noise is significant is or will soon be at such low light levels that S/N ratio will always be unacceptable anyway due to photon shot noise. Then the best solution is probably setting the black point higher than this signal level, eliminating the noise entirely, and never mind binning.[a href=\"index.php?act=findpost&pid=95270\"][{POST_SNAPBACK}][/a]

Raising the blackpoint will only reduce the visibility of noise if the new blackpoint falls into a range that has no signal below it that has noise peaks surpassing the new blackpoint, and there is no signal in the immediate upper range of the new blackpoint.
If you have an image of a dark gradient, darkest on the left, and brightest on the right, raising the blackpoint will blacken the left edge, hiding noise, but the range with the new blackpoint gets even noisier.  Raising the blackpoint helps mainly when there is a black or almost black area, and there is a huge gap in the histogram above it.
Title: larger sensors
Post by: bjanes on January 15, 2007, 05:46:46 pm
Quote
Bill, you are really being obnoxious now.  It is pretty obvious that you're trying to make me look stupid.  I've been polite up to now.
[a href=\"index.php?act=findpost&pid=95264\"][{POST_SNAPBACK}][/a]

John, you are far from stupid and in previous posts I have acknowledged that when I have disagreed with you in the past, I have usually been wrong. However, none of us is correct all the time, and sometimes the teacher can learn from the student.

Bill
Title: larger sensors
Post by: John Sheehy on January 15, 2007, 05:52:33 pm
Quote
However, none of us is correct all the time, and sometimes the teacher can learn from the student.
[a href=\"index.php?act=findpost&pid=95880\"][{POST_SNAPBACK}][/a]

I agree; in fact, you made a statement a few weeks ago in which you said that whenever we disagree, I turn out to be right, and I almost replied to say that you should never believe something to be 100% true just because I wrote it.  My contributions, though very confident at times, are meant to be food for thought, not dogma.

However, I really don't know what it is that I'd be wrong about for CCDs (but not CMOS).  It's only fair to say what you think someone is wrong about when you imply that they could be wrong.
Title: larger sensors
Post by: BJL on January 16, 2007, 05:01:53 pm
Quote
As I understand CMOS as explained in the Dalsa reference, the output of the  CMOS pixel is already in the form of analog voltage, the pre-amplification and charge to voltage conversion having been done by the circuitry on each pixel site.
[a href=\"index.php?act=findpost&pid=95859\"][{POST_SNAPBACK}][/a]
That seems reasonable. But pre-amplification still happens, just in a different place, and that still opens the possibility that binning the electrons from several pixels before pre-amplification could be useful in reducing the effect of pre-amplifier noise. But binning would be done "closer to home", before moving the electrons to the edge of the sensor as is done with CCD binning.

By the way, it seems that most read noise with CCD's occurs during the process of moving the electrons from each photo-site to the edges and then corners of the sensor at high rates, with lower read rates being one way of substantially reducing total dark noise in scientific sensors. (Why does no CCD DSLR have a lower noise, very low frame rate mode?) So if CMOS sensors can pre-amplify before this moving, they can avoid that major source of dark noise, allowing well implemented CMOS sensors to have far lower total dark noise.
Title: larger sensors
Post by: BJL on January 16, 2007, 05:16:29 pm
Quote
If you have an image of a dark gradient, darkest on the left, and brightest on the right, raising the blackpoint will blacken the left edge, hiding noise, but the range with the new blackpoint gets even noisier.
[a href=\"index.php?act=findpost&pid=95875\"][{POST_SNAPBACK}][/a]
Can you explain why? I am imagining, for example, setting the black point at 25 electrons (below which the S/N ratio is at best a miserable 5:1) so that pixels with signals less than 25 are set to level zero (pure black), with all "better lit" pixels keeping the same signal and the same noise, and so the same S/N ratio. Why would the visible noise level increase in those unchanged brighter-than-black-point pixels?

Perhaps I am misusing the words "black point". Or perhaps there are problems with an abrupt cut-off, even though it is only turning very dark gray to black. Maybe a roll-off at low pixel levels would work better.
Title: larger sensors
Post by: John Sheehy on January 16, 2007, 06:37:08 pm
Quote
Can you explain why? I am imagining, for example, setting the black point at 25 electrons (below which the S/N ratio is at best a miserable 5:1) so that pixels with signals less than 25 are set to level zero (pure black), with all "better lit" pixels keeping the same signal and the same noise, and so the same S/N ratio. Why would the visible noise level increase in those unchanged brighter-than-black-point pixels?

Because the signal-to-noise ratio is no longer the same when you raise the blackpoint.  The signal is now closer to zero, just above the blackpoint.  Whatever subject matter you have in that range will replace your old near-black, with more shot noise (and scalar line noise to a smaller degree), since the new black was originally captured as signal.  Pushing low signals towards black by raising the blackpoint does not lower the absolute noise.  With a Canon 20D at ISO 1600, blackframe read noise is 4.7 ADU, which is about 3.75 electron units.  The total noise is (5^2+3.75^2)^0.5 = ~6.5, 30% greater than at the true blackpoint.  Then, in order to maintain the same DR in the output, the contrast of this noise is increased slightly for small increases in blackpoint, and more visibly for larger ones.

I am assuming that the adjusted RAW DR will represent the same original DR in the output medium.

Quote
Perhaps I am misusing the words "black point". Or perhaps there are problems with an abrupt cut-off, even though it is only turning very dark gray to black. Maybe a roll-off at low pixel levels would work better.
[a href=\"index.php?act=findpost&pid=96046\"][{POST_SNAPBACK}][/a]

Perhaps what you want to do is clip as a lower limit at a greypoint; IOW, everything at 25 electrons and less is now 25; rather than 0.  That would keep the visible S/N up in that range.  In fact, you can just add some value to true black and get a similar effect, without losing anything.  This is effectively what the LCD on the camera does; it doesn't have true black, so the noise is harder to see.

Abrupt cut-offs are a problem, in many ways.  Even blackpointing at real black can cause inferior results with further processing; any kind of non-sensor binning or downsampling works out better with data that hasn't been blackpointed yet, as the noise about black is almost equally positive and negative; clipping first before downsizing (or stacking, as well) results in non-linear deep shadows, since all the contributions are 0 or positive, and more of the deep signal has been clipped away as well.
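
A small simulation of that last point, assuming (purely for illustration) a 2 electron deep-shadow signal and 3.75 electrons of Gaussian read noise:

Code:
import numpy as np

rng = np.random.default_rng(0)
true_signal = 2.0    # electrons: a very deep shadow
read_noise = 3.75    # electrons RMS (hypothetical figure)

# Raw values before black-pointing: the noise swings negative as well as positive.
samples = rng.normal(true_signal, read_noise, size=100_000)

mean_unclipped = samples.mean()                      # averaging (binning/downsampling) first keeps ~2.0 e-
mean_clipped = np.clip(samples, 0, None).mean()      # clipping to black first biases the shadow upward

print(f"average, no clipping:    {mean_unclipped:.2f} e-")   # ~2.0
print(f"average after clipping:  {mean_clipped:.2f} e-")     # ~2.7, the deep shadows go non-linear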
Title: larger sensors
Post by: Ray on January 16, 2007, 10:58:24 pm
This is all very interesting and illuminating. John Sheehy really seems to know his stuff   .

However, I'd really like some feed-back on my all-digital idea. You're all very quiet on this issue; perhaps because none of you want to make me appear like a complete chump (very nice of you   ).

I'm struggling to find any major flaw in the idea, apart from the manufacturing difficulties of producing a Foveon type chip with so many pixels and the slow speed of processing such large RAW files with current processors.

My first objection to this idea was the likelihood there wouldn't be sufficient tonality. If there's no distinction to be made between a 'sub-pixel element' being switched on by a strong signal or by a weak signal, then possibly tonality goes down the drain.

However, 2 micron photon detectors simply would not receive strong signals from 35mm lenses. The whole idea of the Olympus 4/3rds system was that 35mm lenses could not resolve anything below 5 microns, but Zuiko lenses could. The 5 micron limit for current analog DSLRs and MFDBs is due to the fact that analog systems have poor S/N ratios below this size. The pixels would be too noisy for a quality system.

All that's required (conceptually) in my all-digital system, is that the camera be aware of its own noise. Any signal, for any color, that results in an increase above that noise floor, results in the sub-pixel element being switched on.

Where's the flaw, please?
Title: larger sensors
Post by: BJL on January 18, 2007, 03:59:12 pm
Quote
Because the signal-to-noise ratio is no longer the same when you raise the blackpoint.  The signal is now closer to zero, just above the blackpoint.  Whatever subject matter you have in that range will replace your old near-black, with more shot noise (and scalar line noise to a smaller degree), since the new black was originally captured as signal.
[a href=\"index.php?act=findpost&pid=96069\"][{POST_SNAPBACK}][/a]
It seems I was misusing black-point, or using it differently than you. I was thinking of something often done in post-processing, where one declares that pixels at and below a certain level are transformed to level zero, with, I suppose, some scaling down of levels a bit above that to avoid a sudden drop from dark gray to pure black. This is a purely digital process, so the pixels not blacked out retain their S/N ratio, at least as far as the effects of noise from the analog stages. Hopefully discretization noise (from rounding the new smaller levels to integer levels) is not too noticeable.
Title: larger sensors
Post by: John Sheehy on January 18, 2007, 09:29:53 pm
Quote
It seems I was misusing black-point, or using it differently than you. I was thinking of something often done in post-processing, where one declares that pixels at and below a certain level are transformed to level zero, with, I suppose, some scaling down of levels a bit above that to avoid a sudden drop from dark gray to pure black. This is a purely digital process, so the pixels not blacked out retain their S/N ratio, at least as far as the effects of noise from the analog stages. Hopefully discretization noise (from rounding the new smaller levels to integer levels) is not too noticeable.
[a href=\"index.php?act=findpost&pid=96449\"][{POST_SNAPBACK}][/a]

What you explain above still sounds like clipping.  If a level of 25 electrons has a S/N of 4, and you reduce 25 electrons to 0 electrons, you still see the image as if it were a signal of 0 electrons, but with 6.5 electrons of noise instead of 5.  You don't see the S/N that it is supposed to have.

What I think something like ACR's "shadows" control does is apply a curve, so that the contrast of both the signal and the noise are reduced in these shadow regions (and increased in midtones).
Title: larger sensors
Post by: John Sheehy on January 21, 2007, 07:34:09 am
Quote
What you explain above still sounds like clipping.  If a level of 25 electrons has a S/N of 4, and you reduce 25 electrons to 0 electrons, you still see the image as if it were a signal of 0 electrons, but with 6.5 electrons of noise instead of 5.[a href=\"index.php?act=findpost&pid=96509\"][{POST_SNAPBACK}][/a]

5 was not the figure I intended.  5 is the shot component of the 25 electron signal.  What used to be at zero before the clipping was the blackframe read noise; 3.75 in the 20D example.  So, instead of just having 3.75 electron units of noise at the black of the resulting image, you have 6.5 ((25+3.75^2)^0.5); almost twice as much.  The statistics will actually scale to about 60% of the pre-clipping/pre-blackpointed values, as the curve is sliced in half and the mean moves above the clipping point, but this happens to both the 0 electron clip and the 25-electron clip.
Title: larger sensors
Post by: BJL on January 22, 2007, 11:58:51 am
Quote
5 is the shot component of the 25 electron signal.  What used to be at zero before the clipping was the blackframe read noise; 3.75 in the 20D example.  So, instead of just having 3.75 electron units of noise at the black of the resulting image, you have 6.5 ((25+3.75^2)^0.5); almost twice as much
[a href=\"index.php?act=findpost&pid=96820\"][{POST_SNAPBACK}][/a]
You have lost me, so let me describe more explicitly a modified proposal, based in part on your ideas. I will describe it in terms of electron counts as indicated by A/D convertor output levels. For concreteness, I consider a sensor with well capacity 25,000 (as in the 5.4 micron photo-sites of the Olympus E-500, so probably typical of current SLR's)

A/D output corresponding to 25 electrons or less: set to the same level as if no electrons were detected.
A/D output 25 to 25,000: scale linearly to the range 0-25,000, so 25->0, 25,000->25,000 and the slope in between is 1000/999. So roughly:
25 -> 0
50->25.025
75->50.050

It seems to me that noise induced fluctuations between nearby pixels are amplified by the factor 1000/999, so essentially unchanged. Is that your point?

This raises a perceptual question: when looking at very dark parts of an image, but with eyes adapted to the overall luminosity level of the image, does the detectability of the noise fluctuations depend on the ratio of fluctuation size to the luminosity in that dark part of the image, or relative to the overall luminosity, or something in between?

If there really is a problem here, my next idea is simply increasing the amount of spatial averaging (noise reduction processing) done at low levels, so that at 25 electrons or less a lot of resolution is sacrificed to avoid visible noise. All this based on the idea that below about 100 photo-electrons, S/N is less than Kodak's "minimum acceptable" guideline of 10:1, and this should only ever be the case in deep shadows below the level of significant detail. (25 electrons is about 10 stops below the maximum signal of 25,000, so roughly seven stops below mid-tones at a base ISO speed of, say, 100, and still a good three stops below mid-tones even at sixteen times base ISO speed, say 1600.)
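
Written out as a couple of lines of Python, as a sketch of the remapping described above (using the 25 and 25,000 electron figures from this example):

Code:
def remap(e, black=25.0, full_well=25000.0):
    # Map 25..25,000 electrons linearly onto 0..25,000; anything at or below 25 becomes pure black.
    if e <= black:
        return 0.0
    return (e - black) * full_well / (full_well - black)   # slope = 25000/24975 = 1000/999

for e in (25, 50, 75, 25000):
    print(e, "->", round(remap(e), 3))   # 25 -> 0, 50 -> 25.025, 75 -> 50.05, 25000 -> 25000.0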
Title: larger sensors
Post by: John Sheehy on January 23, 2007, 09:46:50 am
Quote
You have lost me, so let me describe more explicitly a modified proposal, based in part on your ideas. I will describe it in terms of electron counts as indicated by A/D convertor output levels. For concreteness, I consider a sensor with well capacity 25,000 (as in the 5.4 micron photo-sites of the Olympus E-500, so probably typical of current SLR's)

A/D output corresponding to 25 electrons or less: set to the same level as if no electrons were detected.
A/D output 25 to 25,000: scale linearly to the range 0-25,000, so 25->0, 25,000->25,000 and the slope in between is 1000/999. So roughly:
25 -> 0
50->25.025
75->50.050
That's actually quite unnecessary.  Losing 25 out of 25000 electron counts reduces highlight headroom by only .0014 stops.

Quote
It seems to me that noise induced fluctuations between nearby pixels are amplified by the factor 1000/999, so essentially unchanged. Is that your point?
Not exactly; my basic point is that raising the blackpoint is just like moving a pile of dirt from one location to another, adding more dirt to it, and erasing the original location.  The noise isn't "down there", per se, to be clipped away.  The noise is everywhere; at every tonal level.  In an absolute measurement, noise increases at higher tonal levels, but generally decreases relative to signal.  When you take a signal that used to be above black, and make it black by clipping to 0, you have a new black with a higher level of noise than what used to be the noise level at the old black.  The more you raise the blackpoint, the more sharply the S/N ratio drops in the range immediately above the clipping point.  The only time that simply raising the blackpoint works is when you raise it to a tonal level where no signal will be directly above it; then there is no signal to experience the great loss in S/N.

Quote
This raises a perceptual question: when looking at very dark parts of an image, but with eyes adapted to the overall luminosity level of the image, does the detectability of the noise fluctuations depend on the ratio of fluctuation size to the luminosity in that dark part of the image, or relative to the overall luminosity, or something in between?
It seems that it exists, to an extent, relative to the scene, but things outside the frame have an effect, too.  You're going to see more shadow tones and noise in a dark room, in an image lacking in highlights, full-screen, than you will with the same levels with bright highlight areas, or in a window on a white desktop.

For the levels in your example (25 electrons at ISO 100), I doubt you will see much of any change when you move the blackpoint, if you are not pushing the exposure index in your render.  You have to use the Shadow/highlight tool or something like it, aggressively, to see such a change.  25 electrons is only 2 to 4 ADU at ISO 100 for DSLRs.

Quote
If there really is a problem here, my next idea is simply increasing the amount of spatial averaging (noise reduction processing) done at low levels, so that at 25 electrons or less a lot of resolution is sacrificed to avoid visible noise.
That should work; I don't know why more converters don't do something like that; it seems that they generally soften their highlights, too, when you apply aggressive NR.  While waiting for the feature, you can render one conversion with sharp detail, and one with lots of NR, and use a luminance mask to apply one over the other.

Quote
All this based on the idea that below about 100 photo-electrons, S/N is less than Kodak's "minimum acceptable" guideline of 10:1, and this should only ever be the case in deep shadows below the level of significant detail. (25 electrons is about 10 stops below the maximum signal of 25,000, so roughly seven stops below mid-tones at a base ISO speed of, say, 100, and still a good three stops below mid-tones even at sixteen times base ISO speed, say 1600.)
[a href=\"index.php?act=findpost&pid=96997\"][{POST_SNAPBACK}][/a]
Well, with most current DSLRs, 25 electrons of signal at ISO 100 is only 2 to 4 ADU above black.  With your 25,000 electron well at ISO 100 it would be about 4 ADU, so let's use that.  The read noise is much stronger than the shot noise there; read noise is about 2 ADU for a typical DSLR at ISO 100.  That's about 12 electrons, so you have a signal of 25 electrons, 5 electrons of shot noise, and 12 electrons of read noise.  That's a total noise of about 13 electrons, only 1 electron stronger than the read noise itself; the shot noise is almost totally irrelevant, as the read noise predominates by a wide margin at this level.  The lowest ISOs on DSLRs are mostly crippled by read noise, not shot noise, in the shadows.
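
The arithmetic in that last paragraph, spelled out (independent noise sources add in quadrature; the 12 electron read-noise figure is the rough ISO 100 estimate above, not a measured value):

Code:
import math

signal = 25.0                   # electrons
shot_noise = math.sqrt(signal)  # ~5 electrons of photon shot noise
read_noise = 12.0               # electrons, rough read noise at ISO 100

total = math.sqrt(shot_noise**2 + read_noise**2)
print(f"total noise ~ {total:.0f} e-, S/N ~ {signal / total:.1f}")   # ~13 e-, S/N ~ 1.9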
Title: larger sensors
Post by: BJL on January 23, 2007, 11:32:29 am
Quote
That's actually quite unnecessary.  Losing 25 out of 25000 electron counts reduces highlight headroom by only .0014 stops.
[a href=\"index.php?act=findpost&pid=97159\"][{POST_SNAPBACK}][/a]
Indeed, I was being pedantic: alright, I could have just said subtract about 25e, which is apparently about 2 to 4 ADU, and zero out if the result is less than 0 ADU. Noise induced variations in the levels of nearby pixels stay the same but luminance decreases, which is another way of thinking about the problem you are talking about, I think.

Quote
That [my new suggestion of more smoothing at low levels] should work; I don't know why more converters don't do something like that; it seems that they generally soften their highlights, too, when you apply aggressive NR.  While waiting for the feature, you can render one conversion with sharp detail, and one with lots of NR, and use a luminance mask to apply one over the other.
[a href=\"index.php?act=findpost&pid=97159\"][{POST_SNAPBACK}][/a]
I think we agree on a plan then! Maybe at least good NR tools do something like this.

Quote
The lowest ISOs on DSLRs are mostly crippled by read noise, not shot noise, in the shadows.
[a href=\"index.php?act=findpost&pid=97159\"][{POST_SNAPBACK}][/a]
That seems to be the case, at least for now. Suggesting perhaps that one rule for choice of exposure index (ISO speed) is rather film like: use a high enough EI to get the levels of the shadow regions up to where you want them (within the constraints of highlight head-room), to protect the signal from read-noise introduced after pre-amplification, or part way through pre-amplification. And if all else fails, bin pixels!
Title: larger sensors
Post by: John Sheehy on January 23, 2007, 11:50:17 am
Quote
That seems to be the case, at least for now. Suggesting perhaps that one rule for choice of exposure index (ISO speed) is rather film like: use a high enough EI to get the levels of the shadow regions up to where you want them (within the constraints of highlight head-room), to protect the signal from read-noise introduced after pre-amplification, or part way through pre-amplification. And if all else fails, bin pixels!
[a href=\"index.php?act=findpost&pid=97177\"][{POST_SNAPBACK}][/a]

If you can bin as the Dalsa does in theory, with a real reduction in read noise beyond what software binning can do, you gain something, but you lose a lot, too, in a single image, in terms of resolution.  I once believed the mantra that less noise per pixel was good as an end in itself, but every simulation or experiment I try suggests that low resolution is worse than noise, and an exaggerator of existing noise.

The Dalsa approach might work very well if you take two images; one at full resolution, and one at 1/4MP resolution - and use a luminance mask from the low-res image to blend the images together.  Or, it could be used with multiple exposures in low-res mode with slight registration differences between exposures, and stacked with sub-pixel alignment (a good idea for a low-res, aliasing camera like the Sigma SD9, as well).
Title: larger sensors
Post by: BJL on January 23, 2007, 03:17:36 pm
Quote
If you can bin as the Dalsa does in theory ...
[a href=\"index.php?act=findpost&pid=97179\"][{POST_SNAPBACK}][/a]
... or as Kodak apparently does in practice in the new KAI-10100, with the option of 2:1 binning to non-square pixels for milder loss of resolution. See this PowerPoint on SBIG's forthcoming astronomy cameras: http://www.sbig.com/aic2006/AIC2006.PPT

Quote
I once believed the mantra that less noise per pixel was good as an end in itself, but every simulation or experiment I try suggests that low resolution is worse than noise, and an exaggerator of existing noise.
[a href=\"index.php?act=findpost&pid=97179\"][{POST_SNAPBACK}][/a]
I am inclined to agree; once you factor in the practical effects of getting the same print size at higher PPI from the higher pixel count image (and/or the latitude for more NR processing), fewer, bigger pixels might lose a lot of their IQ appeal. Maybe all I want is good dynamic range at low ISO, and suitable noise processing at higher ISO. (Maybe binning is more relevant to technical work with extremes of dynamic range like astronomy.)

Quote
The Dalsa approach might work very well if you take two images; one at full resolution, and one at 1/4MP resolution ...
[a href=\"index.php?act=findpost&pid=97179\"][{POST_SNAPBACK}][/a]
But maybe if you have the time for two exposures, the second exposure can be long enough to have no need for binning or other resolution reduction.
Title: larger sensors
Post by: Ray on January 23, 2007, 11:58:42 pm
I notice at 'dpreview news' that Sharp have announced a 1/2.5" sensor with pixel pitch of just 1.75 microns.

Quote
Sharp Japan has announced a new 1/2.5" CCD, now packing a frankly shocking eight million pixels into an area measuring just 5.8 x 4.3 mm. It has a pixel pitch of just 1.75 µm, which Sharp are proud to announce is the smallest in its class. Is this a good thing? I think probably not, as we will see manufacturers cramming this sensor into existing designs with average quality lenses and then claiming to deliver high sensitivities such as ISO 1600. Progress marches on, at least the marketing department will be happy. (21:15 GMT)

Now, to get back to my concept of the true digital sensor, a 7 micron Foveon type pixel could consist of 16 Foveon sub-pixels, or 48 separate photon detectors over three layers.

The number of possible combinations (or colors) would then be 8 to the power of 16 which, according to my maths, represents a theoretically possible 2.8 thousand trillion colors. (Perhaps I'm out by a factor of 10, but never mind. We have room to move, here.   ).
Title: larger sensors
Post by: John Sheehy on January 25, 2007, 08:54:02 pm
Quote
I notice at 'dpreview news' that Sharp have announced a 1/2.5" sensor with pixel pitch of just 1.75 microns.
Now, to get back to my concept of the true digital sensor, a 7 micron Foveon type pixel could consist of 16 Foveon sub-pixels, or 48 separate photon detectors over three layers.

That's 48 photons maximum for a 7 micron pixel pitch.  That's way too few photons captured.  That will mean lots of shot noise.  True even if you make each photobit trigger when X number of photons have struck it.

Quote
The number of possible combinations (or colors) would then be 8 to the power of 16 which, according to my maths, represents a theoretically possible 2.8 thousand trillion colors. (Perhaps I'm out by a factor of 10, but never mind. We have room to move, here.   ).
[a href=\"index.php?act=findpost&pid=97262\"][{POST_SNAPBACK}][/a]

The number you have calculated, 281,474,976,710,656 - is the number of possible unique combinations within each superpixel, noting the status of each individual photobit.  You will not have any use for this information when you make your superpixel from the 48 bits.  All that matters is how many red, how many green, and how many blue photobits are triggered in each superpixel.  These values are 0 to 15. There are only 16^3 = 4096 colors possible for each 7u "superpixel".  These superpixels are far too large to have this little bit depth!  Your superpixels (Canon 10D size) are only 4 bits per color channel!

Finally, I understand what you are saying (your previous attempts didn't register with me), and finally, as you had hoped, your idea is shot down!  
Title: larger sensors
Post by: Ray on January 25, 2007, 10:40:52 pm
Quote
That's 48 photons maximum for a 7 micron pixel pitch.  That's way too few photons captured.  That will mean lots of shot noise.  [a href=\"index.php?act=findpost&pid=97569\"][{POST_SNAPBACK}][/a]

Thanks for exercising your mind on this, John. But I don't see this at all as being 48 photons. It's 48 distinct noise-free values grouped into parcels of 3, each parcel having a possible 8 different combinations of red, blue and green. The fact that a red value in one parcel is the same as a red value in another parcel, or that a green value in one parcel is the same as a green value in all other parcels, should not alter the fact that each parcel can have 8 different values. 16 parcels, each having a theoretical 8 different values, amount to a possible 2.8 thousand trillion colors for the final 7 micron pixel.

In practice, of course, you would not get anywhere near that number, even assuming we had the processing power. When we produce a high resolution image in 8 bit with a theoretical 16.7 million colors, there's almost always nowhere near that number of actual distinct colors in the image. It's probably more like 50,000, perhaps 100,000 at the most.
Title: larger sensors
Post by: John Sheehy on January 25, 2007, 11:03:15 pm
Quote
Thanks for exercising your mind on this, John. But I don't see this at all as being 48 photons.

It really doesn't matter if it's 48 photons, or 48 potential thresholds requiring any number of photons.

Quote
It's 48 distinct noise-free values grouped into parcels of 3, each parcel having a possible 8 different combinations of red, blue and green. The fact that a red value in one parcel is the same as a red value in another parcel, or that a green value in one parcel is the same as a green value in all other parcels, should not alter the fact that each parcel can have 8 different values. 16 parcels, each having a theoretical 8 different values, amount to a possible 2.8 thousand trillion colors for the final 7 micron pixel.

No, they don't.  There are that many possible smaller images within the superpixel.  The number of RGB intensities as a 7u unit are extremely finite; there are only 4096 of them.  You're counting the number of possible images, not the total number of possible colors.

And your "noise-free" thing is nothing more than fantasy.  You sound like Yogi Bear, looking for free lunch.

You need lots of bit depth and/or lots of pixels to get high image color, not drawing a box around every group of 16 extremely inefficient pixels.
Title: larger sensors
Post by: Ray on January 25, 2007, 11:56:45 pm
Quote
The number of RGB intensities as a 7u unit are extremely finite; there are only 4096 of them.

That's within a 12 bit system, is it? With the future computing power that I'm talking about, we'll be way ahead of a mere 12 bits   .

Quote
It really doesn't matter if it's 48 photons, or 48 potential thresholds requiring any number of photons.

It matters greatly. It's more of a threshold that's flexible between the noise floor of the system and a few photons above that noise floor, not any number of photons above that threshold. No 35mm lens can deliver 70% MTF at a 1.75 micron spacing. The difference between a sub-pixel element that's switched on with 500 photons as opposed to another pixel that's switched on with an 800 photon signal, represents the 'inaccuracy' of the system. I'm not trying to describe absolute perfection here.

In my system, we're gathering 'grass roots' data from the lens. Maximising the potential of the lens with perhaps a bit of help from DXO type algorithms.

Quote
You need lots of bit depth and/or lots of pixels to get high image color, not drawing a box around every group of 16 extremely inefficient pixels.

First, such pixels are not inefficient. All they require is any signal above the noise floor for 100% efficiency (within the resolution limits of the system).

The number of (say 7 micron) pixels depends on the size of the sensor and the processing power to handle that number. 2.8 thousand trillion colors is probably serious overkill   . You can change the pixel sizes and sub-pixel sizes to suit.
Title: larger sensors
Post by: John Sheehy on January 26, 2007, 06:46:26 pm
Quote
That's within a 12 bit system, is it? With the future computing power that I'm talking about, we'll be way ahead of a mere 12 bits   .

Your system IS a 12-bit system (4 bits per channel) for each 7u super-pixel.  The superpixel can only recognize 16 levels for each color channel.

Quote
It matters greatly. It's more of a threshold that's flexible between the noise floor of the system and a few photons above that noise floor, not any number of photons above that threshold.

Thresholds above the noise floor do not eliminate read noise, at all.  Read noise is there in every signal.  If your signal were somehow magically uniform, and high, the pattern of noise in the thresholding would be determined by the blackframe noise upon which the signal is added, if the signal level is at the threshold.  If the signal is a little below the threshold, you will have black for all sub-pixels, and black for the superpixel.  If the signal is above the threshold, then you will have all white.  Not a very useful system.  The only way a single threshold can be of much use is if the pixels are much smaller than your model, and can only detect one photon, and a good, strong exposure would not have any areas where all pixels were triggered by a photon.  All ones for an area, no matter how small, is indistinguishable from clipping.

Quote
No 35mm lens can deliver 70% MTF at a 1.75 micron spacing. The difference between a sub-pixel element that's switched on with 500 photons as opposed to another pixel that's switched on with an 800 photon signal, represents the 'inaccuracy' of the system. I'm not trying to describe absolute perfection here.

I have no idea what you mean by that.  You need to be a little bit more specific in the details.

Quote
In my system, we're gathering 'grass roots' data from the lens. Maximising the potential of the lens with perhaps a bit of help from DXO type algorithms.
First, such pixels are not inefficient. All they require is any signal above the noise floor for 100% efficiency (within the resolution limits of the system).

That sounds like poetry.  It doesn't mean anything real and tangible to me.  Your idea reminds me of people who dream that they are making beautiful original music, and wake up thinking that they might be a latent musician.  When asked to reproduce the music, they can't remember it, just that it was beautiful.  The case may really be that they dreamt of the "feeling" of beautiful original music, but there was actually no real beautiful original music in the dream.  Same with your dreams of avoiding noise, and getting good IQ with your system.  Your system, if realized in the form of 16 foveon-like 1-bit subpixels comprising each 7u superpixel, will result in horrible posterization.  You need a lot more than 16 levels per color channel for pixels that big, unless your sensor is huge and/or there is a lot of noise.

Quote
The number of (say 7 micron) pixels depends on the size of the sensor and the processing power to handle that number. 2.8 thousand trillion colors is probably serious overkill   . You can change the pixel sizes and sub-pixel sizes to suit.
[a href=\"index.php?act=findpost&pid=97600\"][{POST_SNAPBACK}][/a]

There aren't that many colors in your system.  Your arithmetic is correct, but your application is wrong.  That is the number of possible *subimages* within each super-pixel, which you are clearly reducing to the sum of the on-pixels in the super-pixels.

For any given color channel, each of the following subimages results in the same super-pixel value for that color:

0000
0000
0000
0001

0000
0000
0000
0010

...

0100
0000
0000
0000

1000
0000
0000
0000

The only possibilities for each superpixel in a given color channel are 0 ones, 2 ones, 3 ones, etc, up to 15 ones.
Title: larger sensors
Post by: Ray on January 26, 2007, 08:54:26 pm
Quote
Your system IS a 12-bit system (4 bits per channel) for each 7u super-pixel.  The superpixel can only recognize 16 levels for each color channel.

Ah! I see now you have misunderstood my concept. You are treating the superpixel as though it's a separate physical entity receiving signals from the 16 sub-pixels. The 7 micron pixel is in fact just a collection of 16 sub-pixels each with its own values. These values (all 48 of them, since they are Foveon type sub-pixels) are read by (or passed on to) the in-camera computer and the information is 'summed, averaged, analyzed etc' within, say a 64 bit system in order to assign a value to each 'virtual' superpixel, which is the pixel you will see on your monitor.

In the particular example of 16 sub-pixels of 3 layers, there are a possible 8 to the power of 16 values (at least you agree with the maths). If we imagined the highly implausible situation where there are as many 7 micron 'virtual' pixels as there are combinations of these 48 sub-pixel elements, then each image theoretically could contain 2.8 thousand trillion different colors, if the computing power was there. The numbers are unrealistically large of course, but it's the principle I'm trying to get across.

My first example used a group of 9x2 micron sub-pixels giving a more realistic possible 134 million values which could theoretically be 'assigned' to each of the, say 134m 6 micron 'virtual' superpixels on a full frame MF sensor.

Quote
If the signal is a little below the threshold, you will have black for all sub-pixels, and black for the superpixel.

You would only have black for all sub-pixels if the entire image were black. But I guess you mean, if a small group of adjacent sub-pixels did not receive a sufficiently strong signal to turn on any of the colors, then the rendering would be black. Yes, of course it would. What's wrong with that? Black is a necessary component of photographs. If there's a black speck on a white background, then it has to be black. What else should it be?

Quote
If the signal is above the threshold, then you will have all white.  Not a very useful system.

Simply not true. If I point my all-digital camera at a red flower petal, then most of the red sub-pixel elements will be switched on but relatively few of the green and blue because the blue and green signals are relatively weak. Many such blue and green signals will be below the noise threshold.

There's probably not much point in my continuing to address each of your objections stated in your previous post because they are really based on a misunderstanding of the concept. Hope I've cleared this up   .
Title: larger sensors
Post by: Ray on January 26, 2007, 10:03:10 pm
Quote
No 35mm lens can deliver 70% MTF at a 1.75 micron spacing. The difference between a sub-pixel element that's switched on with 500 photons as opposed to another pixel that's switched on with an 800 photon signal, represents the 'inaccuracy' of the system. I'm not trying to describe absolute perfection here.

Quote
I have no idea what you mean by that. You need to be a little bit more specific in the details.

Okay! I'll be more specific. The fatal flaw in my concept might at first seem to be (and probably is) that there will be insufficient differentiation between a strong signal, a moderately strong signal and a weak signal. My point is, photon detectors only 1.75 microns in diameter would not receive strong signals from a 35mm lens. If one imagines a FF 35mm sensor filled with 1.75 micron photosites (just a single layer) there would be about 280 million of them. The resolution required from the lens would be around 285 lp/mm.

Using Rayleigh's derived laws for diffraction limitation with green light, a lens that is diffraction limited at f8, for example, will produce a resolution of 200 lp/mm at just 9% MTF.

That's a very weak signal. My concept is, the system would be 'tuned' so that such weak signals would generally be above the threshold for given lighting conditions; an ISO setting, if you like. Those that aren't above the threshold are rendered as black; those that are above the threshold are either red, blue or green, or all three for white. Nothing wrong with white is there?

At this resolution, the variation of signal strength would probably be less than the + or - 150 photons that I've suggested. Of course, if you could have a system that could respond to + or - a single photon, then that would be the ultimate.
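
For anyone who wants to check figures like that, the standard diffraction-limited MTF of an ideal circular aperture is easy to compute; this is only a sketch, assuming 550 nm light by default, and the answer at 200 lp/mm and f8 lands in the 5-10% region depending on the wavelength chosen.

Code:
import math

def diffraction_mtf(lp_per_mm, f_number, wavelength_mm=0.00055):
    # MTF of an ideal (diffraction-limited) lens at the given spatial frequency.
    cutoff = 1.0 / (wavelength_mm * f_number)     # ~227 lp/mm at f8 and 550 nm
    s = lp_per_mm / cutoff
    if s >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(s) - s * math.sqrt(1.0 - s * s))

print(f"{diffraction_mtf(200, 8):.1%}")                          # ~4.9% at 550 nm
print(f"{diffraction_mtf(200, 8, wavelength_mm=0.0005):.1%}")    # ~10.4% for 500 nm green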
Title: larger sensors
Post by: John Sheehy on January 26, 2007, 11:35:47 pm
Quote
Ah! I see now you have misunderstood my concept. You are treating the superpixel as though it's a separate physical entity receiving signals from the 16 sub-pixels.

I see the superpixel as the binning of the subpixels.  Otherwise, it would not make any sense to even refer to the superpixel, if the final data has unique information about each subpixel.

Quote
The 7 micron pixel is in fact just a collection of 16 sub-pixels each with its own values. These values (all 48 of them, since they are Foveon type sub-pixels) are read by (or passed on to) the in-camera computer and the information is 'summed, averaged, analyzed etc' within, say a 64 bit system in order to assign a value to each 'virtual' superpixel, which is the pixel you will see on your monitor.

That's exactly what I thought you meant.  I really don't see how you think that there is anything worthy of 64-bit data here.  If the superpixel contains a single value for each of red, green, and blue, then it only has 16 possible levels for each.  I don't see where you're getting these delusions of grand detail from.  There are 16 red possibilities times 16 green possibilities times 16 blue possibilities; only 4096 color possibilities for each large 7u superpixel; a recipe for posterization.

Quote
In the particular example of 16 sub-pixels of 3 layers, there are a possible 8 to the power of 16 values (at least you agree with the maths).

I only agree as to what the result of 8^16 is; its application here only applies to the number of possible unique states for the 4x4 arrays of subpixels, *BEFORE* they are turned into superpixels.  These unique states are *NOT* unique superpixel states.

Quote
If we imagined the highly implausible situation where there are as many 7 micron 'virtual' pixels as there are combinations of these 48 sub-pixel elements, then each image theoretically could contain 2.8 thousand trillion different colors, if the computing power was there. The numbers are unrealistically large of course, but it's the principle I'm trying to get across.

Any discussion of whether or not you'll ever use all the colors available is ridiculous, IMO, and has absolutely nothing to do with the reasons for using higher bit depths.  Higher bit depths are for increased accuracy, and nothing else.  In any event, your superpixels only come in 4096 varieties; not 2.81x10^14.

Quote
My first example used a group of 9x2 micron sub-pixels giving a more realistic possible 134 million values

Try 729 possible values.

Quote
which could theoretically be 'assigned' to each of the, say 134m 6 micron 'virtual' superpixels on a full frame MF sensor.
You would only have black for all sub-pixels if the entire image were black. But I guess you mean, if a small group of adjacent sub-pixels did not receive a sufficiently strong signal to turn on any of the colors, then the rendering would be black. Yes, of course it would. What's wrong with that? Black is a necessary component of photographs. If there's a black speck on a white background, then it has to be black. What else should it be?

The speck is most likely dark grey, not black.

Quote
Simply not true. If I point my all-digital camera at a red flower petal, then most of the red sub-pixel elements will be switched on

Then there won't be any highlight detail in the red channel.

Quote
but relatively few of the green and blue because the blue and green signals are relatively weak. Many such blue and green signals will be below the noise threshold.

That does not sound like anything that is going to give accurate color or even luminosity.

Quote
There's probably not much point in my continuing to address each of your objections stated in your previous post because they are really based on a misunderstanding of the concept. Hope I've cleared this up   .
[a href=\"index.php?act=findpost&pid=97714\"][{POST_SNAPBACK}][/a]

No, you haven't changed my perception of your idea at all.  You have repeated the same technological and mathematical fantasies as before, AFAICT.  You are trying to discard every basic principle of maintaining image quality to save your idea.
Title: larger sensors
Post by: Ray on January 27, 2007, 06:29:25 am
Quote
I see the superpixel as the binning of the subpixels.  Otherwise, it would not make any sense to even refer to the superpixel, if the final data has unique information about each subpixel.

John,
This is where you have misunderstood the concept. You are still thinking analog. The discrete values of the analog pixels that are binned are not combined in different variations. They are simply voltages that are added to provide one larger voltage or value.

Quote
If the superpixel contains a single value for each of red, green, and blue, then it only has 16 possible levels for each.

Wrong. The super pixel doesn't contain any values because it's not analog. It's a virtual pixel that is 'assigned' a value from the many combinations of the 48 (on/off) diodes that comprise it.

Your notion that there are only 16 possible values of red because there are only 16 red diodes within the superpixel is again analog thinking.

It'll probably get a bit tedious, but I'll try to go through some of the possible values for just one red sub-pixel. We'll name the pixels 1 to 16 and call the individual elements (48 of them) diodes.

These are some of the possible different values that a single red pixel can have, call it R1, a red pixel consisting of a red diode turned on and the blue and green diodes (belonging to that pixel) turned off.

(1) R1 + 15 pixels off (black). This would be the darkest red possible. You wouldn't really be able to distinguish it from black. The value is one red diode turned on and all other 47 diodes turned off.

(2) R1 + 14 pixels off (black) + 1 pixel on (white) just a shade lighter.

(3) R1 + 13 pixels off (black) + 2 pixels on (white) another shade lighter.

(16) skip a few, R1 + 15 pixels on (white).

There we already have 16 different values for just one red sub-pixel.

We can repeat the same process for two red pixels.

(17) (R1 + R2) + 14 pixels off (black)

(18) (R1 + R2) + 12 pixels off + 2 pixels on

and repeat the process for 3 red pixels, and 4 red pixels and so on.

(19) (R1 + R2 + R3) + 13 pixels off..... (plus 12 pixels off and one on, plus 11 pixels off and 2 on, etc etc etc.

The lightest shade of red possible in such a system would be one red sub-pixel on  (red diode on, blue and green off) plus all other pixels on (white). If we were looking at sub-pixels on the monitor at great magnification, this would look like a tiny speck of red on a larger white dot. The speck could be in the middle of the white dot or at the edge, it doesn't matter because we are actually only looking at a single much larger pixel (7 micron) which has been assigned a value of red which is almost white.

I hope this is now as clear to you as it is to me   .
Title: larger sensors
Post by: John Sheehy on January 27, 2007, 12:45:04 pm
Quote
John,
This is where you have misunderstood the concept. You are still thinking analog. The discrete values of the analog pixels that are binned are not combined in different variations. They are simply voltages that are added to provide one larger voltage or value.
[a href=\"index.php?act=findpost&pid=97753\"][{POST_SNAPBACK}][/a]

I started replying point by point to your post, but I have decided that it would be a total waste of time, a growing snowball of confusion.  You don't seem capable of engaging in a focused argument, always going off on tangents.  Every time you allegedly clarify my alleged misconception, you say exactly what I already thought you were saying.  The word "Value" applies to both analog and digital numbers.

Ray, answer this one simple question ... what is it that you expect to be outputted in the RAW data ... what is the *exact* format of your RAW data; what is it supposed to contain?

I can only see two things that you might be trying to accomplish:

1) Package each 4x4 subimage and call it a "super-pixel" (a totally semantic-oriented approach with no practical IQ value whatsoever), or

2) You are trying to use these subpixels to create a single super-pixel, which will have three DIGITAL VALUES, one each for red, green, and blue luminance.  In this case there are only 17^3 or 4913 possible DIGITAL RGB VALUES for the full super-pixel, as there are only 17 possible states of each color (not 16 as mistakenly implied earlier) within each superpixel.

It seems that it is #2 that you are trying to accomplish, at times, and at other times you seem to be implying #1, especially with your 2.81x10^14 figure, which only has (purely semantic) application in #1.

Furthermore, any single-threshold-based (1-bit) capture is only efficient when there is less than a 50% chance of capture of a *SINGLE* photon in the brightest highlights, which would either mean a much finer pixel pitch, or an extremely low quantum efficiency.  Having some value like 500 or 800 photons will *increase* either noise or posterization, depending on the amount of light.  You can't avoid noise and posterization with thresholds and truncation.  You only make matters worse.
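
A brute-force check of that counting argument (a sketch, not part of the original posts): enumerate every possible on/off pattern of the 16 one-bit subpixels in one colour channel, then see how many distinct binned values survive.

Code:
from itertools import product

patterns = list(product((0, 1), repeat=16))   # every possible 4x4 sub-image for one colour channel
print(len(patterns))                          # 65,536 distinct sub-images per channel

levels = {sum(p) for p in patterns}           # but only the count of "on" subpixels survives binning
print(sorted(levels))                         # 0..16 -> just 17 levels per channel
print(len(levels) ** 3)                       # 4913 possible RGB values per super-pixel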
Title: larger sensors
Post by: Ray on January 27, 2007, 07:45:58 pm
Quote
I started replying point by point to your post, but I have decided that it would be a total waste of time, a growing snowball of confusion.  You don't seem capable of engaging in a focused argument, always going off on tangents.

I think you are beginning to bluster, John. I threw in this idea as a possible solution to the current noisy and resolution limited analog imaging devices. I realise the numbers and computing power required are too great for such a system to be practical at present and that simply because we can now manufacture pixels with a pitch of 1.75 microns doesn't mean that we can manufacture foveon type pixels of the same pixel pitch, or that we can spread hundreds of millions of them over large sensors, without enormous expense at least.

The practical difficulties of constructing such a system are for the engineer. What I was trying to elicit from you are valid objections to the theory on the grounds it  might not be mathematically sound, for example, or that it might contravene the laws of physics.

Now, some of your objections are valid. I think it really would be impossible to build such a system with a 'one photon' accuracy. You'd probably need a liquid-nitrogen-cooled camera the size of a house to achieve that. So there is obviously some noise in my system as envisaged, if one defines noise as anything less than absolute accuracy, so clearly my claims of 'switch the pixel on for total accuracy' are meant to be taken as rhetoric. (Did you see this sign   after such statements?)

Clearly, there's going to be noise at the threshold. If we take my example of the minimum value of red, one red diode in a cluster of 48 diodes, 47 of which are turned off; the reason that the one red diode is turned on is likely due to noise of one sort or another. At stronger signal levels, there's a possibility of two adjacent red pixels both being turned on by slightly different signal strengths. R1 is turned on by say, a 500 photon signal and R2 is turned on by say, a 550 photon signal. There's no distinction between the two values and that represents another inaccuracy in the system. However, it's not possible for R1 to R16 to all be switched on by signals varying significantly, from say 500 photons to 5,000 photons, because our 35mm lens cannot transmit such intensities over such a small area of 1.75 microns in diameter. If it were able to, then such a lens would have an MTF response of 90% at 200 lp/mm, which is clearly impossible for a 35mm lens.

Quote
I can only see two things that you might be trying to accomplish:

1) Package each 4x4 subimage and call it a "super-pixel" (a totally semantic-oriented approach with no practical IQ value whatsoever), or

What! You merely object to the name? Then give it another name. I've tried to clarify things by calling it a 'virtual' pixel. The virtual pixel, as seen on the monitor, represents a summation of a complex analysis of the many, many different values one can get from all the possible variations of 16x3 RGB photodiodes. The virtual pixel does not exist on the sensor. All that exists on the sensor are millions of on/off switches that are activated by a certain level of photonic signal, tuned as precisely as possible to the resolution limits of the lens. As I expand on my theory, I now see that such a system would work best with lens and sensor designed as an integrated system and there would probably need to be some very sophisticated DXO Optics type of correction built in.

Quote
2) You are trying to use these subpixels to create a single super-pixel, which will have three DIGITAL VALUES, one each for red, green, and blue luminance.  In this case there are only 17^3 or 4913 possible DIGITAL RGB VALUES for the full super-pixel, as there are only 17 possible states of each color (not 16 as mistakenly implied earlier) within each superpixel.

So, when I tried in my previous post to enumerate the possible values of just 1 of the 16 red pixels, indicating clearly I thought, that there would be far more than 16 or 17 different possible values, I just wasted my time did I?

Now, I admit that maths is not my strong point. I can't say for sure that there will be 16^3 (4096) possible values of red, because there might be some duplication of values there, and I suspect there is some duplication of values for the total number of colours in such a 16 pixel array (2.8 thousand trillion   ). Perhaps a mathematician can help out here.

Quote
Ray, answer this one simple question ... what is it that you expect to be outputted in the RAW data ... what is the *exact* format of your RAW data; what is it supposed to contain?

I'll try and make it as graphic as possible how I imagine the values derived from an analysis of the 16 pixel array would be assigned to the virtual pixel. I'll assume that we have 4096 possible values of red, but I'm not certain about this.

(1) The palest shade of red will consist of one (fully saturated, as they all are) red pixel plus 15 white pixels (on the sensor). For the red element of our virtual pixel, we assign a number out of 4096.

(2) Slightly darker than the palest shade of red, we have one red pixel, 14 white pixels and one black pixel. We assign another number to the red element of the virtual pixel. What number should it be? I don't know. I thought perhaps you might?  

I assume that all 16 red pixels turned on (meaning that all green and blue pixels are switched off) results in the most saturated red the system can achieve and that this would be assigned the number 4096.

If you consider this a waste of time, that's fine by me. I can sense you are getting rather irritated.
Title: larger sensors
Post by: John Sheehy on January 28, 2007, 11:23:45 am
Quote
I think you are beginning to bluster, John. I threw in this idea as a possible solution to the current noisy and resolution limited analog imaging devices.

I still have no clear picture of what your idea is.  You seem to be relying on telepathy to communicate your idea.  All I can surmise is that you're doing something with small, 1-bit-per-color pixels, and combining 16 of them into one channel of a super-pixel.  The only reason I can think of to have a super-pixel is to make an output pixel that represents the sum of all light registered.  In that case, it doesn't matter which 1-bit subpixels registered a hit, the only thing that matters is how many of them registered a hit.  The list of all possible results is

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

for each color value of each super-pixel.  With 17 possible values for each color of the superpixels, there are 17^3 or 4913 possible RGB values for each superpixel (as opposed to the 2^36 or 68,719,476,736 RGB values possible for a real-world Foveon pixel in a Sigma SD9 or SD10, which has just a slightly larger pixel pitch than your example's superpixel).

Quote
I realise the numbers and computing power required are too great for such a system to be practical at present and that simply because we can now manufacture pixels with a pitch of 1.75 microns doesn't mean that we can manufacture foveon type pixels of the same pixel pitch, or that we can spread hundreds of millions of them over large sensors, without enormous expense at least.

The biggest clear problem with your system is that you are not collecting enough information at each subpixel, and even at the superpixel.  A 1.75u pixel pitch covers far too large an area to measure a single threshold (1 bit hit).  The shot noise is going to be the same as if it only took one photon to register a hit, if the hit rate is about 50%.  It is dependent on *ONE* single photon to cross the threshold, or not cross it, for each subpixel.  That is worse than if the there were no subpixels, and the superpixel (which is a real pixel now) only recorded 17 levels per color channel, because a fairly uniform level of light across all 16 subpixels could fail to register any hits at all, and just 1/4 stop more could have all of them registering; IOW, 0 and 16 could be just 1/4 stop apart.

Quote
The practical difficulties of constructing such a system are for the engineer. What I was trying to elicit from you are valid objections to the theory on the grounds that it might not be mathematically sound, for example, or that it might contravene the laws of physics.

Your idea, as I understand it, does not collect useful information.  A 1-bit hit at a high number of photons is only useful for high-contrast copying, like text, and contains no detail anywhere except in the tonal range around the threshold.

Quote
Now, some of your objections are valid. I think it really would be impossible to build such a system with 'one photon' accuracy. You'd probably need a liquid-nitrogen-cooled camera the size of a house to achieve that. So there is obviously some noise in my system as envisaged, if one defines noise as anything less than absolute accuracy; clearly my claims of 'switch the pixel on for total accuracy' are meant to be taken as rhetoric. (Did you see this sign   after such statements?)

If you don't think your idea will reduce noise, or improve on current technology in some way, then why are you even mentioning it?  What is the purpose of your idea?  You still haven't made it clear if you are recording the exact 4x4 arrays in your output and just giving them a purely semantic name of "superpixel", or if you are actually counting the number of hits within it.  I asked you to make a choice, made it clear that it was very important that you answer this in order for me to understand what your idea actually is, and you defended both of them!

Quote
Clearly, there's going to be noise at the threshold. If we take my example of the minimum value of red, one red diode in a cluster of 48 diodes, 47 of which are turned off, the reason that the one red diode is turned on is likely due to noise of one sort or another.  At stronger signal levels, there's a possibility of two adjacent red pixels both being turned on by slightly different signal strengths. R1 is turned on by, say, a 500-photon signal and R2 is turned on by, say, a 550-photon signal. There's no distinction between the two values, and that represents another inaccuracy in the system. However, it's not possible for R1 to R16 to all be switched on by signals varying significantly, from say 500 photons to 5,000 photons,

That's a shame, because, really, that is the only way you're going to get any tonality out of such a system.  Here's what you get with a system like yours (noise in middle half, thresholding in lower half):

(http://www.pbase.com/jps_photo/image/73656057/original.jpg)

Quote
because our 35mm lens cannot transmit such intensities over such a small area of 1.75 microns in diameter.  If it were able to, then such a lens would have an MTF response of 90% at 200 lp/mm, which is clearly impossible for a 35mm lens.

I am not even going to *begin* to figure out what you think that lens MTF has to do with this context (thresholding).

Quote
What! You merely object to the name? Then give it another name.

No.  I object to the fact that all that you'd be doing is giving it a name.  You're not changing anything over having 16x as many pixels, 1/16th the size, if the condition I mentioned were true of your intent (I asked you to choose between #1 and #2, and you defended them both).

Quote
  I've tried to clarify things by calling it a 'virtual' pixel. The virtual pixel, as seen on the monitor, represents a summation of a complex analysis of the many, many different values one can get from all the possible variations of 16x3 RGB photodiodes.

You can't have a meaningful complex analysis of such coarse data.  The data your system collects is garbage.  You're recording a single threshold hit for hundreds or thousands of photons.  That results in garbage collection, and nothing more.
Title: larger sensors
Post by: John Sheehy on January 28, 2007, 11:25:39 am
Quote
The virtual pixel does not exist on the sensor. All that exists on the sensor are millions of on/off switches that are activated by a certain level of photonic signal, tuned as precisely as possible to the resolution limits of the lens.

That doesn't make any sense, whatsoever.  The resolution of the lens is only worth considering for pixel-pitch, not for thresholds.  And again, all these thresholds can do is posterize.  

Quote
As I amplify on my theory, I now see that such a system would work best with lens and sensor designed as an integrated system and there would probably need to be some very sophisticated DXO Optics type of correction built in.
So, when I tried in my previous post to enumerate the possible values of just 1 of the 16 red pixels, indicating clearly, I thought, that there would be far more than 16 or 17 different possible values, I just wasted my time, did I?

I really can't answer that, Ray, because, even though I've asked you quite clearly, several times now, exactly what you are recording and interested in, you have failed to give a clear response every single time.  I am well aware of how many possible results there are within each superpixel; I didn't need your enumeration.  Simply stating that each superpixel's internal detail is expressed by a 48-bit number tells all that.  If you are interested in *HOW MUCH* light hits the superpixel, in each color, then there are only 17 levels per channel, or 4913 possible RGB values.  If you want to record, in the RAW file, the exact patterns of hits within the superpixels, then there are 2.81x10^14 possible superpixels, but that big number is not really impressive when you think about what it really means: it is simply the number of possible 16-pixel, 1-bit-per-channel RGB images.  It's still just a 1-bit-per-channel 4x4 pixel image.

Now, if your idea is to perform some processing on the superpixel's unique data, to write a "better" super-pixel out to RAW than you can get with just counting the number of hits within each superpixel, then you have yet to give even a clue of what you think the system can do; the idea, so far, would be akin to asking a Genie for a wish.  And frankly, with a single threshold at hundreds or thousands of photons, you're going to need a Genie, because the data is not worth analyzing for anything but high-contrast line-copying.
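A quick arithmetic sketch in Python of those two interpretations (the 4x4, 1-bit-per-channel layout is the one being discussed; the numbers are the same ones quoted above):

subpixels = 16

exact_patterns = 2 ** (subpixels * 3)     # every on/off state across R, G and B
hit_count_values = (subpixels + 1) ** 3   # only "how many hits" per channel

print(exact_patterns)      # 281474976710656  (about 2.81 x 10^14)
print(hit_count_values)    # 4913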

Quote
Now, I admit that maths is not my strong point. I can't say for sure that there will be 16^3 (4096) possible values of red, because there might be some duplication of values there, and I suspect there is some duplication of values for the total number of colours in such a 16 pixel array (2.8 thousand trillion   ). Perhaps a mathematician can help out here.

I won't enumerate all the possibilities for 16 photobits, so let's just say there are 4, for the sake of argument.  There are, then, 16 possible states, each with 0 to 4 1s or "hits":

0000  0
0001  1
0010  1
0011  2
0100  1
0101  2
0110  2
0111  3
1000  1
1001  2
1010  2
1011  3
1100  2
1101  3
1110  3
1111  4

So, enumerating the number of states that give each possible # of hits, we get:

hits  occurrences
0            1
1            4
2            6
3            4
4            1

As you can see, hit rates in the 50% range account for a far higher percentage of possible states than the ones at or near 0% and 100%; this is even more dramatic with higher numbers of possible states.
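The same enumeration for the full 16 subpixels is just binomial coefficients; a short Python sketch (standard library only) that tabulates how many on/off patterns produce each hit count:

from math import comb

total_patterns = 2 ** 16
for hits in range(17):
    states = comb(16, hits)
    print(f"{hits:2d} hits: {states:5d} states  ({100 * states / total_patterns:.2f}% of patterns)")

The middle value alone, comb(16, 8) = 12870 of the 65536 patterns, accounts for nearly a fifth of all states, while 0 and 16 hits each correspond to exactly one pattern.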

Quote
I'll try and make it as graphic as possible how I imagine the values derived from an analysis of the 16 pixel array would be assigned to the virtual pixel. I'll assume that we have 4096 possible values of red, but I'm not certain about this.

(1) The palest shade of red will consist of one (fully saturated, as they all are) red pixel plus 15 white pixels (on the sensor). For the red element of our virtual pixel, we assign a number out of 4096.

"We assign"?  What exactly is that supposed to mean?  You're hiding your whole "complex analysys" inside this magic box called "we assign".  Or is there really any "complex analysis" at all before you write the super-pixel to the RAW file?  This is why it is so difficult trying to have this conversation with you.

If you're just counting the hits within the superpixel, which some of your language suggests is the case (but your reference to "complex analysis" seems to contradict), then the state of the red pixels is most efficiently stated with a number between 0 and 16 (17 levels).  As an RGB value, the superpixel you describe would be 16,15,15.  You could scale these values so that 16 was 4095, or 255, but why?  All that is needed to be known by the RAW converter is that there are 17 levels.
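If counting hits is all that happens, the whole "analysis" reduces to something like this Python/NumPy sketch (the array contents just encode the "palest red" example above: one red-only subpixel plus 15 white ones):

import numpy as np

# 4x4 boolean hit maps, one per colour channel.
red   = np.ones((4, 4), dtype=bool)                        # all 16 red bits on
green = np.ones((4, 4), dtype=bool); green[0, 0] = False   # 15 hits
blue  = np.ones((4, 4), dtype=bool); blue[0, 0] = False    # 15 hits

superpixel = tuple(int(channel.sum()) for channel in (red, green, blue))
print(superpixel)    # (16, 15, 15) -- each channel has only 17 possible levels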

Quote
(2) Slightly darker than the palest shade of red, we have one red pixel, 14 white pixels and one black pixel. We assign another number to the red element of the virtual pixel. What number should it be? I don't know. I thought perhaps you might? 

I can't read your mind, Ray, and you have yet to even give a hint of what your "complex analysis" might entail.  To the best of my understanding, you have "assigned" what should be an RGB value of 16 in #1 above, the value of 4096.  4096 is inflated, and even more so when you realize that it takes one more bit (13 bits) to express the value 4096 than it takes to express 4095.
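A trivial Python check of the bit widths involved:

print((16).bit_length())     # 5  -- enough for the 17 hit-count levels
print((4095).bit_length())   # 12
print((4096).bit_length())   # 13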

Quote
I assume that all 16 red pixels turned on (meaning that all green and blue pixels are switched off)

That doesn't mean that at all.  The red states do not affect the blue and green states; they are independent.

Quote
results in the most saturated red the system can achieve and that this would be assigned the number 4096.[a href=\"index.php?act=findpost&pid=97858\"][{POST_SNAPBACK}][/a]

Are you talking about color saturation or luminance saturation?  Talking about color saturation wouldn't make any sense here, unless your output data is going to be HSV or something similar.

There are only 17 levels to distinguish, so a value greater than 16 is just fluff.
Title: larger sensors
Post by: Ray on January 28, 2007, 06:51:55 pm
Quote
Are you talking about color saturation or luminance saturation?  Talking about color saturation wouldn't make any sense here, unless your output data is going to be HSV or something similar.

There are only 17 levels to distinguish, so a value greater than 16 is just fluff.
[a href=\"index.php?act=findpost&pid=97928\"][{POST_SNAPBACK}][/a]

John,
That's it. In some cockeyed way I've combined luminance values with saturation values to produce an inflated range of saturation levels. Just fluff as you say.

This idea goes in the bin. Let it never be said I'm too proud to admit I am wrong   .

To get a sufficient number of 'real' levels for each color, the virtual pixel would need to be much bigger than 6 or 7 microns. We'd need to use a huge sensor, and lenses for such a large format would not have sufficient resolution to make such tiny sub-pixels meaningful in any way. The idea is crap.

Thanks for your patience and time in sorting this out.

Buy you a beer if we ever meet   .