Luminous Landscape Forum

Equipment & Techniques => Digital Cameras & Shooting Techniques => Topic started by: Ray on June 20, 2007, 11:01:16 pm

Title: Kodak's new sensor
Post by: Ray on June 20, 2007, 11:01:16 pm
I noticed a news item on Rob Galbraith's home page. Here's the press release: http://www.kodak.com/eknec/PageQuerier.jhtml?pq-path=2709&pq-locale=en_US&gpcid=0900688a80720f9d

Now, if Canon had access to this technology they could presumably give us a 1D MkIV with a genuine ISO 12800 and an optional ISO 25600. Wow!

There's a characteristic of Bayer-type sensors which many of us probably don't give a thought to: on the face of it they seem to be rather inefficient light gatherers, because the color filter array unavoidably has to filter out, or block, a good proportion of the light passing through the lens.

A red filter's purpose is to block out as much green and blue light as possible (there's obviously some overlap). However, if this new sensor from Kodak can improve sensitivity by one to two stops, the implication is that current Bayer-type sensors really are blocking out around 2/3rds of the light arriving at the sensor.
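
As a rough sanity check of that fraction, here is a sketch under the idealised assumption that each color filter passes exactly one third of the light and blocks the rest:

```python
from math import log2

# Idealised model: a color filter passes roughly one third of the
# visible spectrum and blocks the other two thirds.
filtered_transmission = 1 / 3
panchromatic_transmission = 1.0

# Sensitivity gain of an unfiltered pixel over a filtered one, in stops
print(f"{log2(panchromatic_transmission / filtered_transmission):.2f} stops")
# -> 1.58 stops, which sits inside Kodak's claimed 1-2 stop range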
Title: Kodak's new sensor
Post by: jani on June 21, 2007, 05:27:14 am
Quote
Now, if Canon had access to this technology they could presumably give us a 1D MkIV with a genuine ISO 12800 and an optional ISO 25600. Wow!
The downside is that you need more sensor sites to represent the same amount of detail in colour photographs. So to match the resolution of the MkII, you'd need to do some fancy stepping around all the problems we've been whining about, in terms of multi-megapixel digicams ...

TANSTAAFL.
Title: Kodak's new sensor
Post by: Ray on June 21, 2007, 07:59:18 pm
Quote
The downside is that you need more sensor sites to represent the same amount of detail in colour photographs. So to match the resolution of the MkII, you'd need to do some fancy stepping around all the problems we've been whining about, in terms of multi-megapixel digicams ...

TANSTAAFL.

Jani,
That's not my reading of the situation. Is it not the case that human vision is more receptive to luminance information than color information, regarding resolution?

Currently, in a Bayer type sensor, 50% of the pixels are green, 25% red and 25% blue. In the new Kodak sensor, one arrangement would (or could) be 50% panchromatic and 16.6% each for red, green and blue. Resolution would (or could) be the same, the pixel size would be the same, but for any given exposure the new filter array will allow more photons to pass through, hence greater sensitivity in low light situations and less noise.

Since noise affects dynamic range, and since dynamic range is always reduced somewhat at high ISOs in current systems, we could expect a 10MP 1D MkIV to perform at ISO 12800 at least as well as the current 1D MkIII performs at ISO 6400, on the basis of a one stop improvement in sensitivity, and as well at ISO 25600 as the 1D3 at ISO 6400, on the basis of a two stop improvement.

We might also expect a base ISO of 200-400. The only downside I see in such an arrangement is a possible reduction in color accuracy. Will the new algorithms be up to the job? I guess we'll have to wait and see.

Here's a link that provides more information: http://johncompton.1000nerds.kodak.com/default.asp?item=624876
Title: Kodak's new sensor
Post by: jani on June 22, 2007, 06:33:05 am
Quote
Jani,
That's not my reading of the situation. Is it not the case that human vision is more receptive to luminance information than color information, regarding resolution?
Ray,

Human vision is at its most discerning of detail in a very narrow area, which is covered mostly by cones, the colour-sensitive cells (http://hyperphysics.phy-astr.gsu.edu/hbase/vision/rodcone.html).

A simple experiment you can perform yourself:

Walk outside on a clear, starlit night, with no light sources other than the stars. Find a star that is nearly impossible to see head-on. Then gradually turn your eyes away from that star, and you will notice that it appears to become brighter.

This is because the light from the star then is detected by the more light-sensitive rods. Unfortunately, you'll also notice a lack of detail.
Title: Kodak's new sensor
Post by: Ray on June 22, 2007, 09:46:45 am
Quote
Ray,

Human vision is at its most discerning of detail in a very narrow area, which is covered mostly by cones, the colour-sensitive cells (http://hyperphysics.phy-astr.gsu.edu/hbase/vision/rodcone.html).

A simple experiment you can perform yourself:

Walk outside on a clear, starlit night, with no light sources other than the stars. Find a star that is nearly impossible to see head-on. Then gradually turn your eyes away from that star, and you will notice that it appears to become brighter.

This is because the light from the star then is detected by the more light-sensitive rods. Unfortunately, you'll also notice a lack of detail.

Exactly, Jani! That's the inspiration for Kodak's improvement of the Bayer-type sensor.

Here's a relevant quote from the article you provided a link to.

Quote
The retina contains two types of photoreceptors, rods and cones. The rods are more numerous, some 120 million, and are more sensitive than the cones. However, they are not sensitive to color. The 6 to 7 million cones provide the eye's color sensitivity and they are much more concentrated in the central yellow spot known as the macula.

The point here is the rods are more numerous (much more numerous) and are more sensitive, but not to color.

The rods are equivalent to the panchromatic pixels of the new Kodak sensor. Seems like a sound idea to me.

Here's a picture of the rods and cones.

[attachment=2665:attachment]
Title: Kodak's new sensor
Post by: Ray on June 23, 2007, 01:05:57 am
Having had further thoughts about this new color filter array, (and I suppose it's not so much a new sensor as a new way of filtering the light), I'm wondering if the claimed 1-2 stop increase in sensitivity is an exaggeration. All the patterns I've seen consist of just half of the number of pixels, in total, having the color filter removed, ie. becoming panchromatic.

If one calculates on the basis that each filter covering each pixel in the Bayer-type array blocks 2/3rds of the light, then removing the color filters from half of the photosites should cause only 1/3rd of the light to be blocked overall, and that represents a one stop improvement in sensitivity. So how can we get up to two stops of improvement? Is Kodak referring to the variability of scene content, or to sensor design, or both? For example, with the current Bayer-type sensor, a scene that is predominantly green will be less noisy at high ISO than a scene that is predominantly red and blue.
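
Working that one-stop figure through under the same idealised one-third-transmission assumption:

```python
from math import log2

filtered = 1 / 3   # idealised transmission of a color-filtered pixel
pan = 1.0          # transmission of an unfiltered (panchromatic) pixel

bayer_avg = filtered                        # every pixel filtered
half_pan_avg = 0.5 * pan + 0.5 * filtered   # 50% panchromatic CFA

print(f"average transmission: {half_pan_avg:.3f}")   # 0.667, i.e. 1/3 blocked
print(f"gain over Bayer: {log2(half_pan_avg / bayer_avg):.2f} stops")   # 1.00
```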

If we take an average of the 1-2 stop claim and call it a 1.5 stop improvement in noise, then, if we were to remove the 'color filter array' entirely, we might, by the same admittedly rough extrapolation, get a 3 stop improvement in noise. We would have an extremely low noise B&W digital camera.

It seems to be a fact of life with modern technological products that one doesn't hear much about the deficiencies of a particular design until someone discovers a better way of doing things; then, in order to sell the new product, the deficiencies of the old one come to the fore and are widely publicised.

The new Kodak CFA has brought to my attention the possibilities of truly B&W digital photography, which have of course always existed irrespective of this new Kodak sensor. I'm already salivating after applying some simple maths to the situation.

We all know that Foveon-type sensors produce higher resolution than Bayer-type sensors, pixel for pixel, defining a pixel as a group of one red, one green and one blue element. This is due to the loss of resolution in the demosaicing and interpolation that take place with the Bayer-type sensor, as well as the presence of an AA filter. Without quibbling, 3.4m Foveon pixels are roughly equivalent to 6m Bayer pixels. That represents a 1.76x increase in resolution, pixel for pixel. Jonathan Wienke claims a 1.5x increase. Let's compromise on 1.6x.

Now I'm going to propose something that I'm not 100% certain about, but which I think might quite probably be true. A cheap camera like the 10mp Canon 400D could deliver B&W images that could exceed the quality of B&W images from the 1Ds2, if its color filter array and AA filter were removed. In other words, without the demosaicing and interpolation, 10m panchromatic pixels would at least equal, in terms of resolution and luminance, 16m color pixels converted to B&W.

Furthermore, after taking into consideration the 3 stop advantage in noise and sensitivity that would result from removing the CFA and AA filter, a 10mp 400D might well wipe the floor with the 1Ds2 (for B&W images only, of course).

Consider the options that are available with a B&W-only 400D. Not only would we have the resolution of a 1Ds2 color image converted to B&W, but we'd have a usable ISO 25600, as noise-free as ISO 1600 on the current 400D.

Let's re-arrange the possibilities. Instead of going for maximum performance at unheard-of ISOs, we could use that advantage to increase pixel count whilst still maintaining the same per-pixel S/N as today's color-filtered pixels. In other words, if we accept the current level of noise at ISO 1600 with a CFA sensor as being reasonable and useful, we can make smaller photodiodes with the same signal-to-noise performance, if they are panchromatic.

How about a 40mp 400D, B&W only, with the same low noise at ISO 1600? Could this be the highest resolving digital camera ever (apart from scanning backs)? Higher resolving than the P45, and all for a cost of $1,000-2,000?
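
A quick order-of-magnitude check of that trade, taking the post's own idealised assumptions (shot-noise-limited pixels, read noise ignored, and a filter that passes exactly one third of the light):

```python
# Photons per pixel ~ pixel area x transmission, so an unfiltered
# pixel can shrink to a third of the area and still collect as many
# photons per pixel as a color-filtered pixel does today.
base_mp = 10             # e.g. a 10 MP color-filtered sensor
transmission_ratio = 3   # panchromatic vs. color-filtered pixel

print(base_mp * transmission_ratio)   # -> 30 MP at the same per-pixel S/N
```

That puts the break-even nearer 30 MP than 40 MP, before read noise (which is paid per pixel) is counted; the right ballpark, if a little shy of the headline figure.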
Title: Kodak's new sensor
Post by: Ray on June 23, 2007, 01:21:26 am
Quote
Walk outside on a clear, starlit night, with no light sources other than the stars. Find a star that is nearly impossible to see head-on. Then gradually turn your eyes away from that star, and you will notice that it appears to become brighter.

This is because the light from the star then is detected by the more light-sensitive rods. Unfortunately, you'll also notice a lack of detail.

Jani,
I can't say I've ever noticed any detail in a star. What sort of detail are you referring to?
Title: Kodak's new sensor
Post by: orin on June 23, 2007, 02:38:02 am
The analogy between this new filter array and the retina is strained at best.  Although it is true that rods are more sensitive and more numerous than cones, they are also, in silicon sensor terms, highly binned.  Thus their sensitivity comes at the expense of spatial resolution.  In addition, rods saturate at low light levels and contribute nothing to vision at moderate to high light levels.  The panchromatic pixels on the other hand operate at all light levels and have good spatial resolution.

The new filter array trades chromatic resolution for a modest increase of sensitivity.  This probably reduces shadow noise a bit, but the increased spacing of the filtered pixels might also increase the visibility of chromatic aliasing and force an increase in the strength of the antialiasing filter.  The new design probably has its uses, but it doesn't seem to warrant the amount of press it has been getting.
Title: Kodak's new sensor
Post by: Ray on June 23, 2007, 08:26:27 am
Quote
The analogy between this new filter array and the retina is strained at best.  Although it is true that rods are more sensitive and more numerous than cones, they are also, in silicon sensor terms, highly binned.  Thus their sensitivity comes at the expense of spatial resolution.


Orin,
That may well be true. The analogy might be strained but in this respect the panchromatic pixels have the advantage because they do have full resolution capability. There was also some talk of binning options with the new Kodak sensor.
 
Nevertheless, it is an unavoidable fact that having a primary color filter in front of each pixel blocks out about 2/3rds of the light heading towards the sensor. With existing algorithms one would expect poorer color resolution if some of these filters are removed, but Kodak's design includes new algorithms to interpolate the color information. It remains to be seen how effective these new algorithms will be. If they are not effective then that will be a major flaw in the design.

At this stage the only people who know whether or not the new algorithms will be effective are at Kodak. Perhaps one has to give the designers credit for not being complete fools.

Quote
This probably reduces shadow noise a bit, but the increased spacing of the filtered pixels might also increase the visibility of chromatic aliasing and force an increase in the strength of the antialiasing filter.

I would have thought just the opposite. If the color filtered pixels have wider spacing then chromatic aliasing should be less of a problem.
Title: Kodak's new sensor
Post by: jani on June 23, 2007, 04:05:09 pm
Quote
I would have thought just the opposite. If the color filtered pixels have wider spacing then chromatic aliasing should be less of a problem.
Perhaps not chromatic aliasing, but certainly chromatic aberrations at multi-pixel levels.

Remember, the colours need to be interpolated from the surrounding pixels.

Of course, you can use diffraction effects to make some fairly decent assumptions about this, but this is also -- as you were on to regarding Bayer vs. Foveon -- something that will result in a per-pixel loss of resolution.  I merely make the claim that there will be a loss of colour resolution, with a possibility of loss of absolute resolution in colour imagery.
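
To make the interpolation point concrete, here is a toy neighbour-averaging sketch (bilinear demosaicing in miniature; not Kodak's actual algorithm, which has not been published):

```python
import numpy as np

def interpolate_missing(channel: np.ndarray, known: np.ndarray) -> np.ndarray:
    """Fill the unknown sites of one colour channel with the mean of
    whichever 4-connected neighbours were actually measured.
    `channel` holds the sparse samples, `known` is a boolean mask."""
    out = channel.astype(float).copy()
    h, w = channel.shape
    for y in range(h):
        for x in range(w):
            if known[y, x]:
                continue
            vals = [channel[j, i]
                    for j, i in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= j < h and 0 <= i < w and known[j, i]]
            out[y, x] = float(np.mean(vals)) if vals else 0.0
    return out
```

The sparser a colour's sites, the more often a pixel has no measured neighbour of that colour at all and the guess has to lean on a wider area, which is the colour-resolution cost under discussion.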
Title: Kodak's new sensor
Post by: jani on June 23, 2007, 04:07:21 pm
Quote
Jani,
I can't say I've ever noticed any detail in a star. What sort of detail are you referring to?
You might, for instance, notice that some stars are not single stars, but rather two, one partially obscured by the other.
Title: Kodak's new sensor
Post by: Ray on June 23, 2007, 07:29:08 pm
Quote
You might, for instance, notice that some stars are not single stars, but rather two, one partially obscured by the other.

Wow! Your eyesight is good.  

I could be wrong, but I thought double stars could only be identified as such through a telescope. To the naked eye they just look like rather bright single stars.
Title: Kodak's new sensor
Post by: Ray on June 23, 2007, 07:59:03 pm
Quote
Perhaps not chromatic aliasing, but certainly chromatic aberrations at multi-pixel levels.

Remember, the colours need to be interpolated from the surrounding pixels.

Of course, you can use diffraction effects to make some fairly decent assumptions about this, but this is also -- as you were on to regarding Bayer vs. Foveon -- something that will result in a per-pixel loss of resolution.  I merely make the claim that there will be a loss of colour resolution, with a possibility of loss of absolute resolution in colour imagery.

Jani,
Those are fair points, but you seem to have ignored the inexorable progression towards greater pixel count.

Remember, Kodak 14n owners seemed to get by with no AA filter and that sensor had a pixel density less than that of the D60.

Some of the patterns of this new CFA containing 50% panchromatic pixels (no filter) allow for each panchromatic pixel to be adjacent to all 3 primaries at either the edges or the corners. With improved algorithms and increased pixel count, there should not be an insurmountable problem here in making a reasonably accurate guess.

As discussed in other threads, as one increases pixel count there's a potential trade-off in signal-to-noise at the pixel level. Read noise becomes a greater proportion of the total signal. Without some compensating technology, total image noise could be worse, especially at high ISOs and in shadows.

This new Kodak CFA could be the compensating technology. If the 1-2 stop improvement in noise is no exaggeration, it should be possible to increase pixel count dramatically, so that the sensor really does outresolve the lens, removing the need for an AA filter, while still providing sufficient color information and maintaining, perhaps even improving upon, current noise performance.
Title: Kodak's new sensor
Post by: jani on June 24, 2007, 06:48:01 am
Quote
Wow! Your eyesight is good.   

I could be wrong, but I thought double stars could only be identified as such through a telescope. To the naked eye they just look like rather bright single stars.
By using "double star" I was trying to disambiguate from "binary star", which appears to be your interpretation.

A "double star" is either a true or merely an apparent, "optical binary" star.

For the sake of the argument, the difference is irrelevant.

If you want to test your eyesight, see if you can separate Mizar and Alcor from each other.

Mizar is the second star in the handle of the Big Dipper (Ursa Major).
Title: Kodak's new sensor
Post by: jani on June 24, 2007, 07:00:05 am
Quote
Those are fair points but you seem to have ignored this inexorable progression towards greater pixel count.
I'm sorry, I haven't ignored that.

That "inexorable progression" also favours the Bayer patterns.

For Kodak's new pattern to retain the same level of colour detail as a Bayer pattern, you have to discount the possibility of any technological improvement to benefit Bayer patterns.

See also the article in Kodak's A Thousand Nerds (http://johncompton.1000nerds.kodak.com/default.asp?item=624876) blog.

Quote
Do you get a more detailed image by using panchromatic pixels?

JH: Not really. Image detail comes primarily from the luminance channel of the image. In a Bayer pattern sensor, half of the total pixels are arranged in a green checkerboard and are used for luminance. In these new designs, half of the total pixels are arranged in a panchromatic checkerboard and used for luminance. We still have the same amount of luminance data - we're just getting it with higher sensitivity than before.
Title: Kodak's new sensor
Post by: Ray on June 24, 2007, 11:57:30 am
Quote
That "inexorable progression" also favours the Bayer patterns.

Jani,
How so? I can't assert that I'm absolutely right, but my understanding is that the smaller the pixels, the more of a problem noise becomes, both read noise and photonic noise.

We know from Canon's record that they are not interested in going backwards on the noise front, and I think it's unlikely they'll be standing still as regards pixel count.

A 1-2 stop noise advantage resulting from a different CFA allows for smaller pixels without sacrificing the high standards of low noise at high ISO that we all now expect. It's a way forward. But of course, if those new algorithms Kodak is talking about are not up to the job and the accuracy of the color were to suffer compared with the old Bayer type, there would be a problem.

Quote
For Kodak's new pattern to retain the same level of colour detail as a Bayer pattern, you have to discount the possibility of any technological improvement to benefit Bayer patterns.

Quite so. But we can't comment on such improvements to the Bayer type sensor if they haven't been announced. I imagine that Kodak will be licensing such technology to those prepared to pay.

I see a fundamental inefficiency of the Bayer type sensor. Two thirds of the light reaching the sensor is wasted. Now that's just awful, isn't it? If that was oil being wasted, we'd do something about it, wouldn't we?  

Quote
See also the article in Kodak's A Thousand Nerds (http://johncompton.1000nerds.kodak.com/default.asp?item=624876) blog.

Well, I agree with that statement for images taken in good light. There's no reason for panchromatic pixels to produce a sharper color image, because the demosaicing and interpolation still apply, and it is this process, primarily, that prevents resolution reaching the Nyquist limit. It would be different for a B&W camera though, which I believe would produce sharper images with the same number of pixels, if they were all panchromatic.

However, there is a contradiction in the above quote you refer to. If noise is reduced (as a result of more photons reaching the panchromatic photoreceptors), then resolution in the shadows will surely increase. I've always found that to be the case, haven't you?
Title: Kodak's new sensor
Post by: mahleu on June 24, 2007, 12:03:02 pm
It would be interesting to see the development of a black-and-white-only version of this sensor. I, for one, would be interested in a black and white camera which functioned very well in low light.
Title: Kodak's new sensor
Post by: jani on June 24, 2007, 01:54:16 pm
Quote
Jani,
How so? I can't assert that I'm absolutely right, but my understanding is that the smaller the pixels, the more of a problem noise becomes, both read noise and photonic noise.
Yes, and this problem also applies to the CFAv2.

What you're apparently claiming is that whatever improvements in sensor technology happen will benefit CFAv2, while they won't benefit the Bayer-style arrays.

Why not?

Remember, this is just the colour filter above the sensor wells!

Quote
A 1-2 stop noise advantage resulting from a different CFA allows for smaller pixels without sacrificing the high standards of low noise at high ISO that we all now expect.
... except that for colour, there probably won't be an improvement, even when this is taken into account. At least according to Kodak themselves.

Quote
I see a fundamental inefficiency of the Bayer type sensor. Two thirds of the light reaching the sensor is wasted. Now that's just awful, isn't it? If that was oil being wasted, we'd do something about it, wouldn't we? 
And if you look at the CFAv2, you'll see that you need more pixels to reproduce the same colour information as in a Bayer array (read the blog ...). In other words, you could easily claim that a large proportion of the colour information is wasted with CFAv2.

Quote
Well, I agree with that statement for images taken in good light. There's no reason for panchromatic pixels to produce a sharper color image, because the demosaicing and interpolation still apply, and it is this process, primarily, that prevents resolution reaching the Nyquist limit.
But the demosaicing is happening with far less colour information to go by.

Quote
It would be different for a B&W camera though, which I believe would produce sharper images with the same number of pixels, if they were all panchromatic.
Yes, I agree that this would be a great boon for B&W photography.

Quote
However, there is a contradiction in the above quote you refer to. If noise is reduced (as a result of more photons reaching the panchromatic photoreceptors), then resolution in the shadows will surely increase. I've always found that to be the case, haven't you?
It will be an improvement in the cases where the noise is so bad that the increased sensitivity really helps to retrieve image detail that would otherwise be obscured.

It does not mean, however, that it is a general per-pixel improvement. I don't see Kodak claiming that (in fact, the citation I quoted shows that they very clearly don't claim that), only that its greatest advantage is significantly less chromatic noise in low-light situations, and greater general sensitivity.

Kodak have not addressed what this does to the risk of blown highlights, so we don't really know if this will give a higher dynamic range.
Title: Kodak's new sensor
Post by: Ray on June 24, 2007, 10:32:37 pm
Quote
Yes, and this problem also applies to the CFAv2.

What you're apparently claiming is that whatever improvements in sensor technology happen will benefit CFAv2, while they won't benefit the Bayer-style arrays.

Why not?

Remember, this is just the colour filter above the sensor wells!

I understand this new technology is principally a new CFA with a new way of demosaicing and interpolating the color information. It doesn't necessarily involve fundamental changes to the rest of the sensor technology (at least, no such fundamental changes are mentioned), but I expect that if Canon were able to buy a license to use this new CFA they would have to implement other changes in sensor design to make it work, and work better than the current Bayer type. I can't see them buying a Kodak sensor.

For example, if you were to simply replace the Bayer CFA with CFAv2 on an existing sensor, then base ISO for half the pixels would jump to ISO 800 (because the panchromatic pixels are receiving 3x as much light with the same exposure), but the other half of the pixels, with a color filter in front, would never reach full well capacity, and color accuracy would certainly suffer for this reason.

What changes to sensor design might be required to get around this problem might be fun to speculate upon.
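
To make the mismatch concrete, here is a deliberately naive sketch of my own (as the post invites; `equalise_sensitivity` and the 3x factor are assumptions from the idealised one-third-transmission model, not anything Kodak has described):

```python
import numpy as np

def equalise_sensitivity(raw: np.ndarray, pan_mask: np.ndarray) -> np.ndarray:
    """Hypothetical post-hoc fix: scale the color-filtered sites by 3
    so both pixel classes share one nominal sensitivity. In a real
    design this would presumably be analogue gain before the ADC;
    done digitally, it amplifies the color pixels' noise by the same
    factor, which is exactly the accuracy worry raised above."""
    out = raw.astype(float).copy()
    out[~pan_mask] *= 3.0
    return out
```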

Quote
And if you look at the CFAv2, you'll see that you need more pixels to reproduce the same colour information as in a Bayer array (read the blog ...). In other words, you could easily claim that a large proportion of the colour information is wasted with CFAv2.
But the demosaicing is happening with far less colour information to go by.

More pixels will be provided. As I mentioned before, this new CFA seems to lend itself very well to further increases in pixel count without compromising high ISO performance. Just a one stop improvement in sensitivity might allow for double the number of pixels on a given size sensor whilst maintaining the same signal-to-noise for each pixel and the same over-all dynamic range for the image as a whole.

The comparison of quantities of colored pixels in both designs is 16.6% red, blue and green for CFAv2 versus 25% red, blue and green for Bayer. (I've discounted the extra 25% of green because I believe this is for luminance purposes and I'm not sure how that contributes to over-all color accuracy).

If we now compare say a 20mp upgrade to the 400D, employing the new CFA, with the existing 10mp 400D, we could expect higher resolution from the 20MP camera without compromising dynamic range or high ISO performance. Agreed?

Even if lenses sometimes were not adequate to deliver that extra resolution, I expect we would still get some because I doubt that such a high density sensor would require an AA filter which has the effect of softening the image.

Let's compare color accuracy. The 10MP 400D has 2.5m red, blue and green pixels (plus 2.5m additional green for luminance purposes).

The new 20mp 400D has 3.3m red, blue and green pixels (plus 10m for luminance).

Comparing the final images, one has 2.5m items of red data and the other has 3.3m items of red data. Which is more accurate? Is color accuracy even going to be an issue with such high pixel density?

I know you could argue that a 20MP Bayer sensor would have 5m items of red data and that 5m is better than 3.3m, but that argument discounts the role of the new algorithms for the CFAv2.
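
The site counts behind those numbers, as straight arithmetic on the percentages quoted earlier:

```python
bayer_per_colour = 0.25   # red (or blue) fraction of a Bayer mosaic
cfav2_per_colour = 1 / 6  # ~16.6% in the arrangement discussed above

print(f"10 MP Bayer: {10e6 * bayer_per_colour / 1e6:.1f}M red sites")   # 2.5M
print(f"20 MP CFAv2: {20e6 * cfav2_per_colour / 1e6:.1f}M red sites")   # 3.3M
print(f"20 MP Bayer: {20e6 * bayer_per_colour / 1e6:.1f}M red sites")   # 5.0M
```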

The other issue which I think deserves more investigation is, "Just how much color information does the human eye require in order to get a realistic sense of accurate color in a scene?"

I'll mention just two observations which make me think it is far less than you suppose.

(1) During the transition from B&W TV to Color TV there was a problem regarding compatibility with old B&W sets. It was necessary to devise a color system so that the signal could be received by B&W sets which the majority of the population still owned. Without getting into technical details, the engineers devised a way of superimposing the color signal onto the existing luminance signal, which resulted in a modest increase in the bandwidth of the transmission from something like 4.5MHz to 5.5MHz.

The impression I get is you simply don't need as much color information as luminance information. The color can be filled in.

(2) Anyone who has scanned old slides must have been amazed at how successful computer algorithms can be in restoring faded color.

I've scanned slides that have been so faded that, when I first looked at them holding them to the light, I thought they were B&W. Now I'm not going to pretend that I got them looking as though they were taken yesterday, but the very small amount of color information still there was sufficient to enable a very surprising degree of restoration.

With slides that have undergone a more modest degree of color fading, there seems to be no problem in getting the colors looking perfect, as though the shot really was taken yesterday.

Kodak, can I please have a high-paying job selling your new sensor design?
Title: Kodak's new sensor
Post by: BernardLanguillier on June 24, 2007, 10:44:39 pm
This new technology is mostly aimed at producing compact digitals with better high-ISO image quality, isn't it?

If anything, the trend in high end digital is more towards capturing MORE color information and not LESS like in the new Kodak technology.

Consumers don't really care about color accuracy, and noise is perceived as a much bigger issue. Pros see things the opposite way, and noise is in fact not so much of an issue for many applications.

Regards,
Bernard
Title: Kodak's new sensor
Post by: Ray on June 25, 2007, 12:03:22 am
Quote
This new technology is mostly aimed at producing compact digitals with better high-ISO image quality, isn't it?

If anything, the trend in high end digital is more towards capturing MORE color information and not LESS like in the new Kodak technology.

Consumers don't really care about color accuracy, and noise is perceived as a much bigger issue. Pros see things the opposite way, and noise is in fact not so much of an issue for many applications.

Regards,
Bernard

Bernard,
That's a good point, and you might be right. When Fuji announced its dual-pixel system, with one small pixel of low sensitivity for the highlights and one larger, normal pixel for the rest of the image, both under the same micro-lens, the system was first introduced in P&S cameras and was much criticised for poor implementation of the design and only marginal improvement in DR.

The same thing might happen with the new Kodak sensor. It'll probably take time for the new system to iron out its problems. But what better way than to experiment on P&S cameras with less critical consumers?   (Sorry if I sound elitist.)

Having just read 'What's New' on LL, there's a link to Mike Johnston's site and an interview: http://theonlinephotographer.typepad.com/the_online_photographer/2007/06/a_brief_intervi.html

Here are some relevant quotes from the interview.

Quote
T.O.P.: The human eye puts an emphasis on luminance information for the sake of image detail. Is the new sensor likely to increase the level of real detail in digital images?

John Hamilton: Not really. The panchromatic pixels function just like the green pixels of the Bayer pattern except that they are photographically faster. However, under low light conditions, the new patterns will outperform Bayer because of improved signal-to-noise.

T.O.P.: I appreciate that part of what will make this new array practical is that new interpolation algorithms will have to be devised for it, and some of that work is still in the future. But knowing what you know, do you anticipate that the likely problems or advantages will make the new array best suited for certain applications as opposed to others?

John Hamilton: The new filter patterns were designed with low-light conditions in mind, but it's too soon to say where they work best. Under well-controlled lighting conditions, such as in a studio, I would expect the new filters and a Bayer filter to be roughly equivalent.

T.O.P.: One last question—so how come the new array isn't named after its inventors, like the Bayer Array was named after its inventor, Kodak's Dr. Bryce Bayer, in 1976?

John Hamilton: We are just the tip of the iceberg. Many sensor and algorithm people are involved in bringing this technology forward.

As I understand it, you can't patent just an idea; it has to be accompanied by a practical implementation. The idea of a CFA like the one in this Kodak design must have been kicking around for years. Dr Bryce Bayer must have been aware of this option but decided against it, probably because other areas of technology were not sufficiently developed to make it work.

The fundamental principle here is that 2/3rds of the light impinging upon current Bayer type sensors is essentially wasted. There has to be a better way.

Technology is all about increasing the efficiency with which we use our resources, whether it's photons or oil.
Title: Kodak's new sensor
Post by: jani on June 25, 2007, 06:12:29 pm
Quote
The comparison of quantities of colored pixels in both designs is 16.6% red, blue and green for CFAv2 versus 25% red, blue and green for Bayer. (I've discounted the extra 25% of green because I believe this is for luminance purposes and I'm not sure how that contributes to over-all color accuracy).
As I understood it, the larger proportion of green comes from how the average human eye functions; green is simply more important.

Technically speaking, the proportion of blue should probably be lower than 25%.

Quote
If we now compare say a 20mp upgrade to the 400D, employing the new CFA, with the existing 10mp 400D, we could expect higher resolution from the 20MP camera without compromising dynamic range or high ISO performance. Agreed?
I'm not sure I can agree to that, because we don't yet know how this works, cf. what you mention about the problem of base ISO. But for the sake of the thought experiment, sure.

Quote
Let's compare color accuracy. The 10MP 400D has 2.5m red, blue and green pixels (plus 2.5m additional green for luminance purposes).

The new 20mp 400D has 3.3m red, blue and green pixels (plus 10m for luminance).

Comparing the final images, one has 2.5m items of red data and the other has 3.3m items of red data. Which is more accurate? Is color accuracy even going to be an issue with such high pixel density?
Colour accuracy is always an issue.

Quote
I know you could argue that a 20MP Bayer sensor would have 5m items of red data and that 5m is better than 3.3m, but that argument discounts the role of the new algorithms for the CFAv2.
I would argue that Kodak say that CFAv2 vs. Bayer is roughly at parity with normal images, but that it's unclear what pixel peepers would see.

Again, it's a bit like Foveon vs. Bayer.

Quote
The other issue which I think deserves more investigation is, "Just how much color information does the human eye require in order to get a realistic sense of accurate color in a scene?"
That clearly depends on how closely you inspect the image in question, as well as on interference effects.

Quote
I'll mention just two observations which make me think it is far less than you suppose.
That assumes that you know how much I suppose is necessary, but it also requires that you answer the question: "necessary for what?"

I agree that it's possible to compress information very well with a minimal loss of visual impact.

The evidence for that lies not only in JPEG vs. TIFF-RGB, but also in GIF.

However, this does not necessarily stand up to scrutiny in all cases.

Again, assuming that Kodak are right in their claims, this new pattern will -- with assumed future improvements in demosaicing algorithms -- be at parity with the Bayer pattern under well-lit conditions.

So how, exactly, is this "far less than I suppose"?
Title: Kodak's new sensor
Post by: Ray on June 25, 2007, 07:33:56 pm
Quote
Again, assuming that Kodak are right in their claims, this new pattern will -- with assumed future improvements in demosaicing algorithms -- be at parity with the Bayer pattern under well-lit conditions.

So how, exactly, is this "far less than I suppose"?

I was addressing your view that color accuracy would suffer as a result of a 'far' smaller proportion of the total pixels on the sensor providing color information.

Clearly the success of this design depends to a large degree on this factor. For me, I don't require accurate color; only believable color, pleasing color and controllable color.

What I can achieve scanning faded color slides and negatives makes me an optimist on this issue   .
Title: Kodak's new sensor
Post by: Ray on June 25, 2007, 11:17:29 pm
However, I admit there's some funny math going on here which is causing me to make some basic mistakes in my own reasoning. If half the pixels receive 3x as much light because they are panchromatic, and the other half with a color filter have the same quantum efficiency and a base ISO of 100, then the panchromatic pixels effectively have a base ISO of 300, not 800 as I previously suggested.

If half the pixels get 3x as much light and the other half are unchanged, then the sensor as a whole averages 2x the light, which equates to just a one stop improvement.

Since Kodak are claiming a 1-2 stop improvement, there must be something else going on which they are not mentioning. Perhaps they are combining the new CFA with an improved sensor having greater quantum efficiency.

Or perhaps they have a design which selectively applies more analogue preamplification to the signal from the color pixels, prior to digitisation.

On the other hand, 3x the signal produces only about 1.7x (the square root of 3) the photonic shot noise, and the read noise doesn't grow at all, so the improvement in signal-to-noise is greater than the exposure arithmetic alone suggests; that might explain the 1-2 stop improvement.
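
Putting illustrative numbers on that last point, for a shot-noise-limited pixel (the signal and read-noise values here are assumed, not measured):

```python
from math import sqrt

read_noise = 4.0   # electrons per read; an illustrative assumed value
signal = 300.0     # electrons at a color-filtered pixel; also assumed

def snr(s):
    # photon shot noise grows as sqrt(signal); read noise does not
    return s / sqrt(s + read_noise ** 2)

print(f"filtered pixel S/N:     {snr(signal):.1f}")        # ~16.9
print(f"panchromatic pixel S/N: {snr(3 * signal):.1f}")    # ~29.7
# The ratio is ~sqrt(3): the S/N gain that tripling the exposure
# buys in the shot-noise limit, i.e. about 1.6 stops' worth of light.
```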
Title: Kodak's new sensor
Post by: jani on June 26, 2007, 06:54:13 pm
Quote
Since Kodak are claiming a 1-2 stop improvement, there must be something else going on which they are not mentioning. Perhaps they are combining the new CFA with an improved sensor having greater quantum efficiency.
This is also mentioned in the blog you first linked to:

Quote
JH: Clearly the color filter pattern and the software interpolation are different with this approach. What's more, the arrangement of the photoreceptors can be changed, but that's not a requirement.

So the answer is, apparently, that improved quantum efficiency in the sensor isn't a requirement.

Besides, improved quantum efficiency would also benefit Bayer or Foveon-alike arrays.
Title: Kodak's new sensor
Post by: Ray on June 26, 2007, 07:58:07 pm
Quote
This is also mentioned in the blog you first linked to:
So the answer is, apparently, that improved quantum efficiency in the sensor isn't a requirement.

Besides, improved quantum efficiency would also benefit Bayer or Foveon-alike arrays.

That's true. Improved quantum efficiency would always be welcome but that is not necessarily a feature of this new design.

Nevertheless, I can see no point in having a bunch of color pixels that never reach full well capacity at base ISO, so either those color pixels should be smaller than the panchromatic pixels or the voltage generated by the color pixels should be subject to more analogue gain prior to A/D conversion.

Improved quantum efficiency will of course benefit all designs but has nothing to do with the increased sensitivity of the panchromatic pixels which is due entirely to the removal of a color filter allowing more photons to impinge upon the photoreceptor with any given exposure.

Your concern about color accuracy taking a backward step in this new design is a valid concern and I guess that is the major technological hurdle to be overcome here.
Title: Kodak's new sensor
Post by: jani on June 26, 2007, 09:03:11 pm
Quote
Nevertheless, I can see no point in having a bunch of color pixels that never reach full well capacity at base ISO, so either those color pixels should be smaller than the panchromatic pixels or the voltage generated by the color pixels should be subject to more analogue gain prior to A/D conversion.
Or perhaps they should be larger, in order to capture more light.

This is pretty interesting! Perhaps innovations in sensor design will make it possible to have differently sized sensor wells for different colours, in order to match normalized human colour perception better.

I'd also like to see what can be done about the strict matrix form of sensors; how about a honeycomb design instead?
Title: Kodak's new sensor
Post by: Ray on June 26, 2007, 10:15:47 pm
Quote
Or perhaps they should be larger, in order to capture more light.


Come to think of it, whether they are larger or smaller will not fix the discrepancy in sensitivity between the panchromatic and color pixels, assuming the light gathering potential is proportional to the area. Big or small, the color pixel will receive fewer photons per unit area of sensor.

But what might make sense is to have the same pixel spacing, ie. same pixel pitch, same size microlenses for both types of pixels, but the color photodiode could be smaller because it will always receive a smaller amount of light compared with the panchromatic pixels.

This would allow more room under each color filter for additional processors on the CMOS sensor; better analog pre-amplifiers etc. to help compensate for the lower sensitivity of the color photoreceptors and thus ensure adequate color accuracy.

Voila! I've just solved the design problem   .
Title: Kodak's new sensor
Post by: BJL on June 27, 2007, 07:24:55 am
Quote
... Big or small, the color pixel will receive fewer photons per unit area of sensor.

But what might make sense is to have the same pixel spacing, ie. same pixel pitch, same size microlenses for both types of pixels, but the color photodiode could be smaller because it will always receive a smaller amount of light compared with the panchromatic pixels.
For optimal DR, wouldn't it be better to increase the highlight headroom of the "white" pixels, with something like Fuji's SR technology?

I am wondering about other patterns, since Kodak says others are possible. Like
WR
BG
for maximal colour information, or at least
WGWG
BWRW
WGWG
RWBW
which is half white, the other half Bayer pattern rotated 45 degrees (to go on the "black" squares of an imaginary chess board), so decreasing the density of each colour of photosite by 1.4 (sqrt(2)).
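
Counting the sites in that second layout bears the densities out: each colour keeps the Bayer 2:1:1 green weighting at half the density, so linear spacing per colour grows by sqrt(2), as stated:

```python
from collections import Counter

pattern = ["WGWG",
           "BWRW",
           "WGWG",
           "RWBW"]

counts = Counter("".join(pattern))
total = sum(counts.values())
for c in "WGRB":
    print(f"{c}: {counts[c]:2d}/{total} = {counts[c] / total:.3f}")
# W: 8/16 = 0.500, G: 4/16 = 0.250, R: 2/16 = 0.125, B: 2/16 = 0.125
```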
Title: Kodak's new sensor
Post by: John Sheehy on June 27, 2007, 08:25:52 am
Quote
For optimal DR, wouldn't it be better to increase the highlight headroom of the "white" pixels, with something like Fuji's SR technology?

The problem Kodak claims to be addressing is low sensitivity, not a lack of highlight headroom.

Quote
I am wondering about other patterns, since Kodak says others are possible. Like
WR
BG
for maximal colour information, or at least
WGWG
BWRW
WGWG
RWBW
which is half white, the other half Bayer pattern rotated 45 degrees (to go on the "black" squares of an imaginary chess board), so decreasing the density of each colour of photosite by 1.4 (sqrt(2)).

The pattern in the news blips looks like it is optimized for binning, as you can reduce 2x2 tiles into single pixels with one pan value and one colored value, with no translation error, as the new pixels are centered on the center of the original pan and colored pairs.
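
A toy version of that binning, assuming each 2x2 tile holds two pan sites and two like-coloured sites on opposite diagonals (my reading of the published diagrams, not a Kodak specification):

```python
import numpy as np

def bin_tile(tile: np.ndarray, pan_mask: np.ndarray):
    """Collapse one 2x2 tile into a (pan, colour) pair by averaging
    the two pan sites and the two like-coloured sites. Each average
    is centred on the tile centre, so the binned pan and colour
    samples stay co-sited (the 'no translation error' point)."""
    return tile[pan_mask].mean(), tile[~pan_mask].mean()

tile = np.array([[100.0, 30.0],
                 [32.0, 96.0]])           # pan values on one diagonal
pan_mask = np.array([[True, False],
                     [False, True]])      # colour values on the other
print(bin_tile(tile, pan_mask))           # -> (98.0, 31.0)
```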
Title: Kodak's new sensor
Post by: Ray on June 27, 2007, 10:37:19 am
Quote
For optimal DR, wouldn't it be better to increase the highlight headroom of the "white" pixels, with something like Fuji's SR technology?

BJL,
I tend to agree with John here. By employing the Fuji concept you'd be taking one step forward by increasing sensitivity of the panchromatic pixels and one step backwards by reducing the sensitivity of the 'highlight' pixel under the same microlens. You'd be back to square one. Any exposure sufficient to fill the well of the photodiode under a color filter, would overfill the well of the normal panchromatic pixel.

Some of these new patterns do look as though they are designed for binning.

[attachment=2703:attachment]

In Pattern A, for example, there are panchromatic pixels that are not adjacent to any red pixel at any side or corner. Without binning one would suppose that color interpolation would be rather inaccurate with such a pattern.
Title: Kodak's new sensor
Post by: jani on June 28, 2007, 06:58:19 am
Quote
Come to think of it, whether they are larger or smaller will not fix the discrepancy in sensitivity between the panchromatic and color pixels, assuming the light gathering potential is proportional to the area. Big or small, the color pixel will receive fewer photons per unit area of sensor.
While it was only a half-arsed idea, the point was that larger sensor sites for the filtered colours would mean more light per colour site, hence reducing the discrepancy.
Title: Kodak's new sensor
Post by: BJL on June 28, 2007, 08:56:17 am
To John: Kodak's current stated goal is sensitivity (probably for small digicam sensors), but surely it would not hurt in the future to improve both sensitivity and dynamic range with this smart matching of the sensor's luminosity/colour resolution characteristics to those of our eyes? Not to mention trying to better match the DR of our eyes.

To Ray:
Quote
By employing the Fuji concept you'd be taking one step forward by increasing sensitivity of the panchromatic pixels and one step backwards by reducing the sensitivity of the 'highlight' pixel under the same microlens ... Any exposure sufficient to fill the well of the photodiode under a color filter, would overfill the well of the normal panchromatic pixel.
I do not see why using the two-photodiode "SR" type photosites would cause problems with the highlight photodiode sensitivity, or with performance compared to conventional single-photodiode photosites. Those highlight "S" photodiodes provide information that is only needed at very well lit photosites (ones where the main "R" photodiode is blown out), so they can easily have enough sensitivity and a high S/N ratio while being very small, getting a small fraction of all light reaching the photosite, and interfering very little with the performance of the main photodiode.
Title: Kodak's new sensor
Post by: Ray on June 28, 2007, 10:53:21 am
Quote
While it was only a half-arsed idea, the point was that larger sensor sites for the filtered colours would mean more light per colour site, hence reducing the discrepancy.


More light for the colored pixels means disproportionately less light for the panchromatic pixels. Put simply (and not necessarily totally accurately, I know), on average only one out of every 3 photons that arrive at the color filter gets through; the other 2 are filtered out. The greater the area covered by panchromatic pixels, the better the low light performance of the sensor; the greater the area covered by color-filtered pixels, the worse it is.
Title: Kodak's new sensor
Post by: Ray on June 28, 2007, 11:35:17 am
Quote
I do not see why using the two-photodiode "SR" type photosites would cause problems with the highlight photodiode sensitivity, or with performance compared to conventional single-photodiode photosites. Those highlight "S" photodiodes provide information that is only needed at very well lit photosites (ones where the main "R" photodiode is blown out), so they can easily have enough sensitivity and a high S/N ratio while being very small, getting a small fraction of all light reaching the photosite, and interfering very little with the performance of the main photodiode.

BJL,
I don't see any reason why your suggestion of an SR type design for the panchromatic pixels could not be an alternative design providing improved dynamic range at base ISO. Perhaps one step forward and one step backward is an exaggeration. Shall we say, 2 steps forward and one back.

The design goal of the new Kodak sensor (as I understand it) is to provide 1 to 2 stops less noise at high ISO by allowing the sensor to collect more photons with the same exposure. The implication is that base ISO in such a sensor would be more like ISO 300 than ISO 100 because 50% of the area of the sensor is processing 3x the number of photons, with the same exposure (on average).

If you were to place a normal, sensitive panchromatic pixel next to a smaller insensitive pixel, both under the same microlens, the photons directed at the less sensitive pixel would be unproductive at high ISO. Read noise and shot noise would overwhelm the signal, but presumably the signal would still be read and the noise added to the combined signal from both pixels at that site.

Furthermore, because the standard panchromatic pixels, for any given exposure at high ISO, will receive fewer photons because some have been directed to the unproductive highlight pixel, the standard pixels will also have more noise, before the two signals are combined.

Just applying a bit of logical reasoning   .
Title: Kodak's new sensor
Post by: BJL on June 28, 2007, 06:03:55 pm
Ray, perhaps you misunderstand my proposal, which is using an SR style sensor with the new Kodak "partial colour filter array" over it. Mostly for the benefit of the "panchromatic" pixels, to effectively bring their base ISO back down to about ISO 100 despite their greater illumination. In fact, maybe at colour filtered pixels, the SR effect is unneeded and the output of the two photodiodes could be binned.

As to the following claim, I thought I had already refuted it in my previous post:
Quote
Furthermore, because the standard panchromatic pixels, for any given exposure at high ISO, will receive fewer photons because some have been directed to the unproductive highlight pixel, the standard pixels will also have more noise, before the two signals are combined.
But as I indicate above, both theory and experiment (the high ISO performance of Fuji's SR sensors) suggest that this effect is probably not significant, even if it occurs to a small degree. The point is that only a very small fraction of the light needs to be sent to the "highlight" pixels, since they only need to work well at high illumination levels, so the percentage of light lost to the main "shadow" pixels can be very small.
Title: Kodak's new sensor
Post by: EricV on June 28, 2007, 08:48:07 pm
Quote
Nevertheless, I can see no point in having a bunch of color pixels that never reach full well capacity at base ISO ....
The lower sensitivity of the color pixels could be used to extend the dynamic range of the sensor.  In a very high contrast scene, "expose to the right" could have a new meaning -- expose so that the panchromatic pixels are saturated, but the color pixels are not.  The color pixels would then reach nearly full well capacity.  The interpolation algorithm in the raw converter could work around the saturated panchromatic pixels in the highlights (with some loss of resolution).  Shadow noise would benefit from the increased exposure.
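
A minimal sketch of that recovery step, assuming the idealised 3x sensitivity ratio and a `color_est` luminance surface interpolated from the color sites (both the function and its inputs are my illustration, not EricV's specification):

```python
import numpy as np

SATURATION = 4095.0   # assumed 12-bit raw clip point

def recover_highlights(pan: np.ndarray, color_est: np.ndarray) -> np.ndarray:
    """Where a panchromatic sample clipped, fall back on a luminance
    estimate built from the color sites, which (on the idealised 3x
    model) clip about 1.6 stops later. Resolution drops in those
    regions, as noted above, but the highlight survives."""
    return np.where(pan < SATURATION, pan, color_est)
```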
Title: Kodak's new sensor
Post by: Ray on June 28, 2007, 10:24:09 pm
Quote
But as I indicate above, both theory and experiment (the high ISO performance of Fuji's SR sensors) suggest that this effect is probably not significant, even if it occurs to a small degree. The point is that only a very small fraction of the light needs to be sent to the "highlight" pixels, since they only need to work well at high illumination levels, so the percentage of light lost to the main "shadow" pixels can be very small.

BJL,
I have no precise information on how the Fujifilm SR system works in practice; I'm aware of the principle only from schematic diagrams on sites such as dpreview. I recall reading some of the comments on Fuji's first P&S camera that employed this new system: the improvements in DR, as I recall, were very marginal, and I lost interest. I haven't been following the progress of the implementation of this design in subsequent models.

I got the impression you were promoting this SR-type arrangement as a way of equalising the sensitivities of the panchromatic and color-filtered pixels, so that full 'exposure to the right' (in respect of the color-filtered pixels) could be achieved without blowing out the panchromatic pixels.

Have you not underestimated the increase in sensitivity of the panchromatic pixels? They are three times more sensitive. Your idea of directing only a very small fraction of the light to the highlight pixels will not pass muster; we're still stuck with an inequality of sensitivities.
Title: Kodak's new sensor
Post by: AJSJones on June 28, 2007, 10:35:51 pm
Quote
[attachment=2703:attachment]

In Pattern A, for example, there are panchromatic pixels that are not adjacent to any red pixel at any side or corner. Without binning one would suppose that color interpolation would be rather inaccurate with such a pattern.

Ray, I think there's a lot of opportunity for development in the new algorithms for decoding such an array.  Top leftish in that image is a pan pixel surrounded by 2G and 2B (of the type you are concerned about).  The pan pixel's G and B are therefore probably "well guessed" by interpolation, right?  And the pan (Y) is the sum of R, G and B (in some way known to the array engineers), so the R value can be deduced quite well.  Same for the pixels with no blue touching them: they're well informed by the 2R and 2G, so the B value is "well deduced".  A bit like video encoding at 4:2:2 or 4:1:1.  The maths is way beyond me, but the idea that there's not as much loss of colour resolution as meets the eye, so to speak, seems reasonable.

Andy
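
Andy's deduction in code form (a toy sketch; the strict Y = R + G + B assumption is mine, standing in for whatever calibrated spectral weighting "the array engineers" actually use):

```python
def deduce_red(pan: float, g_interp: float, b_interp: float) -> float:
    """Toy model of the subtraction above: treat the pan response as
    exactly R + G + B, so a pan site with well-interpolated green and
    blue neighbours yields a red estimate for free."""
    return pan - g_interp - b_interp

# e.g. a pan site reading 180 whose neighbours suggest G ~ 70, B ~ 40
print(deduce_red(180.0, 70.0, 40.0))   # -> 70.0, the inferred red
```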
Title: Kodak's new sensor
Post by: Ray on June 28, 2007, 11:47:25 pm
Andy,
I always had the impression that color information did not need to be as accurate as luminance information for acceptable or pleasing results. I mean, it's possible nowadays to recreate color movies from B&W movies, and restoring faded color when scanning a slide is not difficult.

The only problem I foresee is when people want pixel-level accuracy with regard to color. If the pan (Y) pixel we're referring to, not in contact with any adjacent red pixel, were in fact a small speck on a textured surface as small as, or even smaller than, the pixel pitch, such a tiny speck could be either red, green or blue, and there would be no way of determining which.
Title: Kodak's new sensor
Post by: AJSJones on June 29, 2007, 12:24:09 am
Quote
Andy,
<snip>
The only problem I foresee is when people want pixel-level accuracy with regard to color. If the pan (Y) pixel we're referring to, not in contact with any adjacent red pixel, were in fact a small speck on a textured surface as small as, or even smaller than, the pixel pitch, such a tiny speck could be either red, green or blue, and there would be no way of determining which.

That's as true a statement for that scenario as in the Bayer one, where the speck is assigned a color based solely on which pixel it falls on and not on which color it is in real life. You get an R, G or B value for it but no way of knowing what the other two colour values are, so it's no more or less info.  For that super-high-res scenario, the only way to get it right (color detail smaller than the pixel pitch) is the Foveon one; no CFA can succeed.  However, this failure is not likely to be noticeable in your scenario the vast majority of the time.  Does the AA filter restore some of the info in this case?
Title: Kodak's new sensor
Post by: Ray on June 29, 2007, 05:00:12 am
Quote
That's as true a statement for that scenario as in the Bayer one, where the speck is assigned a color based solely on which pixel it falls on and not on which color it is in real life. You get an R, G or B value for it but no way of knowing what the other two colour values are, so it's no more or less info.  For that super-high-res scenario, the only way to get it right (color detail smaller than the pixel pitch) is the Foveon one; no CFA can succeed.  However, this failure is not likely to be noticeable in your scenario the vast majority of the time.  Does the AA filter restore some of the info in this case?
[a href=\"index.php?act=findpost&pid=125543\"][{POST_SNAPBACK}][/a]

Andy,
You might be right. I'm afraid it's too complicated for me. I don't know how these algorithms do their job. If a red blob overlaps very slightly a green and a blue pixel, one might think it would be easy to work out that the blob is red. But it's not as though we have a visual representation of red light overlapping green and blue light. The computer knows that the green and blue pixels are producing a voltage due to a certain intensity of green and blue light, because there's a filter that blocks out the other two primaries. But there's no 'before and after', and there's no information about the true color of the panchromatic pixel other than the effect it has on neighbouring pixels. So I don't know how, in this situation in Pattern A, an algorithm could determine what frequency of light is illuminating the adjacent panchromatic pixel, except in a rather inaccurate way, by analysing lots of surrounding pixels and making a rough guess.
Title: Kodak's new sensor
Post by: jani on June 29, 2007, 02:51:19 pm
Quote
But there's no 'before and after', and there's no information about the true color of the panchromatic pixel other than the effect it has on neighbouring pixels. So I don't know how, in this situation in Pattern A, an algorithm could determine what frequency of light is illuminating the adjacent panchromatic pixel, except in a rather inaccurate way, by analysing lots of surrounding pixels and making a rough guess.
I guess an astrophysicist could tell you quite a bit about how interferometry (http://en.wikipedia.org/wiki/Interferometry#Astronomical_Interferometry) and aperture synthesis (http://en.wikipedia.org/wiki/Aperture_synthesis) are used to get details that seem "impossible" to get, by combining several radio or optical telescopes.

It seems obvious to me that interferometry and aperture synthesis can be, and probably are, used in sensor array interpolation today.
Title: Kodak's new sensor
Post by: Ray on June 29, 2007, 07:19:42 pm
Quote
I guess an astrophysicist could tell you quite a bit about how interferometry (http://en.wikipedia.org/wiki/Interferometry#Astronomical_Interferometry) and aperture synthesis (http://en.wikipedia.org/wiki/Aperture_synthesis) are used to get details that seem "impossible" to get, by combining several radio or optical telescopes.

It seems obvious to me that interferometry and aperture synthesis can be, and probably are, used in sensor array interpolation today.
[a href=\"index.php?act=findpost&pid=125630\"][{POST_SNAPBACK}][/a]


Jani,
An interferometer is an expensive piece of scientific equipment that analyses the interference between beams of light that have been split by mirrors. You are not suggesting that the humble digital camera has become an interferometer, are you?

As far as I know, all the demosaicing and interpolation that takes place when converting a RAW image is done 'after the fact'. There's no real-time analysis of incoming panchromatic signals. The real-time analysis only takes place by virtue of the color filter. All pixels in a Bayer-type array know what color they are. Panchromatic pixels haven't a clue, except from what their neighbours are doing. If one of their neighbours is too far away, as in 'Pattern A', that would seem likely to cause greater uncertainty and create more scope for color error.
Title: Kodak's new sensor
Post by: Steve Kerman on June 30, 2007, 11:44:11 pm
Quote
But there's no 'before and after', and there's no information about the true color of the panchromatic pixel other than the effect it has on neighbouring pixels. So I don't know how, in this situation in Pattern A, an algorithm could determine what frequency of light is illuminating the adjacent panchromatic pixel, except in a rather inaccurate way, by analysing lots of surrounding pixels and making a rough guess.
[a href=\"index.php?act=findpost&pid=125567\"][{POST_SNAPBACK}][/a]

Remember that Kodak is designing this array to optimize image quality in MPEG-encoded video. The concerns about color detail are very different in a highly compressed, moving video stream than they are for a still image that one intends to make 3x4-foot exhibition prints of. The eye doesn't have time to process the small color details in a moving image, so it is acceptable to de-emphasize color resolution in exchange for better low-light performance. Also, I understand that this pattern is designed to encode efficiently under discrete-cosine (DCT) video encoding, which is completely irrelevant to "RAW files only, please" landscape photographers.

It is not entirely evident that this filter design would result in better landscape photos, because that is not at all what it is designed to do.
Title: Kodak's new sensor
Post by: Ray on July 01, 2007, 07:14:52 am
Quote
Remember that Kodak is designing this array to optimize image quality in MPEG-encoded video. The concerns about color detail are very different in a highly compressed, moving video stream than they are for a still image that one intends to make 3x4-foot exhibition prints of. The eye doesn't have time to process the small color details in a moving image, so it is acceptable to de-emphasize color resolution in exchange for better low-light performance. Also, I understand that this pattern is designed to encode efficiently under discrete-cosine (DCT) video encoding, which is completely irrelevant to "RAW files only, please" landscape photographers.

It is not entirely evident that this filter design would result in better landscape photos, because that is not at all what it is designed to do.
[a href=\"index.php?act=findpost&pid=125859\"][{POST_SNAPBACK}][/a]

Fair enough! But this has not been made clear in the press reports and interviews that I've read. One could expect the first implementation to be in video cameras and P&S cameras, but the general impression I get is that this is an optional replacement for the Bayer system for all cameras.

There's an analogy here in respect of the Fujifilm SR system. It was initially implemented in P&S cameras but is now a feature of Fujifilm's flagship model, the S5 pro. I'll be looking forward to a dpreview test of this camera.
Title: Kodak's new sensor
Post by: jani on July 01, 2007, 09:22:40 am
Quote
An interferometer is an expensive piece of scientific equipment which analyses the interference caused by the interaction of different wavelengths of light which have been split by mirrors. You are not suggesting that the humble digital camera has become an interferometer, are you?
Ray, an interferometer doesn't have to be more than the same telescope at different times of day.

Aperture synthesis interferometry doesn't require mirrors, but it does require mathematical calculations.
Title: Kodak's new sensor
Post by: Ray on July 01, 2007, 10:54:08 am
Quote
Ray, an interferometer doesn't have to be more than the same telescope at different times of day.

Aperture synthesis interferometry doesn't require mirrors, but it does require mathematical calculations.

Doesn't have to be more than the same telescope at different times of day??

How does that help a single shot with a still camera?

I just typed "Aperture Synthesis Interferometry' into Google and got the following.

Please tell me in what ways this relates to a single shot with a digital camera   .

[a href=\"http://www.atnf.csiro.au/people/mdahlem/pop/tech/synth.html]http://www.atnf.csiro.au/people/mdahlem/pop/tech/synth.html[/url]
Title: Kodak's new sensor
Post by: jani on July 01, 2007, 05:57:07 pm
Quote
Doesn't have to be more than the same telescope at different times of day??

How does that help a single shot with a still camera?

I just typed "Aperture Synthesis Interferometry' into Google and got the following.
Why didn't you just follow my original Wikipedia links?

Quote
Please tell me in what ways this relates to a single shot with a digital camera   .
The Airy disks from diffraction effects may span more than single pixels. This means that you in effect have several pixels showing parts of the same detail. Interferometry may (theoretically) be used to recover some of the information.
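
For a rough sense of scale (back-of-envelope Python; the 2-micron pitch is an assumed small-sensor compact value):

Code:
# First zero of the Airy pattern: r = 1.22 * wavelength * f-number
wavelength = 550e-9   # green light, metres
f_number = 8.0
pitch = 2e-6          # assumed pixel pitch, metres

r = 1.22 * wavelength * f_number
print("Airy radius %.1f um, disc spans ~%.1f pixels" % (r * 1e6, 2 * r / pitch))
# -> Airy radius 5.4 um, disc spans ~5.4 pixels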
Title: Kodak's new sensor
Post by: Ray on July 01, 2007, 08:35:04 pm
Quote
Why didn't you just follow my original Wikipedia links?

[a href=\"index.php?act=findpost&pid=125972\"][{POST_SNAPBACK}][/a]

I did and came across the following:

Quote
Aperture synthesis or synthesis imaging is a type of interferometry that mixes signals from a collection of telescopes to produce images having the same angular resolution as an instrument the size of the entire collection...
 
In order to produce a high quality image, a large number of different separations between different telescopes are required

Now, admittedly the article goes on to imply that the number of different sets of data required for this process can be reduced through use of powerful and computationally expensive algorithms, and it would be a reasonable assumption to make that developments in this area could be relevant to the demosaicing required for this new Kodak CFA, but aren't you in danger of demolishing your own argument here, Jani? It was you who raised the objections initially about the feasibility of this new CFA because of sacrifices in color accuracy that would have to be made.

I merely make the observation that it looks as though Pattern A will be more problematic than the other 2 patterns, because not all 3 primaries have an edge or a corner in common with each panchromatic pixel.

I'm quite optimistic about this new sensor even though a supercomputer might be required to get the best results.
Title: Kodak's new sensor
Post by: jani on July 02, 2007, 07:31:35 am
Quote
I did and came across the following:

...

Quote
Now, admittedly the article goes on to imply that the number of different sets of data required for this process can be reduced through use of powerful and computationally expensive algorithms, and it would be a reasonable assumption to make that developments in this area could be relevant to the demosaicing required for this new Kodak CFA, but aren't you in danger of demolishing your own argument here, Jani? 
Not as I see it, no. Perhaps you think I'm arguing something I'm not?

Quote
It was you who raised the objections initially about the feasibility of this new CFA because of sacrifices in color accuracy that would have to be made.
I haven't claimed it isn't feasible. Could you perhaps go back and re-read what I've written?

Quote
I merely make the observation that it looks as though Pattern A will be more problematic than the other 2 patterns, because not all 3 primaries have an edge or a corner in common with each panchromatic pixel.
And I was merely pointing you in the direction of techniques that are used to calculate parts of the "missing" information, because you claimed you couldn't see how it could be done without "making a rough guess".
Title: Kodak's new sensor
Post by: Ray on July 02, 2007, 08:22:23 am
Quote
And I was merely pointing you in the direction of techniques that are used to calculate parts of the "missing" information, because you claimed you couldn't see how it could be done without "making a rough guess".
[a href=\"index.php?act=findpost&pid=126041\"][{POST_SNAPBACK}][/a]

Well, that depends on how rough is a 'rough guess'. I suppose the fact that it takes almost twice the number of interpolated Bayer pixels to equal the resolution of non-interpolated Foveon type pixels would be a fair indication of the roughness of the guess.

I do not expect the new Kodak sensor to provide greater resolution (except in the shadows) or greater color accuracy than existing Bayer sensors.
Title: Kodak's new sensor
Post by: Graeme Nattress on July 02, 2007, 09:41:39 am
Foveon pixels are rarely non-interpolated, though... Luma is derived from summing the 3 recorded channels. I hesitate to call them RGB as they're pretty far from looking like any RGB we'd recognise. Then chroma is made from a spatial-averaging noise reduction technique using surrounding pixels. You can see this clearly: as the camera goes up the ISO range, the chroma blurs out. If you take an image into Photoshop and go into LAB, you can see the resolution of the chroma is not that of the luma. (Foveon white papers mention the noise reduction and separation of luma and chroma; they don't give full details, but enough to go on, and through looking at images you can sort of see what's happening.)
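
A toy version of that split in Python/NumPy (not Foveon's actual pipeline, whose details aren't public): keep luma at full resolution, spatially average the chroma, recombine.

Code:
import numpy as np

def box_blur(img, k=5):
    # crude k x k spatial averaging, edge-padded
    pad = k // 2
    p = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rgb = np.random.rand(64, 64, 3)               # stand-in for a noisy capture
luma = rgb.sum(axis=2) / 3.0                  # luma from summing the channels
chroma = rgb - luma[..., None]                # colour differences
result = luma[..., None] + box_blur(chroma)   # chroma resolution < luma resolution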

That said, the apparent resolution of the Sigma cameras is more down to their lacking the necessary optical low-pass filtering than to the Foveon chip. You can distinctly see the nasty artifacts from luma aliasing in practically every in-focus Sigma photograph. Indeed, most of the resolution loss from using a Bayer-pattern sensor is from the optical low-pass filtering necessary to stop both luma and especially chroma aliasing. (Again, Foveon white papers do mention the necessity of an OLPF, but perhaps they never told Sigma; or perhaps Sigma decided that the lack of an OLPF was a selling point where they can pretend it's extra resolution, whereas it's really just artifacts.)

So, the factor of 2 is more complex than that... Indeed, say you use that factor and have 5mp (*3) of Foveon pixels looking like the resolution of 10mp of OLPF-filtered Bayer pixels. Now if you count them as photodetectors, you've got 15mp of Foveon photodetectors to equal the resolution of 10mp of OLPF-filtered Bayer pixels, which seems to me a rather inefficient way to do things.

Graeme
Title: Kodak's new sensor
Post by: Ray on July 02, 2007, 12:12:19 pm
Quote
Foveon pixels are rarely non-interpolated, though... Luma is derived from summing the 3 recorded channels. I hesitate to call them RGB as they're pretty far from looking like any RGB we'd recognise. Then chroma is made from a spatial-averaging noise reduction technique using surrounding pixels. You can see this clearly: as the camera goes up the ISO range, the chroma blurs out. If you take an image into Photoshop and go into LAB, you can see the resolution of the chroma is not that of the luma. (Foveon white papers mention the noise reduction and separation of luma and chroma; they don't give full details, but enough to go on, and through looking at images you can sort of see what's happening.)

Interesting! I always assumed that luma is derived from summing the 3 channels, and I can understand that the silicon material that is sensitive to one narrow band of frequencies is not completely transparent to the other frequencies, so there is some unavoidable loss of light energy in the system, as there is in the Bayer system with its CFA. The fact that Foveon based cameras do not seem to do particularly well at high ISO would seem to indicate that this loss of light energy is more serious than it is in the Bayer systems, but I really don't know how true this is.

But it seems reasonable that there'd be some reconstruction going on to compensate for such loss.

Quote
That said, the apparent resolution of the Sigma cameras is more down to their lacking the necessary optical low-pass filtering than to the Foveon chip. You can distinctly see the nasty artifacts from luma aliasing in practically every in-focus Sigma photograph. Indeed, most of the resolution loss from using a Bayer-pattern sensor is from the optical low-pass filtering necessary to stop both luma and especially chroma aliasing. (Again, Foveon white papers do mention the necessity of an OLPF, but perhaps they never told Sigma; or perhaps Sigma decided that the lack of an OLPF was a selling point where they can pretend it's extra resolution, whereas it's really just artifacts.)

This doesn't sound quite right to me. I've heard there can be false detail above the Nyquist limit. However, the Kodak 14n also lacked an AA filter and as a result provided marginally better resolution than the Canon 1Ds. According to your theory, and bearing in mind the 14n has 14mp compared to the 1Ds' 11mp, the 14n should have had about 1.5x the resolution of the 1Ds, which it doesn't.

Quote
So, the factor 2 is more complex than that..... Indeed, say you use that factor and have a 5mp (*3) of Foveon pixels looking like the resolution of 10mp OLPF filtered  Bayer pixels. Now if you count them as photodecectors, you've got 15mp of Foveon pixels to equal the resolution of 10mp OLPF filtered  Bayer pixels, which seems to me a rather inefficient way to do things.

On the other hand, stacking photodetectors on top of each other could be considered a more efficient physical arrangement which allows for larger photodetectors on the same size sensor. The inefficiency seems to me to be largely due to the lack of transparency of the silicon material in letting other frequencies pass through.
Title: Kodak's new sensor
Post by: Graeme Nattress on July 02, 2007, 12:25:36 pm
Quote
But it seems reasonable that there'd be some reconstruction going on to compensate for such loss.
This doesn't sound quite right to me. I've heard there can be false detail above the Nyquist limit. However, the Kodak 14n also lacked an AA filter and as a result provided marginally better resolution than the Canon 1Ds. According to your theory, and bearing in mind the 14n has 14mp compared to the 1Ds' 11mp, the 14n should have had about 1.5x the resolution of the 1Ds, which it doesn't.

Sure, there's false detail, but false detail is just that - false. To me it looks like fine-grained noise, or, if on edges, jaggies, neither of which is desirable. One of the problems with aliasing is that once it's in a system it's very hard to remove, as it's practically impossible to know whether detail you see is real or not, and hence it's hard to remove the detail that's not while keeping the detail that is. That's why most cameras use an OLPF as the lesser of two evils. I think this is even more important for moving images than stills, as on stills, theoretically, if you've got the patience, you can go in and pixel-paint out some of the problems. I certainly can't be bothered to do that and prefer to use a camera with an OLPF.

Of course, there's more to resolution than OLPF or not. Not least the fill factor of the pixels, lens and bayer reconstruction algorithm used.

Indeed, stacking photon detectors on top of each other is rather clever and allows for larger pixels. However, silicon is not the best colour filter, and it's the extreme matrixing needed to transform the layers into RGB that limits the noise performance of the Foveon.
Title: Kodak's new sensor
Post by: John Sheehy on July 02, 2007, 06:29:50 pm
Quote
Sure, there's false detail, but false detail is just that - false. To me it looks like fine-grained noise, or, if on edges, jaggies, neither of which is desirable.

What I find so obviously false about aliased imaging is that I can clearly see that the image is composed of pixels.  I don't want to see that an image was recorded in pixels when I look at it.

Quote
One of the problems with aliasing is that once it's in a system it's very hard to remove, as it's practically impossible to know whether detail you see is real or not, and hence it's hard to remove the detail that's not while keeping the detail that is. That's why most cameras use an OLPF as the lesser of two evils.

Finer pixel pitches are the solution - when you get fine enough, you don't need AA filters; everything the lens can do is there in all of its glory, without aliasing. Reading out a 250MP sensor is not an easy chore, however, with current technology and storage media. Hopefully, one day, we will have that convenience, and cameras can have settings to downsample the data as the user wishes, or even automatic MTF-detection systems that examine the image before writing to the card, see whether there is anything worth recording at maximum resolution, automatically pick the highest resolution for a downsample that the recording warrants, and write only that lower resolution to the storage medium, as a linear DNG or something similar. There could even be zones with different resolutions, to save storage space on bokeh.

Quote
I think this is even more important for moving images than stills, as on stills, theoretically, if you've got the patience, you can go in and pixel-paint out some of the problems. I certainly can't be bothered to do that and prefer to use a camera with an OLPF.

Of course, there's more to resolution than OLPF or not. Not least the fill factor of the pixels, lens and bayer reconstruction algorithm used.

Indeed, stacking photon detectors on top of each other is rather clever and allows for larger pixels. However, silicon is not the best colour filter, and it's the extreme matrixing needed to transform the layers into RGB that limits the noise performance of the Foveon.

It is, however, an excellent way to do greyscale. Foveon RAW data, treated as two channels, "red" and "cyan" ("blue" + "green"), is fairly low in noise, as is their sum. The biggest problems are between the "blue" and "green" layers, where color must be extrapolated, boosting any noise, and there are blotches of a complementary effect, where the green channel is dark where the blue is light, and vice versa, especially in shadow areas:

[a href=\"http://www.pbase.com/jps_photo/image/77239784]http://www.pbase.com/jps_photo/image/77239784[/url]
Title: Kodak's new sensor
Post by: Graeme Nattress on July 02, 2007, 07:54:02 pm
Thanks for the demonstration about blue / green noise on Foveon. I think that shows very clearly what is going on.

Agreed, the best solution to aliasing is massive oversampling / a pitch fine enough to allow the lens, not the sampled array, to be the limit to the resolution.

Lossy RAW compression techniques are evolving and they will be able to handle the data rate admirably. However, as pixels get smaller, we do get noise / dynamic range issues, and they're tricky to deal with also. But smaller and better pixels are a good way to go, as long as we don't lose sight of all the image parameters, not just resolution.

Graeme
Title: Kodak's new sensor
Post by: John Sheehy on July 02, 2007, 09:54:47 pm
Quote
Thanks for the demonstration about blue / green noise on Foveon. I think that shows very clearly what is going on.

This is before any attempt to create RGB color; those are literally the blue and green RAW channels, one inverted and with an offset.

Quote
Agreed, the best solution to aliasing is massive oversampling / a pitch fine enough to allow the lens, not the sampled array, to be the limit to the resolution.

Lossy RAW compression techniques are evolving and they will be able to handle the data rate admirably. However, as pixels get smaller, we do get noise / dynamic range issues, and they're tricky to deal with also.

I'm not so sure that it is much of a problem; current 2-micron pitches in compacts are yielding very high photon collection per unit of area, and lower read noises when adjusted for pixel size. The real world of light is not nice, binned pixels. It is infinite resolution and infinite shot noise, until our retinas or our cameras bin the photons.

Would a list of the analog striking points of individual photons within a focal-plane rectangle be any noisier than six counts of photons for six huge bins? You can create the latter from the former, but not the former from the latter.
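
A toy version of the thought experiment in Python/NumPy (made-up numbers): the exact strike positions can always be reduced to bin counts, never the other way round.

Code:
import numpy as np

rng = np.random.default_rng(42)
xy = rng.random((600, 2))        # "analog striking points" in a unit square

# Collapse the list into six huge bins (a 2 x 3 grid):
counts, _, _ = np.histogram2d(xy[:, 0], xy[:, 1],
                              bins=(2, 3), range=[[0, 1], [0, 1]])
print(counts)   # the latter from the former -- but not the former from these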

Quote
But smaller and better pixels are a good way to go, as long as we don't lose sight of all the image parameters, not just resolution.

The only dangers are losing some photons and gaining a small amount of shot noise, due to the need for space between photosites, and getting so much more read noise that the read-noise energy per unit of sensor area increases; but that trend does not exist with current cameras: the area-based noise is actually lower with tiny-pixel cameras than with DSLRs of the same and even superior technology.

The area-based shot noise of a Panasonic FZ50 is 1/3 stop lower than a Canon 1Dmk2 at all common ISOs, and the area-based read noise is a stop lower at ISO 100.  The pixel-based read noise of the Panasonic is lower than the Nikon D2X at all ISOs, and the area-based read noise is 2 stops lower for the Panasonic!

The scary "tiny pixel" stories are not coming true.  The scary "tiny sensor" stories are, and are being mistaken for "tiny pixel" issues.
Title: Kodak's new sensor
Post by: Ray on July 02, 2007, 09:59:10 pm
The problem with comparing a Foveon sensor of a given pixel count with an equivalent Bayer-type sensor is the value judgement one places on the strengths and weaknesses of the 2 systems. According to Popular Photography, a 10mp Bayer sensor will deliver higher resolution in B&W than the 4.7mp SD14, but the SD14 will deliver higher resolution in red & blue.

In other color combinations the 10mp Bayer system is as good as, and sometimes better than, the Foveon. Making a weighted assessment, one could justifiably say the SD14 is closer, on balance, to an 8mp Bayer-type camera.

Below is an interesting chart I found comparing the sensitivities of the 3 channels in the Foveon sensor with the cone sensitivity of the eye.

[attachment=2729:attachment]
Title: Kodak's new sensor
Post by: Ray on July 03, 2007, 08:52:34 pm
Quote
The area-based shot noise of a Panasonic FZ50 is 1/3 stop lower than a Canon 1Dmk2 at all common ISOs, and the area-based read noise is a stop lower at ISO 100.  The pixel-based read noise of the Panasonic is lower than the Nikon D2X at all ISOs, and the area-based read noise is 2 stops lower for the Panasonic!

The scary "tiny pixel" stories are not coming true.  The scary "tiny sensor" stories are, and are being mistaken for "tiny pixel" issues.
[a href=\"index.php?act=findpost&pid=126156\"][{POST_SNAPBACK}][/a]

John,
I'm surprised no-one has picked this up; the area based read noise of the P&S Panasonic FZ50 is 2 stops less than the Nikon D2X?

Such a statement implies that it would be possible using current technology to produce a 100mp APS-C sensor with noise performance at ISO 3200 at least as good as what's currently available in DSLRs, and resolution of course much higher with good lenses, and higher also due to the lack of a need for an AA filter.

I'm basing this calculation on the fact that a D2X sensor is approximately 10x the area of the FZ50's sensor.

The other implication is, if one were to compare shots of the same scene using the FZ50 and D2X and use the same physical aperture size in lenses of equivalent focal length, and use the same exposure so each pixel gets the same amount of light (which means something like f2.8 and ISO 100 for the FZ50, and f8 and ISO 800 for the D2X) then the FZ50 will produce cleaner images. Right?
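
As a quick sanity check on my 100mp figure (Python; the APS-C dimensions are assumed Nikon DX-style values, and 1.97 microns is the FZ50 pitch you quoted):

Code:
width_um, height_um = 23.7e3, 15.7e3   # assumed APS-C frame, microns
pitch_um = 1.97                        # FZ50 pixel pitch
pixels = (width_um / pitch_um) * (height_um / pitch_um)
print("%.0f MP" % (pixels / 1e6))      # -> ~96 MP, close enough to 100mp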
Title: Kodak's new sensor
Post by: John Sheehy on July 04, 2007, 01:16:36 am
Quote
The problem with comparing a Foveon sensor of a given pixel count with an equivalent Bayer-type sensor is the value judgement one places on the strengths and weaknesses of the 2 systems. According to Popular Photography, a 10mp Bayer sensor will deliver higher resolution in B&W than the 4.7mp SD14, but the SD14 will deliver higher resolution in red & blue.

Then, what happens with green vs blue, or even closer with emerald vs turquoise?  The latter would be a disaster, I would think; the noise would be as strong as the real contrast.

There is also an issue of interpretation with B&W. What is resolution? Is it something that you can achieve sometimes and sometimes not, or is it something that requires that you "resolve" consistently, without any reliance on luck of alignment? A Sigma camera can "resolve" as many lines as the sensor has rows or columns of pixels, but shift the registration just 0.5 pixels and it sees nothing at all. Vary from that resolution by a small amount, and you get patterns of alternating resolution and grey across the frame. Is this what we really want to call "resolution"? I think the Sigmas get to cheat on resolution tests, because the foundation for resolution tests is built upon film, where aliasing is impossible and resolution depends on hints of contrast rolling off at a taper. Aliased Sigma images seem to have greater resolution than they really do, because of a loophole in the way resolution is measured. Those sharp edges in aliased images are sharp, but they're in the wrong place; they're distortion, not resolution, IMO.
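
A toy demonstration of the registration point (idealised point sampling in Python/NumPy, no OLPF, which is the assumption at issue):

Code:
import numpy as np

x = np.arange(32)                          # pixel centres
for shift in (0.0, 0.5):                   # slide the target half a pixel
    samples = np.cos(np.pi * (x - shift))  # bar pattern, 2-pixel period
    print("shift %.1f px -> contrast %.2f" % (shift, samples.max() - samples.min()))
# shift 0.0 px -> contrast 2.00   (looks perfectly "resolved")
# shift 0.5 px -> contrast 0.00   (uniform grey: it sees nothing at all)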

Quote
Below is an interesting chart I found comparing the sensitivities of the 3 channels in the Foveon sensor with the cone sensitivity of the eye.
[attachment=2729:attachment]
[a href=\"index.php?act=findpost&pid=126157\"][{POST_SNAPBACK}][/a]

The Bayer response is generally like the one for the eye, except that the green and red are as well separated as the blue is from the green. Compare the heights of the crossover points relative to the peaks in both systems.

The fact that the Foveon separates green from red better than the eye does is not an overkill "plus" for the Foveon; the Foveon does not have the brain fabricating the illusion of noiseless color for it.

Also, the curves I see for Foveon in your image look like the Foveon with its suggested filter for separating blue and green better, which Sigma does not use, as it increases cost and lowers overall sensitivity.
Title: Kodak's new sensor
Post by: Jonathan Wienke on July 04, 2007, 05:14:48 am
Quote
Kodak have not addressed what this does to the risk of blown highlights, so we don't really know if this will give a higher dynamic range.

Of course it will. All of the pixels in the silicon have the same sensitivity (within manufacturing tolerances) before the color filter array is attached. Afterward, the filtered pixels are getting about 1/3 of the light the unfiltered pixels get. So the unfiltered pixels will clip about 1 1/2 stops before the filtered pixels will, and the unfiltered pixels will record usable detail 1 1/2 stops deeper into the shadow areas than the filtered pixels.

Think of it as two separate sensors on the same chip (one color, one monochrome) with a 1 1/2-stop ISO difference.
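
For what it's worth, a 1/3 light transmission works out at log2(3) stops exactly, which is presumably where the "1 1/2" figure comes from:

Code:
from math import log2
print("%.2f stops" % log2(3))   # -> 1.58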
Title: Kodak's new sensor
Post by: Ray on July 04, 2007, 09:06:08 am
Quote
Then, what happens with green vs blue, or even closer with emerald vs turquoise?  The latter would be a disaster, I would think; the noise would be as strong as the real contrast.

Don't know, John. The Pop Photography review tested concentric circles on a color chart pairing the following colors: Green/White (win for the Nikon D80); Magenta/Black (win for the D80); Yellow/Blue (win for the SD14); Green/White (win for the D80); Cyan/Red (win for the SD14).

The D80 had better performance in 3 out of the 5 tests, but also shows better performance in B&W, so that's 4 out of 6 in favour of the D80.

Quote
But black-and-white test targets for measuring resolution don't show as much detail as Foveon's 14.1MP count implies. Analysis of the IT-10 black-and-white resolution target we use in the Pop Photo Lab finds the Sigma SD14 on par with a good 8-9MP camera (in RAW mode), but not in the same class as 10MP models such as the Nikon D80.

Did you think in my previous post I was claiming the SD14 showed superior B&W resolution??

It seems that many Sigma fans are of the opinion that Pop Photography's review of the SD14 is too harsh and is biased in favour of the D80. I don't have any axe to grind here but would make the point that for most photographic purposes that are not strictly scientific, a bit of false detail is not necessarily objectionable. If it enhances the general appearance, that's fine by me. Don't most of us here spend a lot of time manipulating images in Photoshop to get a particular effect? Getting the most objectively accurate effect is not always the goal.

If my landscape shot includes a building with a balcony and balustrade, it wouldn't necessarily matter to me if my SD14 gave the impression there were 25 vertical struts in the balustrade, when in fact there are only 20.

Here's the link to the review.  http://www.popphoto.com/cameras/4276/foveo...o-the-test.html (http://www.popphoto.com/cameras/4276/foveon-x3-sensor-claims-put-to-the-test.html)
Title: Kodak's new sensor
Post by: jani on July 04, 2007, 09:53:50 am
Quote
Quote
Kodak have not addressed what this does to the risk of blown highlights, so we don't really know if this will give a higher dynamic range.
Of course it will. All of the pixels in the silicon have the same sensitivity (within manufacturing tolerances) before the color filter array is attached. Afterward, the filtered pixels are getting about 1/3 of the light the unfiltered pixels get. So the unfiltered pixels will clip about 1 1/2 stops before the filtered pixels will, and the unfiltered pixels will record usable detail 1 1/2 stops deeper into the shadow areas than the filtered pixels.
That much is clear, but that doesn't necessarily translate into a wider dynamic range.

If all sensor wells clip at the same light levels, the unfiltered ones will be limiting the usable highlight range by 1.5 stops.
Title: Kodak's new sensor
Post by: John Sheehy on July 04, 2007, 10:52:40 am
Quote
John,
I'm surprised no-one has picked this up; the area based read noise of the P&S Panasonic FZ50 is 2 stops less than the Nikon D2X?

That's because the common paradigm is bent on maligning small pixels because of the small sensors they are coincidentally found in.

Yes, the total black-frame read noise at the pixel level with the FZ50 is 2.7 ADU at ISO 100, and roughly scales with ISO (except that 200 is slightly better than scaled, and 1600 and especially 800 slightly worse than scaled); it is almost 4 ADU with the D2X at ISO 100, scaling to about 60 at ISO 1600. The pixel pitch ratio is 5.5:1.97, or 2.79:1. The binned read noise at the D2X pixel spacing is therefore 1/2.79 as much for the FZ50, or 2.7/2.79 = 0.97 ADU, compared to almost 4 for the D2X, which is considered a camera with good image quality as long as you don't need deep shadow areas at all ISOs, or any shadow areas at high ISOs.
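
The same arithmetic in Python, using the figures above:

Code:
fz50_read = 2.7            # FZ50 pixel-level read noise, ADU at ISO 100
d2x_read = 4.0             # "almost 4" for the D2X
pitch_ratio = 5.5 / 1.97   # = 2.79

# Binning pitch_ratio**2 small pixels adds read noise in quadrature (sqrt of
# the count) while signal adds linearly, so the equivalent big-pixel read
# noise falls by pitch_ratio:
print("%.2f ADU vs %.1f ADU" % (fz50_read / pitch_ratio, d2x_read))  # 0.97 vs 4.0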

I don't mention binning because I think it is good to trade off resolution for low pixel-level noise. I mention it because it shows that, at the very least, you can get this little noise. I think it is far better, however, to filter/resample noise only at spatial frequencies above those at which you like to see detail (or at which it is available).

Quote
Such a statement implies that it would be possible using current technology to produce a 100mp APS-C sensor with noise performance at ISO 3200 at least as good as what's currently available in DSLRs, and resolution of course much higher with good lenses, and higher also due to the lack of a need for an AA filter.

I'm basing this calculation on the fact that a D2X sensor is approximately 10x the area of the FZ50's sensor.

The other implication is, if one were to compare shots of the same scene using the FZ50 and D2X and use the same physical aperture size in lenses of equivalent focal length, and use the same exposure so each pixel gets the same amount of light (which means something like f2.8 and ISO 100 for the FZ50, and f8 and ISO 800 for the D2X) then the FZ50 will produce cleaner images. Right?
[a href=\"index.php?act=findpost&pid=126309\"][{POST_SNAPBACK}][/a]

Yes, unless your definition of "clean" is based on pixel-level cleanliness.  Lots of people seem to be fixated on that.
Title: Kodak's new sensor
Post by: John Sheehy on July 04, 2007, 11:02:47 am
Quote
Did you think in my previous post I was claiming the SD14 showed superior B&W resolution??

I thought you suggested it in previous posts. Perhaps it was an unqualified use of the word "resolution".

Quote
I don't have any axe to grind here but would make the point that for most photographic purposes that are not strictly scientific, a bit of false detail is not necessarily objectionable. If it enhances the general appearance, that's fine by me.

The only time a little aliasing doesn't bother me is when I am looking at a tiny image, in which case I don't expect any image quality anyway, and the contrasty edges make my brain feel like it has achieved focus.  I would not want to have to make prints from an aliased image, though, as I can clearly see the pixel grid implied in the mislocated edges, and the emphasis on horizontal and vertical edges, especially ones that are known to be at slight angles, snapped to 0 degrees or 90 degrees.

Quote
Don't most of us here spend a lot of time manipulating images in Photoshop to get a particular effect? Getting the most objectively accurate effect is not always the goal.

I don't try to get contrasty edges in the wrong places; no.

Quote
If my landscape shot includes a building with a balcony and balustrade, it wouldn't necessarily matter to me if my SD14 gave the impression there were 25 vertical struts in the balustrade, when in fact there are only 20.
[a href=\"index.php?act=findpost&pid=126432\"][{POST_SNAPBACK}][/a]

It's not just the count that will be wrong, though; the location of the edges of the struts will be distinct and wrong. That bothers my brain.
Title: Kodak's new sensor
Post by: John Sheehy on July 04, 2007, 11:24:38 am
Quote
Of course it will. All of the pixels in the silicon have the same sensitivity (within manufacturing tolerances) before the color filter array is attached. Afterward, the filtered pixels are getting about 1/3 of the light the unfiltered pixels get. So the unfiltered pixels will clip about 1 1/2 stops before the filtered pixels will, and the unfiltered pixels will record usable detail 1 1/2 stops deeper into the shadow areas than the filtered pixels.

Think of it as two separate sensors on the same chip (one color, one monochrome) with a 1 1/2-stop ISO difference.
[a href=\"index.php?act=findpost&pid=126369\"][{POST_SNAPBACK}][/a]

I don't think that the green-filtered pixels are losing 2/3 of the light; it is probably more like half. The green channel is the broadest, and the most sensitive. I would guess that an unfiltered pixel would be about a stop more sensitive than green, 1.5 stops more than blue, and 2 stops more than red, to white light. The total light collected in 16 pixels would be 8*1 + 4*0.5 + 2*0.35 + 2*0.25 = 11.2 units of light with half the pixels unfiltered, and 8*0.5 + 4*0.35 + 4*0.25 = 6.4 units of light with all the pixels filtered: slightly less than 1 stop less sensitive overall to white light.
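
The same sums in Python, using the guessed sensitivities above (pan = 1, G = 0.5, B = 0.35, R = 0.25) over a 16-pixel tile:

Code:
from math import log2

half_pan = 8 * 1.0 + 4 * 0.5 + 2 * 0.35 + 2 * 0.25   # = 11.2 units of light
all_cfa = 8 * 0.5 + 4 * 0.35 + 4 * 0.25              # = 6.4 units of light
print("%.2f stops" % log2(half_pan / all_cfa))       # ~0.81: a bit under 1 stop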

The question is how you use this data.  The arrangement is clearly not 100% beneficial; if all the unfiltered pixels clip, what you're left with is only half of the quantum efficiency you would have had with all filtered pixels, so exposure has to be lower, increasing the noise in the filtered pixels, requiring lots of filtering of chroma noise.
Title: Kodak's new sensor
Post by: Ray on July 04, 2007, 10:41:37 pm
This is a very confused situation, John. I don't pretend I'm not confused by the implications of this mixture of panchromatic and monochromatic pixels.

Until we get to the stage where all the processors of the signal are on the reverse side of the chip, allowing the photoreceptor size to be approximately equal to the pixel pitch, there's going to be some trade-off between the area allocated for the photoreceptor and the area allocated for all the on-board processors.

In this particular design from Kodak, I think it would be better to reduce the size of the pixels under a color filter to make more room for elaborate processing of the weaker color signal.

In other words, accept that the true ISO of the camera relates to the sensitivity of the panchromatic pixels (which means that the pixels under a color filter will never reach full well if they are the same size), reduce the size of those pixels under a color filter, but maintain the size of the microlens which directs the light to the color photoreceptors. This will ensure that at a full exposure to the right, both the panchromatic pixels and the monochrome pixels reach full-well capacity.

Clearly, the dynamic range and noise characteristics of the color pixels will suffer under this arrangement, but there's more space on the chip for on-board processing; better analogue preamplifiers etc.

ps. The design of the microlenses covering the color pixels would also have to be different in order to direct the light to the smaller area of the photoreceptor.
Title: Kodak's new sensor
Post by: st326 on July 08, 2007, 04:30:49 pm
Quote
Having had further thoughts about this new color filter array, (and I suppose it's not so much a new sensor as a new way of filtering the light), I'm wondering if the claimed 1-2 stop increase in sensitivity is an exaggeration. All the patterns I've seen consist of just half of the number of pixels, in total, having the color filter removed, ie. becoming panchromatic.

If one calculates on the basis that each filter covering each pixel in the Bayer type array filters out 2/3rds of the light, then removing color filters from half of the sensors should cause only 1/3rd of the light to be blocked, and that represents a one stop improvement in sensitivity. So how can we get up to two stops improvement? Is Kodak referring to the variability of scene content or sensor design, or both? For example, with the current Bayer type sensor, a scene that is predominantly green will be less noisy at high ISO than a scene that is predominantly red and blue.

If we take an average of the 1-2 stop claim and call it a 1.5 stop improvement in noise, then, if we were to remove the 'color filter array' entirely, we would get, on average, a 3 stop improvement in noise. We would have an extremely low noise B&W digital camera.

It seems to be a fact of life with modern technological products that one doesn't hear much about the deficiencies of a particular design until someone discovers a better way of doing things; then, in order to sell the new product, the deficiencies of the old product come to the fore and are widely publicised.

The new Kodak CFA has brought to my attention the possibilities of truly B&W digital photography, which have of course always existed irrespective of this new Kodak sensor. I'm already salivating after applying some simple maths to the situation.

We all know that Foveon type sensors produce higher resolution than Bayer type sensors, pixel for pixel, defining a pixel as a group of one red, blue and green element. This is due to the loss of resolution in the demosaicing and interpolation that takes place with the Bayer type sensor, as well as the presence of an AA filter. Without quibbling, 3.4m Foveon pixels are roughly equivalent to 6m Bayer pixels. This represents a 1.76x increase in resolution, pixel for pixel. Jonathan Wienke claims a 1.5x increase in resolution. Let's compromise on a 1.6x increase.

Now I'm going to propose something that I'm not 100% certain about, but which I think might quite probably be true. A cheap camera like the 10mp Canon 400D could deliver B&W images that could exceed the quality of B&W images from the 1Ds2, if its color filter array and AA filter were removed. In other words, without the demosaicing and interpolation, 10m panchromatic pixels would at least equal, in terms of resolution and luminance, 16m color pixels converted to B&W.

Furthermore, after taking into consideration the 3 stop advantage in noise and sensitivity that would result from removing the CFA and AA filter, a 10mp 400D might well wipe the floor with the 1Ds2 (for B&W images only, of course).

Consider the options that are available with a B&W-only 400D. Not only would we have the resolution of a 1Ds2 color image converted to B&W, but we'd have a usable ISO 25600, as noise-free as ISO 1600 on the current 400D.

Let's re-arrange the possibilities. Instead of going for maximum performance at unheard of ISOs, we could use that advantage to increase pixel count whilst still maintaining the same S/N on a pixel per pixel basis, compared with color filtered pixels. In other words, if we accept the current level of noise at ISO 1600 with a CFA sensor as being reasonable and useful, we can make smaller photodiodes with the same signal-to-noise performance, if they are panchromatic.

How about a 40mp 400D, B&W only, with the same low noise at ISO 1600? Could this be the highest resolving digital camera ever (apart from scanning backs)? Higher resolving than the P45, and all for a cost of $1,000-2,000?
[a href=\"index.php?act=findpost&pid=124464\"][{POST_SNAPBACK}][/a]

Sorry for the late reply, but yes, if you lose the antialiasing filter and the Bayer matrix, you *really do* get all of the benefits mentioned above -- a lot less noise, and hugely increased sharpness. I've been using a Megavision E4 (16 megapixel) monochrome back for about a year now, which actually uses a Kodak chip, so this is nothing new for them, though they are sold (including by Megavision) mostly for military/industrial/scientific purposes rather than to the likes of me. The base ISO is 100, which gives about 11 bits of usable signal, maybe a bit more. Sharpness is typically lens-limited, to the extent that Megavision don't recommend (cough) Hasselblad because the Zeiss lenses aren't quite up to it. I use a Bronica -- most of their newer lenses are fine, including the zooms, though I have swapped out some of my older MC series lenses for the newer PE versions.

In terms of 'how sharp', it's difficult to describe without showing a print -- I personally think the results are only a hair off those of monochrome conversions from my 150 megapixel Better Light scan back, whilst being far easier to achieve in practice.
Title: Kodak's new sensor
Post by: Ray on July 08, 2007, 09:12:35 pm
Quote
Sorry for the late reply, but yes, if you lose the antialiasing filter and the Bayer matrix, you *really do* get all of the benefits mentioned above -- a lot less noise, and hugely increased sharpness.

Interesting! Since I made that comment, in which I was prepared to trade off the lower noise capability of the panchromatic pixels for a higher pixel count, on the assumption that smaller, more densely packed pixels would generate both more read noise and more shot noise per unit area of sensor, John Sheehy has weighed in with the observation that in general this is not true and that the reverse applies. Smaller, more densely packed pixels, as in P&S cameras such as the FZ50, tend to generate less noise per unit area of sensor.

On this basis, we could have a 40mp B&W APS-C DSLR, which would not only produce much higher resolution than any current Bayer type DSLR, and even better resolution than the latest MFDBs, but would also have the advantage of significantly less noise to boot, on same size prints.

Quote
In terms of 'how sharp', it's difficult to describe without showing a print -- I personally think the results are only a hair off those of monochrome conversions from my 150 megapixel Better Light scan back, whilst being far easier to achieve in practice.

Perhaps you could show us 100% (or even 200%) small crops, at maximum jpeg quality, that are easily downloadable.
Title: Kodak's new sensor
Post by: John Sheehy on July 08, 2007, 09:44:24 pm
Quote
This is a very confused situation, John.

Well, there are so many variations possible in using these arrays. I get the impression, though, that Kodak is just thinking about a direct CFA replacement, with homogeneous microlenses and photosites. Any variation from that introduces a new set of compromises.

What I want to see is hi-tech microlenses that capture all of the light and somehow guide light into different pixels depending on wavelength; that would increase quantum efficiency without losing color resolution.
Title: Kodak's new sensor
Post by: John Sheehy on July 08, 2007, 09:53:40 pm
Quote
John Sheehy has weighed in with the observation that in general this is not true and that the reverse applies. Smaller, more densely packed pixels, as in P&S cameras such as the FZ50, tend to generate less noise per unit area of sensor.

Well, that applies mainly to read noise, as far as maximum potential is concerned.  Shot noise per unit of area should theoretically be slightly lower with big pixels, as the photosites and microlenses can cover a higher percentage of sensor area, but it seems that in current designs, big pixel/sensor cameras are a little sloppy with collecting maximum electrons, while small-pixel/sensor cameras try to capture every photon they can.
Title: Kodak's new sensor
Post by: st326 on July 10, 2007, 02:39:05 am
Quote
Interesting! Since I made that comment, in which I was prepared to trade off the lower noise capability of the panchromatic pixels for a higher pixel count, on the assumption that smaller, more densely packed pixels would generate both more read noise and more shot noise per unit area of sensor, John Sheehy has weighed in with the observation that in general this is not true and that the reverse applies. Smaller, more densely packed pixels, as in P&S cameras such as the FZ50, tend to generate less noise per unit area of sensor.

On this basis, we could have a 40mp B&W APS-C DSLR, which would not only produce much higher resolution than any current Bayer type DSLR, and even better resolution than the latest MFDBs, but would also have the advantage of significantly less noise to boot, on same size prints.
Perhaps you could show us 100% (or even 200%) small crops, at maximum jpeg quality, that are easily downloadable.
[a href=\"index.php?act=findpost&pid=127194\"][{POST_SNAPBACK}][/a]

I've just moved house so I'm not in a position to be able to do that immediately (everything is still in boxes), but hopefully I should be able to put something online within a week or so.

As best I can tell, the Megavision's chip is basically the same as the one in the 16 megapixel Hasselblad back (the one on the anniversary edition 500 series), just without the Bayer matrix and the AA filter. Megavision do a colour version of the E4 too, which uses exactly that chip. It's exactly the same 9 micron pitch, 37mm square, 4k x 4k format in both cases. I suspect the improved noise performance is for exactly the obvious reasons -- if I put a deep red filter in front of the lens, I get a loss in sensitivity, and a Bayer matrix is basically doing exactly that (for R, G and B) on chip. I quite like the Kodak idea, but I suspect that it will probably only work well for colour images or for panchromatic B&W conversions -- doing a 'red filter' conversion after the fact, for example, would probably look a bit weird, kind of like the usual Bayer 'not quite right' look at 1:1 but worse.

Actually, at 1:1, it's pretty tricky to get a capture that looks really sharp at the pixel level -- it can be done, but if you do *anything* wrong you can really see it. Forget hand-holding, or even using anything other than a really solid tripod. Not *at all* forgiving, more like using large format really, but worth it when it's right. One interesting difference is that the Megavision doesn't do any sharpening (obviously there's no interpolation either), so it's not really a like-for-like comparison with a typical colour RAW conversion. The images do sharpen very nicely, though, mostly I suspect because of the complete absence of interpolation artifacts.

Wearing a different hat (my PhD is in extreme-environment electronics, so I've had to study the way things like CCDs behave), you get some interesting effects when you scale a semiconductor process. Generally, bigger transistors mean less noise, for the same reason that bigger transistors are more radiation-hard (hence the use of lots of 386 processors on the space station). Nevertheless, as processes have scaled down, other optimisations have been made that have improved both power consumption and noise characteristics. It's not true to say that reducing the size of a transistor improves noise (it does the opposite, without question), but improving a semiconductor process in various ways can let you scale the features without affecting performance; if you scale them a bit less than you really need, this will give an improvement in performance, though it's not actually because of the scaling per se. Photoreceptors on both CCD and CMOS sensors are honking great huge things in comparison with contemporary digital electronics, so making finer-pitch sensors isn't really a problem at all; but the smaller the photosite, the fewer the photons (and consequently the fewer the electrons), so the job of managing noise becomes a lot harder. I suspect that we're seeing some kind of Moore's law working on sensors, but with a slower growth curve due to the difficulty of managing the noise problem and also the much slower rate of advancement in lens sharpness. Actually, my gut feeling is that sensors will hit a wall based on lens sharpness rather than anything else -- arguably this has happened already, but I think there is still some hope for more improvement.
Title: Kodak's new sensor
Post by: BJL on July 14, 2007, 09:00:18 am
Quote
What I want to see is hi-tech microlenses that capture all of the light and somehow guide light into different pixels depending on wavelength; that would increase quantum efficiency without losing color resolution.
Meaning a trichroic prism over every photosite, a tiny version of the three-way beam splitters used in 3-CCD cameras? That would be a nice trick if anyone can do it!

A reference: http://en.wikipedia.org/wiki/Dichroic_prism

P.S. Perhaps it would already help to use a tiny standard prism at each photosite to partially segregate the light by color, and then have three long thin photodiodes at each site, like
RGB
RGB
RGB
in each site, with the prism splitting horizontally. After all, the current color distinction of CFAs is far from a clean division into three wavelength groups, but instead has considerable overlap in the color sensitivity curves.
Title: Kodak's new sensor
Post by: Ray on July 15, 2007, 08:48:38 am
Okay! Here's my recipe for the camera of the future.

(1) Lenses made of artificial materials through nanotechnology: transparent materials with a negative refractive index that allow image sharpness at f64 normally associated with f8.

(2) Microlenses designed as dichroic crystals which will precisely split the light into 3 components and precisely direct each component to a photodiode.

Such a camera would allow one to take noise-free shots in smoky nightclubs, without flash and with tremendous DoF. (And in other situations, of course. I'm not obsessed with nightclubs and strip joints.)

You heard it first on LL. Get back to me in 10 years.
Title: Kodak's new sensor
Post by: BJL on August 09, 2007, 10:19:57 am
Nikon has patented our idea on dichroic color splitters at each photosite:
http://patft.uspto.gov/netacgi/nph-Parser?...3&RS=PN/7138663 (http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&r=1&f=G&l=50&s1=7138663.PN.&OS=PN/7138663&RS=PN/7138663)
Title: Kodak's new sensor
Post by: Ray on August 09, 2007, 08:39:01 pm
Quote
Nikon has patented our idea on dichroic color splitters at each photosite:
http://patft.uspto.gov/netacgi/nph-Parser?...3&RS=PN/7138663 (http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&r=1&f=G&l=50&s1=7138663.PN.&OS=PN/7138663&RS=PN/7138663)
[a href=\"index.php?act=findpost&pid=132315\"][{POST_SNAPBACK}][/a]
 

BJL,
Good detective work, eh?  

I get the impression from the following extract from that patent application that Nikon's idea would not lend itself well to inexpensive, high-pixel-count sensors. Is that your reading?

Quote
The first method is a three-color separation dichroic prism (three-CCD) method. In the three-CCD method, incident light having been color-separated by the color separation unit, which includes three prisms, an air layer, and a plurality of dichroic filters (for example, a red reflection filter and a blue reflection filter), is applied to the three CCDs. Japanese Unexamined Patent Application Publication No. Hei 5-168023, for example, discloses the three-CCD method (refer to FIG. 2 of the patent document).

In the second method, that is, a single-CCD method, a color separation filter of primary color or additive complementary color is disposed on each light receiving surface of a CCD. Japanese Unexamined Patent Application Publication No. Hei 6-141327, for example, discloses the single-CCD method (refer to page 2 of the patent document).

The three-CCD color separation unit is large and expensive due to the complex structure of an optical system. The single-CCD color separation unit, on the other hand, has the advantage that it is simple, small, and inexpensive. Thus, a video camera, a digital still camera and the like generally use the single-CCD color separation unit.

However, the single-CCD color separation unit has the following problems.

First, the color separation filters disposed in front of the CCD decrease photon utilization efficiency. Therefore, the sensitivity of the CCD decreases.

Second, the different color (red, green or blue) filter is disposed in front of each light receiving surface of the CCD. The color separation filters are arranged in, for example, well-known Bayer Array. Accordingly, the red, green, and blue light receiving surfaces are spatially separate from one another, so that data outputted from each light receiving element corresponding to each light receiving surface has to be interpolated to actualize color. Therefore, there is a problem that false color, which does not exist in reality, appears.
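
The interpolation the patent complains about is easy to sketch. Here's a toy bilinear fill of the green channel on an RGGB Bayer mosaic; real demosaicing algorithms are far more sophisticated, and every number here is synthetic.

Code:
import numpy as np

# Toy bilinear demosaic of the green channel on an RGGB Bayer mosaic.
rng = np.random.default_rng(0)
truth = rng.uniform(size=(6, 6))        # 'true' green channel of a scene

green = np.zeros((6, 6), dtype=bool)    # checkerboard of green photosites
green[0::2, 1::2] = True                # G sites in R rows
green[1::2, 0::2] = True                # G sites in B rows

est = np.where(green, truth, 0.0)       # sensor records green only at G sites
for y in range(1, 5):                   # fill interior non-green sites from
    for x in range(1, 5):               # their four green neighbours
        if not green[y, x]:
            est[y, x] = (est[y-1, x] + est[y+1, x] +
                         est[y, x-1] + est[y, x+1]) / 4.0

err = np.abs(est - truth)[1:5, 1:5][~green[1:5, 1:5]]
print(f"mean interpolation error at non-green sites: {err.mean():.3f}")

The guessed values never quite match the scene, and when the red and blue channels are guessed the same way the mismatches show up as the false color the patent mentions.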
Title: Kodak's new sensor
Post by: dilip on August 09, 2007, 10:30:02 pm
Quote
BJL,
Good detective work, eh?   

I get the impression from the following extract from that patent application that Nikon's idea would not lend itself well to inexpensive, high-pixel-count sensors. Is that your reading?

That paragraph is in the background section of the patent and isn't talking about the inventive sensor. Instead, it's talking about the type of setup you get in 3-CCD video cameras (a prism splits the light to three different sensors, each one set for R, G, or B). It's relatively rare that a patent will downplay the advance it has made.

From a technical point of view, this is an interesting design, but my guess is that since it was first filed in Japan five years ago, they're still dealing with fabrication issues. Those would likely relate either to not getting it tuned to give substantially better quality at the same resolution, or purely to cost.

Remember, without a fab line to call their own, or a history of semiconductor manufacturing, Nikon is at the mercy of third parties to implement the design.  The sad part is that a company like Sony would probably have no interest in manufacturing this sensor for Nikon unless they could use something similar in their own devices (assuming that it was a much better design). If Nikon isn't willing to share, and this scenario is true, it might take a while to see this thing come to market.

--dilip
Title: Kodak's new sensor
Post by: Ray on August 10, 2007, 02:57:23 am
Quote
That paragraph is in the background section of the patent and isn't talking about the inventive sensor. Instead, it's talking about the type of setup you get in 3-CCD video cameras (a prism splits the light to three different sensors, each one set for R, G, or B). It's relatively rare that a patent will downplay the advance it has made.

You're right. It is. However, the new patent seems just as complicated as the dichroic prism/3 CCD method.

With one method we have a 3-color separation employing a dichroic prism and 3 CCDs for each pixel. With the new Nikon patent we have 3 dichroic mirrors and  3 'light receiving surfaces' under the one microlens.

I don't see how this complicated, Nikon-patented system will lend itself to high-pixel-count, moderately inexpensive sensor production.

We need some expert here to advise us on the benefits and trade-offs of what now appear to be three different systems: the three-layered Foveon pixel, the dichroic prism with 3 CCDs, and the 3 dichroic mirrors with 3 light-receiving surfaces.
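
In the meantime, a back-of-the-envelope framing; every figure below is an idealized assumption for discussion, not a measurement from any real device.

Code:
# Idealized comparison of the approaches discussed in this thread.
systems = {
    # name:                       (~photon utilization, full color per pixel?)
    "Bayer CFA + interpolation":  (0.33, False),  # filters absorb roughly 2/3
    "Foveon stacked photodiodes": (0.90, True),   # separates color by silicon depth
    "Dichroic prism + 3 CCDs":    (0.90, True),   # near-lossless split, bulky optics
    "Per-pixel dichroic mirrors": (0.90, True),   # the Nikon patent; unproven at 5 um
}

for name, (use, full_color) in systems.items():
    print(f"{name:28s} photon use ~{use:.2f}  full color per pixel: {full_color}")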
Title: Kodak's new sensor
Post by: dilip on August 10, 2007, 12:52:29 pm
Quote
You're right. It is. However, the new patent seems just as complicated as the dichroic prism/3 CCD method.

With one method we have a 3-color separation employing a dichroic prism and 3 CCDs for each pixel. With the new Nikon patent we have 3 dichroic mirrors and  3 'light receiving surfaces' under the one microlens.

I don't see how this complicated, Nikon-patented system will lend itself to high-pixel-count, moderately inexpensive sensor production.

We need some expert here to advise us on the benefits and trade-offs of what now appear to be three different systems: the three-layered Foveon pixel, the dichroic prism with 3 CCDs, and the 3 dichroic mirrors with 3 light-receiving surfaces.


I read the patent last night. I don't think they ever promised that it would result in high pixel counts and moderately inexpensive production. All they have stated is that it is novel. (I don't have a copy in front of me, so I might be wrong.)

Remember, they filed in 2003, so the patent is good to 2023.  This sensor may not see the light of day, but one of the future designs that someone comes up with might build off this.

--dilip
Title: Kodak's new sensor
Post by: Ray on August 10, 2007, 07:52:52 pm
Quote
I read the patent last night. I don't think they ever promised that it would result in high pixel counts and moderately inexpensive production. All they have stated is that it is novel. (I don't have a copy in front of me, so I might be wrong.)

It makes very tedious reading and there seems to be a lot of repetition, no doubt for legal reasons, but there's an implication in the extract I quoted a couple of posts ago that the new invention by Nikon does not suffer from the disadvantages of the existing 3-CCD/prism method, which, as they mention, is large, complex and expensive.

Both systems need 3 'light receiving surfaces' for each pixel, whether you call them photodiodes or CCDs, but one method uses dichroic prisms instead of dichroic mirrors. Both systems seem very complex to me, and it's difficult to imagine how so much engineering could be fitted into a 5 micron wide space; that's only about nine wavelengths of green light across, so there's precious little room for mirror geometry.

But as you imply, before the patent expires in 2023, nanotechnology might have progressed to the point where such an idea can be implemented economically on relatively small, high pixel count sensors.
Title: Kodak's new sensor
Post by: John Sheehy on August 10, 2007, 08:35:17 pm
Quote
But as you imply, before the patent expires in 2023, nanotechnology might have progressed to the point where such an idea can be implemented economically on relatively small, high pixel count sensors.

And as quantum efficiency increases with methods like this, will people whine that cameras no longer have ISO 100? That should be interesting. Few people seem to understand that with capture depth (full-well capacity) remaining equal, lower base ISOs are not indicators of quality but rather indicators of inefficiency, and therefore of higher noise at common ISOs.
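
A minimal sketch of that trade-off; the reference QE, base ISO, read noise and exposure below are all made-up illustrative numbers.

Code:
import math

# With full-well capacity ('capture depth') fixed, the well fills with
# proportionally less light as quantum efficiency rises, so base ISO
# scales up with QE, yet any given exposure gets less noisy.
ref_qe, ref_iso = 0.40, 100    # assume a QE-0.40 sensor has base ISO 100
read_noise_e = 5.0             # electrons RMS, assumed

for qe in (0.33, 0.66):        # e.g. a CFA sensor vs. a one-stop-more-efficient one
    base_iso = ref_iso * qe / ref_qe
    electrons = qe * 10_000    # electrons from a fixed 10,000-photon exposure
    snr = electrons / math.sqrt(electrons + read_noise_e ** 2)
    print(f"QE {qe:.2f}: base ISO ~{base_iso:.0f}, SNR of that exposure ~{snr:.0f}")

Same exposure, more electrons, better SNR; the 'lost' ISO 100 setting is nothing to mourn.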