
Author Topic: "Pixel Count And Future Imaging Chips" article  (Read 12549 times)

Ray

  • Guest
"Pixel Count And Future Imaging Chips" aticle
« on: December 24, 2002, 09:41:14 am »

Michael,
I must admit many of your assertions in this article don't fit in with my understanding. However, I'm quite likely to be one of those people who have unwittingly accepted a lot of misinformation from various sources on the net. The following issues are a bit puzzling for me.

(1) If the 1Ds is already close to capturing what Canon's best lenses are capable of, why the need for an anti-aliasing filter? I thought aliasing resulted from the sensor not having the resolving power to properly record high frequency information.

(2) The D60 has greater pixel density than the 1Ds and therefore greater resolving power. Is the D60 already beyond the limits of good lens resolution? A 1Ds with the same pixel density as the D60 would have a 15.5MP sensor.

(3) The Foveon 3MP sensor appears to have a similar resolving power to the 6MP D60. It doesn't necessarily follow that a 10MP Foveon sensor would have the resolving power of a 20MP Bayer-type sensor, but it seems a reasonable assumption that it might.

(4) There's been speculation on this forum about the number of pixels required to portray a line. Norman Koren, who's much more technically and mathematically literate than I am, has put it at three. That gives the D60 a maximum resolving power of about 45 lp/mm and the 1Ds somewhat less than that. Is this really close to the best that good 35mm lenses can deliver?

(5) A well known formula for determining system resolution is 1/S = 1/L + 1/F, where L and F are the resolving powers of the lens and film respectively. Anyone with basic maths can see from this formula that to get the best out of either one of these factors, lens or film, the other has to be much higher. An 80 lp/mm Velvia film coupled with a lens also delivering 80 lp/mm gives a system resolution of 40 lp/mm. Doubling the resolution of the film to 160 lp/mm still gives a system resolution of only 53 lp/mm, and quadrupling it to 320 lp/mm (which is what a high resolution technical B&W film in a high contrast situation might be capable of) gives a system resolution of only 64 lp/mm. We're still short of the 80 lp/mm the lens is capable of. (A quick numeric check follows point 6.)

(6) There's a simple formula used by astronomers to calculate the maximum resolution that diffraction will allow from their telescopes at a given aperture. The figure varies with the wavelength of the light, but max lp/mm ≈ 1500/f-number is often quoted. As you can see, large apertures can theoretically produce astounding resolution. This is supported by aerial resolution tests in which some lenses have resolved in excess of 350 lp/mm. I don't know what the figures would be in typically low contrast situations, but there's a long way to fall before reaching the 45 lp/mm of the D60 (assuming Norman Koren is right).
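For anyone who wants to check the arithmetic in points (5) and (6), here's a small Python sketch; the figures are just the ones quoted above, not measurements:

[code]
def system_res(lens_lpmm, film_lpmm):
    # Combined system resolution from 1/S = 1/L + 1/F
    return 1.0 / (1.0 / lens_lpmm + 1.0 / film_lpmm)

def diffraction_limit(f_number):
    # Astronomers' rule of thumb: max lp/mm ~ 1500 / f-number
    return 1500.0 / f_number

for film in (80, 160, 320):
    print(f"80 lp/mm lens + {film} lp/mm film -> {system_res(80, film):.0f} lp/mm")

for n in (2.8, 8, 22):
    print(f"f/{n}: diffraction allows up to ~{diffraction_limit(n):.0f} lp/mm")
[/code]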

Hope you can find the time to address some of these issues and put me right.

Cheers!

bjanes

  • Guest
"Pixel Count And Future Imaging Chips" aticle
« Reply #1 on: December 24, 2002, 04:10:57 pm »

Here's a copy of what I posted on rec.photo.digital:


Michael Reichmann recently posted an essay on his web site in which he stated that megapixels beyond what the Canon 1Ds currently has (11MP) would not improve picture quality, since the sensor in that camera out-resolves the excellent Canon L lenses.

I find this hard to believe, but I don't have an EOS 1Ds camera or the lenses. Theoretical considerations and references to other web sites

http://www.normankoren.com/Tutorials/MTF7.html

http://www.clarkvision.com/imagedetail/sca...tml#digicamres1

cause me to doubt his claims. Michael is a very accomplished photographer, but he tends to overstate the advantages of digital. When the Canon D30 first came out, he claimed it was better than 35mm with fine-grain slide film (with qualifications in fine print).

The Canon sensor has 11M pixels in the 24 x 36 mm frame, or a pixel spacing of about 115 pixels per mm. Its Nyquist frequency is 58 line pairs per mm (it takes two pixels to resolve a line pair). That theoretical resolution is reduced by interpolation, blooming between pixels and the low-pass (anti-aliasing) filter used on the camera. The Canon L series lenses can do better than this.
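A minimal Python sketch of that arithmetic (assuming the 1Ds's published 4064 x 2704 pixel array):

[code]
def nyquist_lpmm(pixels, frame_mm):
    # Two pixels per line pair, so Nyquist = (pixels per mm) / 2
    return (pixels / frame_mm) / 2.0

# Canon 1Ds: 4064 x 2704 pixels on a 36 x 24 mm frame
print(nyquist_lpmm(4064, 36))  # ~56 lp/mm, in line with the ~58 above
print(nyquist_lpmm(2704, 24))  # same pixel pitch in the other direction
[/code]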

Norman Koren, in his excellent essay on the above web site, does calculations for the excellent Canon 28-70 f2.8 L zoom using published MTF figures. He calculates that the 1Ds resolves 47.5 and 66 line pairs/mm respectively at MTF of 50% and 10% (perceived sharpness correlates best with resolution at 50%). The same lens with a 24MP Bayer-pattern chip would resolve 65 and 93 lp/mm respectively. Mr Koren suggests that this may be the optimum for such a chip when one trades off resolution against noise. Dr. Clark comes to similar conclusions on his web site.

Ray

  • Guest
"Pixel Count And Future Imaging Chips" aticle
« Reply #2 on: December 25, 2002, 08:35:30 pm »

Quote
The problem is the opposite of what you state. The AA filter is needed to reduce the resolution and cut off the high frequencies. Thus the 1Ds, for example, is actually capable of recording higher resolution than it does, because of its AA filter.
Michael,
I can't understand the line of reasoning in the above quote. I accept that anti-aliasing filters reduce the resolution of the sensor; they work by blurring the image slightly. But to state that aliasing is caused by the sensor having too high a resolution is completely at odds with my understanding of the concept of aliasing. Then again, maybe I've misunderstood the situation. It wouldn't be the first time.

As I understand it, aliasing is an artefact caused by an INADEQUATE sample frequency. In other words, aliasing occurs where there are NOT ENOUGH pixels to capture all the detail in the picture. Very simply, the problem can be tackled in two ways. (1) Blur the image so the finest detail the lens is capable of is always within the spatial sampling frequency of the sensor (what a waste of lens resolution!!)
(2) Increase the sampling frequency (i.e. pixel density) so that the sensor 'out-resolves' the lens.

I would have thought the second way is preferable, and this is the way Kodak seems to be striving toward with its 14n. However, I would not be surprised if 14MP proves inadequate to avoid aliasing artefacts, especially at larger apertures. I surmise it will always be possible to reduce and/or eliminate aliasing artefacts by stopping down. The little sketch below shows the folding effect I mean.
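Here is a toy Python illustration of that folding; the sample rate is 1Ds-like but the numbers are only illustrative:

[code]
import numpy as np

fs = 116.0        # samples per mm, i.e. ~58 lp/mm Nyquist
f_detail = 70.0   # detail in the aerial image, beyond Nyquist

x = np.arange(200) / fs                     # pixel positions (mm)
samples = np.sin(2 * np.pi * f_detail * x)  # what the sensor records

# These samples are indistinguishable from those of a lower frequency:
f_alias = abs(f_detail - round(f_detail / fs) * fs)
print(f_alias)  # 46.0 -> a false 46 lp/mm pattern (moire), not the real 70
[/code]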

Have I got this wrong?

Michael

  • Guest
"Pixel Count And Future Imaging Chips" aticle
« Reply #3 on: December 26, 2002, 07:54:59 am »

Ray,

I wrote a detailed reply last night, but the server was cranky and it got lost. So here's a quick summary.

Think of the analogy with digital audio. A sharp-cut low-pass filter is used at about 18 kHz so that the audio signal doesn't "beat" or interfere with the sampling frequency.

Somewhat similarly, with a digital imaging chip we want to keep the sampling frequency (in combination with the resolution of the lens system) from beating against high-frequency subject matter (in other words, fine detail) in the image.

The AA filter is designed to reduce the resolution of the chip so that the chances of this are reduced. The problem is that such high frequencies turn up across a range of conditions, not at one specific spot. So, for example, we might get moiré with a certain lens at a certain focal length, at a certain f-stop and at a certain subject distance. Change one of the variables and you won't see it.

Kodak is not including an AA filter in the DCS 14n, to reduce cost and because they feel that due to the extremely high resolution of the system users won't encounter the problem too often. Canon takes the approach that it's worth the extra cost (depending on who you believe, $1,000 to $2,000). In any event we'll always see moiré somewhere, because at SOME frequencies there will be interference (beating).

It's a long way of saying that the resolution needs to be reduced on these cameras. Why? Because it's so high. Why don't we see this effect with film? Because the "sensors" in film (grains or dye clouds) are randomly distributed. The sensors in a chip are uniformly distributed, just like the uniform patterns in a woven fabric.
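If you like to see that film-versus-chip point in code, here is a toy Python model; it compares a regular sampling grid with randomly jittered positions and says nothing about real sensor geometry:

[code]
import numpy as np

rng = np.random.default_rng(0)
fs, f_detail, n = 116.0, 70.0, 512   # sample rate, too-fine detail, samples

grid = np.arange(n) / fs                           # regular grid ("chip")
jittered = grid + rng.uniform(-0.5, 0.5, n) / fs   # random positions ("grain")

for name, x in (("regular", grid), ("jittered", jittered)):
    v = np.sin(2 * np.pi * f_detail * x)
    spectrum = np.abs(np.fft.rfft(v))
    # A sharp spectral peak means a coherent false pattern (moire);
    # jittering smears that energy into ordinary-looking noise.
    print(name, round(float(spectrum.max() / spectrum.mean()), 1))
[/code]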

I've gone off on a tangent, but I hope this helps. Beyond this you'll need the services of an optical engineer. I'm just a photographer.

Michael

Ray

  • Sr. Member
  • ****
  • Offline
  • Posts: 10365
"Pixel Count And Future Imaging Chips" article
« Reply #4 on: December 27, 2002, 09:03:36 pm »

Quote
Kodak is not including an AA filter in the DCS 14n, to reduce cost and because they feel that due to the extremely high resolution of the system users won't encounter the problem too often.
Michael,
Exactly! Kodak feel that due to the extremely high resolution of the system (but more specifically, the high resolution of the sensor) users won't encounter moiré and aliasing artefacts too often. That's my point.

As I see it, the lens itself can be considered a low-pass filter; in fact, a variable low-pass filter. You can increase the efficacy (or lower the threshold) of the filter by stopping down below F8 or F11. But if you have to use an AA filter in addition to the 'natural' filtering of the lens, then there HAS to be room for improvement in the sensor, i.e. a higher-resolution sensor.

The way you've described this situation, simple-minded people like myself might get the impression you're saying that lower-resolution sensors like those in the D30 and Contax N1 don't need AA filters, and that the extremely high-resolution sensors of many 3 and 4MP point-and-shoot cameras, with a sensor area about 1/16th of full-frame 35mm and very high pixel density (equivalent to about 64MP if extrapolated to 35mm size), are in desperate need of strong AA filters, when in fact the reverse is the case. The photodetectors on these small chips are so small, typically less than 4 microns, that there's no need for an AA filter. The lens is the filter. That's how it has to be if you want to extract the maximum detail the lens can offer, and Paul has very eloquently described what a huge difference in fine detail is available when the AA filter is removed from his Kodak 660.

Samirkharusi has made an interesting point that increasing pixel density also increases noise and reduces dynamic range, and that low noise and high dynamic range are preferable. I wouldn't disagree with that. Given a choice between a resolution improvement with just a few high quality lenses and a real improvement in noise and dynamic range with all lenses, if that's the choice, I would choose the latter. But technology advances on many different levels simultaneously, and I don't see any reason to assume we've already reached a fixed limitation like diffraction. The D30, D60 and 1Ds all have different pixel densities, yet as far as I've read they have very similar noise and dynamic range characteristics, with the 1Ds having the edge, particularly with long-exposure night shots.

My final point is that it might be true that both the D60 and 1Ds (despite the AA filter) are close to the resolution limit of certain Canon lenses. When I bought the Canon 100-400mm IS zoom, I also got a 1.4x teleconverter. Having compared a few shots taken with and without the teleconverter at the 400mm end, at F8 and F11, I see no advantage in the teleconverter. In fact, to put a fine point on it, I would say that the 400mm scene, enlarged 1.4 times more than the 560mm scene, sometimes shows slightly more detail. It certainly has better contrast. Since any teleconverter is going to degrade the image to some degree, I would be prepared to admit, at this stage, that the resolving power of the D60's sensor is probably the equal of the long end of the 100-400 zoom. But the long end of the 100-400 has never been described as crash hot, has it?

During the next few days I'll try to fit in a few experiments using the 1.4x converter with different lenses. In fact, I can think of no better way to resolve this issue of how close current sensors are to the resolution limits of the lenses. I've always been sceptical of the benefits of teleconverters, because the simple fact is that once the image information has passed through the lens, that's it. No matter what the quality of the converter, it's not possible to create additional lines of resolution that weren't there in the original (aerial) image. The best a 2x converter can do is double the size of the image whilst halving the resolution. If the main lens is really high quality, with resolution to spare so to speak, then halving the resolution would at least ensure that ALL the information in the aerial image is recorded on the sensor. If the image with converter, when enlarged on screen or print, shows more detail than the image without converter enlarged to the same size, then that in my view would be proof that there's still a need for more pixels. The sketch below sums up the logic.
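Here's the logic as a back-of-envelope Python sketch, assuming a lossless converter, made-up lens figures and the same final framing:

[code]
def captured_lpmm(lens_lpmm, sensor_nyquist, tc=1.0):
    # Measured in no-TC sensor-plane lp/mm. A perfect teleconverter
    # leaves the lens's subject-space resolution unchanged but
    # multiplies the sensor's effective sampling of the subject by tc.
    return min(lens_lpmm, sensor_nyquist * tc)

sensor = 45.0                     # rough D60 figure from earlier posts
for lens in (40.0, 60.0, 120.0):  # hypothetical aerial resolutions
    print(lens, captured_lpmm(lens, sensor), captured_lpmm(lens, sensor, tc=2.0))
[/code]

If the 2x column beats the bare-lens column in a real test, the sensor was the bottleneck, and more pixels would help.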

Rainer SLP

  • Sr. Member
  • ****
  • Offline
  • Posts: 727
    • RS-Fotografia
"Pixel Count And Future Imaging Chips" article
« Reply #5 on: December 30, 2002, 05:46:21 pm »

Hi Ketil,
and what would that be good for?
Thanks and regards Rainer
 I am here for

Ray

  • Guest
"Pixel Count And Future Imaging Chips" aticle
« Reply #6 on: December 30, 2002, 07:34:59 pm »

To make the most of the image circle you need a square format. Any other format (apart from circular) is wasting potential image. For obvious reasons circular photos would be impractical. However, if we were one-eyed creatures we might have gravitated towards a preference for the circular format.

b.e.wilson

  • Full Member
  • ***
  • Offline
  • Posts: 104
    • http://science.uvsc.edu/wilson
"Pixel Count And Future Imaging Chips" article
« Reply #7 on: December 31, 2002, 01:17:53 am »

Quote
Anything fine, like tree limbs in the sunlight, was covered with [color aliasing]. The more sunlight on the subject, the worse the effect.
Do you think that, without the anti-aliasing filter, the computer in the camera could recognise the presence of color aliasing at high-contrast edges and remove it in post-processing? This seems a much more practical solution than the expensive filter, and perhaps Kodak's firmware does just this?

Paul Caldwell

  • Guest
"Pixel Count And Future Imaging Chips" aticle
« Reply #8 on: December 31, 2002, 08:13:57 pm »

Hello B E Wilson,

Kodak actually added a moiré-reduction step to the Photodesk software that works on the 760 raw files, but unfortunately didn't go back and do the same for the 660. I don't know how well it works.

As for the camera working on it while the image is processed, I don't see why it couldn't be done, but I like the way the 660 allows me to take it out later, where I can control the effect.

On the 14n, it's still anyone's guess how bad, if at all, this problem will be. I am hoping it will be small.

Paul Caldwell

Dan Sroka

  • Sr. Member
  • ****
  • Offline
  • Posts: 597
    • http://www.danielsroka.com
"Pixel Count And Future Imaging Chips" article
« Reply #9 on: January 01, 2003, 11:54:38 am »

Round books... I love it. But man, can you imagine trying to keep the printing presses in registration with those things spinning around?  :)

Yes, we think in squares, at least where chips are concerned. Actually, most chips are processed out of circular wafers that are 200mm or 300mm in diameter, and are very, very expensive. They are cut into squares because this is more efficient. Think of using a cookie cutter: when you cut circular cookies, you end up with a lot of waste. Unlike dough, the unused silicon can't just be rolled up into a ball and used again. Plus, you cannot eat it. (Sorry, our puppy woke us up way too early this morning and I am still tired.)

MatthewCromer

  • Sr. Member
  • ****
  • Offline
  • Posts: 505
"Pixel Count And Future Imaging Chips" article
« Reply #10 on: January 01, 2003, 07:04:38 pm »

Speaking of squares...

I sure wish they would make square sensors, with an option to do an in-camera crop to various format sizes, along with a live indication in the viewfinder. I shoot 75% vertical format and find the notion that I have to constantly rotate the camera vaguely ridiculous.

hesham

  • Guest
"Pixel Count And Future Imaging Chips" aticle
« Reply #11 on: December 24, 2002, 04:33:09 am »

I have some "thinking aloud" questions about the ideas in the article:
- If the 11MP sensor is close to the resolving limits of Canon's best lenses, why are there 20MP digital backs for medium format? My understanding is that lenses made for medium format are even less sharp than those of the 35mm format. I remember Michael once mentioned that while the 1Ds is almost a match for 4x5 equipment, it still cannot match his 6x7 equipment. But 6x7 film performs better than smaller film because it has more image-capturing elements (grain), so why can't we get the same performance as 6x7 from a 36x24mm sensor that has enough pixels? Is it because grain is randomly distributed while pixels are not?

Michael

  • Guest
"Pixel Count And Future Imaging Chips" aticle
« Reply #12 on: December 24, 2002, 11:46:16 am »

I knew this was going to happen.

I don't have the time to answer every question in detail, but let me address a couple of topics here. I've already answered a couple of others in the new Q&A section of the article itself.

"(1) If the 1Ds is already close to capturing what Canon's best lenses are capable of, why the need of an anti-aliasing filter? I thought aliasing resulted from the sensor not having the resolving power to properly record high frequency information."

The problem is the opposite of what you state. The AA filter is needed to reduce the resolution and cut off the high frequencies. Thus the 1Ds, for example, is actually capable of recording higher resolution than it does, because of its AA filter.

The Foveon technology is an unknown quantity for me at this time and I don't feel qualified to comment on it. The thing to keep in mind is that a Bayer-matrix chip (like all current chips other than the Foveon) has more actual pixel sites in the luminance channel. The Bayer matrix doesn't reduce resolution; it simply uses all the pixels that are there to extrapolate colour information. My concern with the Foveon chip at this time is that though it does an excellent job on colour, it has poor noise characteristics. The light is quite attenuated by the time it gets down to the lower two layers, generating higher noise levels. All is compromise.
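As a toy illustration of that site count (the RGGB layout only, not any camera's actual demosaicing), in Python:

[code]
import numpy as np

rows, cols = np.indices((8, 8))             # a tiny Bayer (RGGB) tile
red   = (rows % 2 == 0) & (cols % 2 == 0)
green = (rows + cols) % 2 == 1
blue  = (rows % 2 == 1) & (cols % 2 == 1)

# Green, the channel closest to luminance, gets half of all the sites;
# red and blue get a quarter each. The values missing at each site are
# interpolated from neighbours rather than thrown away.
print(green.sum(), red.sum(), blue.sum())   # 32 16 16
[/code]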

Yes, the D60 actually has higher resolution than the 1Ds. In fact its remarkable imaging capabilities have been somewhat overshadowed by the 1Ds. But because of its smaller sensor, for a given reproduction size its image needs to be enlarged more, and that's where the 1Ds wins out.

Finally, and because I'm running out of steam and time, please keep in mind that pure math tells us little about the real world of imaging. I stopped even reading statistics for products long ago. "Show me the money," as the saying goes; or more to the point, show me the image. That's all that counts.

You can't always reconcile what theory states with what measurements record, against what we actually see. I know that lots of folks don't want it to be so, but that's been my experience. That's why I do the tests and reports that I do. Otherwise we could all read Phil Askey's tech reports on DPReview and go away happy. Unfortunately they tell us little about what we will see in the real world, and that's what I try to write about. I can't always explain the "why" of what I see, but there it is.

Happy holidays,

Michael

BJL

  • Sr. Member
  • ****
  • Offline
  • Posts: 6600
"Pixel Count And Future Imaging Chips" article
« Reply #13 on: December 24, 2002, 07:33:21 pm »

Quote
(1) If the 1Ds is already close to capturing what Canon's best lenses are capable of, why the need for an anti-aliasing filter? I thought aliasing resulted from the sensor not having the resolving power to properly record high frequency information.

...

(6) There's a simple formula used by astronomers to calculate the maximum resolution that diffraction will allow from their telescopes at a given aperture. The figure varies with the wavelength of the light, but max lp/mm ≈ 1500/f-number is often quoted.
On point (1), the Kodak 14n will not have an anti-aliasing filter: do they think it is past the lens resolution limits?

Thanks for point (6) about diffraction limiting of sharpness; I have heard more pessimistic versions, though, suggesting that diffraction limiting with 35mm starts at about f/8 or f/11 (I have even heard f/5.6 given as the typical sharpest f-stop). Thus to fully exploit any significantly higher resolution, apertures have to get larger than that, at a significant cost in depth of field; even more so if you tighten DOF standards to match the overall higher sharpness. So is all photography at better than 35mm sharpness optically limited to having only moderate-to-shallow DOF?

Ray

  • Guest
"Pixel Count And Future Imaging Chips" aticle
« Reply #14 on: December 26, 2002, 04:44:20 am »

Quote
So is all photography at better than 35mm sharpness optically limited to having only moderate-to-shallow DOF?
BJL,
I believe that shallow DoF is a real problem with large format cameras. 8"x10" field cameras with a standard 300mm lens have to be stopped down to around F64 to achieve the same DoF as a 35mm camera at F8, and of course resolution at F64 is not too great because of the effects of diffraction. On the plus side, the film does not degrade lens performance significantly, and the image doesn't have to be enlarged to the same degree for the same size print. Similar trade-offs exist with the other formats, so F16 for 6x9cm and F32 for 4x5" are required for DoF similar to 35mm at F8.

But it doesn't work out that the optimum performance of standard lenses for different formats occurs at equivalent F stops. A 300mm lens for 8x10" is likely to have a maximum aperture of F8 and optimum performance at F22, which is equivalent to F2.8 to F3.5 in 35mm terms. The 300mm lens at F22 is probably limited only by diffraction. It would be rare to find a 35mm lens that is diffraction limited at F2.8. I guess the tolerances are too tight and the manufacturing too difficult for such a lens. (The sketch below checks the equivalence arithmetic.)
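In Python, with nominal diagonals (312mm for 8x10", 43.3mm for 35mm):

[code]
def equivalent_fstop(f_number, format_diag_mm, ref_diag_mm=43.3):
    # Rule of thumb: for matched DoF at the same angle of view and
    # print size, scale the f-number by the ratio of format diagonals.
    return f_number * ref_diag_mm / format_diag_mm

# 8x10" film, roughly a 312mm diagonal, in 35mm terms:
print(round(equivalent_fstop(64, 312), 1))  # ~8.9 -> roughly F8
print(round(equivalent_fstop(22, 312), 1))  # ~3.1 -> the F2.8-F3.5 above
[/code]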

samirkharusi

  • Full Member
  • ***
  • Offline
  • Posts: 196
    • http://www.geocities.com/samirkharusi/
"Pixel Count And Future Imaging Chips" article
« Reply #15 on: December 27, 2002, 04:55:34 am »

It is unfortunate that many people get into heated discussions about things like resolution without having bothered to obtain first-hand knowledge of today's realities. It's actually quite easy. Most of you own DSLRs and decent lenses. Just download Norman Koren's lens test chart and use your best lens on it. I did, with several prime lenses. You will immediately learn that all lenses readily available to consumers, i.e. at the standard of your best lens, are actually horribly, disappointingly lousy at anything a couple of stops away from f8. The superteles may indeed perform better wide open, but I do not own any to test.

Nyquist critical-sampling theory says that you can "satisfactorily" sample a line pair with 2 pixels, basically 50 lp/mm with 10 micron square pixels. With the Bayer array in a D30, plus its anti-aliasing filter and its 9.5 micron pixels, and after extensive chart testing, I have come to the conclusion that such cameras do resolve close to Nyquist on a 45 degree diagonal, and at about 85% of Nyquist in the horizontal and vertical directions, in white light. These numbers are in tune with Phil Askey's measurements on resolution charts for the D30, the D60 and the 1Ds. The Foveon array does go to full Nyquist and, beyond that, has few artifacts, very similar in behaviour to an astro CCD.

A secondary result of my testing is that the Canon prime lenses are not really up to any of these sensors when one starts with 10% contrast test targets (Norman had these in an earlier version of his test chart), nor as you move away from f8. By f4 or f16 you start seeing clear deficiencies. It is indeed possible, and very likely, that the best Canon lenses at f8 do out-resolve both the D60 and the 1Ds. There is also indeed an advantage for resolution in having a sensor that over-samples by, say, a factor of 2 compared to Nyquist; any planetary astrophotographer knows this. But there's no way you can get around losing effective ISO and increasing noise by using smaller pixels. One day sensor technology may get so good that we routinely have virtually noise-free ISO 6400, and a new compromise may be to use smaller pixels and sacrifice ISO. That day has not arrived yet. For the moment, pixels in the range of 7 to 10 microns square seem to be a very good compromise, keeping in mind ISO, noise and the resolution of available lenses.

Just shoot Norman's test chart with your best lens at f4 and become a disbeliever in all those claims of 80+ lp/mm lenses. A D60 on the 45 degree diagonal will show up any lens that resolves less than 65 lp/mm. Hopefully your best lens will show you 60+ lp/mm at f8. Remember that Norman's current chart has practically 100% contrast bars... And at f22, with ANY lens, you will see the D60 sensor out-resolve your lens.

We also have the absurd situation that people in almost the same breath start talking of Depth of Field. Let us just admit that a lens focuses on a single plane, nil depth, except if we accept some fuzz. A 0.3mm CoC represents about 3 lp/mm in a print. Let us say you accept that as you go about calculating Depth of Field. If that print is a 10x magnification of a 1Ds image (a 10x15" print), then you have accepted 30 lp/mm on the sensor as "sharp"! (The sketch below runs this arithmetic.) And we worry that the 1Ds sensor does not have fine enough pixels? It does resolve 50 lp/mm (2400 lines per picture height, as measured by Phil Askey). IMHO the pixels are fine enough for current lenses. I would chase higher ISO first, before making them finer.

Would it not be fantastic if we could shoot at ISO 1000 and get ISO 100 1Ds quality? Throw away tripods... That would be much more appealing than resolving just a bit more at f8... Anyway, each to his own tastes...
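The DoF arithmetic above as a Python sketch, using my rough conversion of one line pair per CoC diameter:

[code]
def sensor_lpmm_for_print(print_coc_mm, magnification):
    # The accepted print CoC, shrunk by the enlargement factor, is the
    # blur you have accepted at the sensor; one line pair per CoC
    # diameter converts that to lp/mm.
    return magnification / print_coc_mm

# 0.3mm CoC in a 10x enlargement (1Ds frame -> ~10x15" print):
print(sensor_lpmm_for_print(0.3, 10))  # ~33 lp/mm accepted as "sharp"
[/code]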
Bored? Peruse my website: [url=http://ww

Paul Caldwell

  • Guest
"Pixel Count And Future Imaging Chips" aticle
« Reply #16 on: December 27, 2002, 11:09:15 am »

Good essay.

I wanted to add some real-world experience on the AA subject. I have always believed that the AA filter hurts the overall image in that you lose the finer details, and my recent work with the Kodak 660 has borne this out.

I have always understood the issue of the AA filter as Ray wrote it: in effect you are blurring the image to keep the pixels from fighting over the finer details, which creates color aliasing, or the Christmas-tree-light effect. I also understand that frequency moiré is countered with such a filter.

In my work, outdoor shooting, I almost never run into frequency moiré, as it tends to show up in fabrics. I have encountered it every so often outdoors, but most of my outdoor subjects are made up of such random subject matter that it's very hard to get frequency moiré.

The color aliasing issue, however, is very much a problem. The S2 and D1x both have low-pass filters over the CCD to remove it. When I started shooting with a Kodak 660, the first thing I did was remove the AA filter to see exactly what the difference would be. The results IMO were amazing: much better fine detail, edge detail, etc., however with a penalty of color aliasing. Anything fine, like tree limbs in the sunlight, was covered with it. The more sunlight on the subject, the worse the effect. The problem is a real catch-22, as you are getting a huge boost in overall detail, but the color aliasing basically ruins the image. However, with the simple tool from Camera Bits, Quantum Mechanic, you can easily remove the color aliasing 100% with 0% loss of image detail. For my work, outdoor scenic shooting, this is the perfect solution. Yes, I have to add an extra step, but it's worth it IMO. I can now print a 6MP image from the 660 at 180 dpi with only a tad of sharpening, something I could never do with my D1x images.

The 660 is big, bigger than the 1Ds, but I am willing to take the weight as it bests anything I have shot digitally in over 4 years.

As for the AA filter in the 14n, I have to second Michael's point, as the cost of a full-frame AA filter would have to be close to $2,000. The Kodak AA filters are $900, and they are not full frame. But I also have faith that Kodak realized that their sensor's pixels are a full micron smaller than the 1Ds's, and thus might not require an AA filter, since the smaller pixels may not have the problem that the 660's or 1Ds's do. I believe that in the basic design Kodak hoped to keep the price down by using the new CMOS chip. Remember that Kodak shipped the 760 with only the IR filter, again as a way to cut costs. They also added a moiré-reduction step to Photodesk that only works with the 760 raw files.

If Kodak ships a camera that has as much color aliasing without an AA filter as the 660 does, then IMO they have made a big mistake. The average person will not know how to get it out, and it does at first make your image look terrible. To be honest, for my work I would take it either way, as I have seen for myself just how sharp a digital camera can be without any filter.

Yes, I realize that you can sharpen after the fact, but I don't believe you can ever get it all back. What is lost to the AA filter at the time of capture will never all come back, as far as the finer details are concerned.

Paul Caldwell

Ketil Samuelsen

  • Guest
"Pixel Count And Future Imaging Chips" aticle
« Reply #17 on: December 30, 2002, 04:34:17 pm »

Why can't we get a round image chip that would capture the whole image (circle) projected by the lens?
Is there some physical reason, or are we just so used to thinking in rectangles?

BJL

  • Sr. Member
  • ****
  • Offline
  • Posts: 6600
"Pixel Count And Future Imaging Chips" article
« Reply #18 on: December 30, 2002, 06:01:23 pm »

I had briefly wondered about the "whole image circle sensor" idea, to allow the maximum possible image size for image shapes other than 3:2; for example, a 4x5 shape at about 27x34mm (rather than cropping 35mm's 24x36mm to 24x30mm).

I suspect the mirror size would prevent this for full-frame 35 format, and larger mirrors would interfere with some existing lenses. Otherwise I would be interested in 26x34.6mm, filling the 43.3mm image circle of 35mm with a generally more useful 3:4 aspect ratio.
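For anyone who wants to play with the geometry, a small Python sketch (43.3mm being the nominal 35mm image circle):

[code]
import math

def max_frame(diag_mm, aspect_w, aspect_h):
    # Largest w:h rectangle inscribed in a circle of the given diameter
    k = diag_mm / math.hypot(aspect_w, aspect_h)
    return round(aspect_w * k, 1), round(aspect_h * k, 1)

print(max_frame(43.3, 4, 3))  # (34.6, 26.0): the 26x34.6mm above
print(max_frame(43.3, 5, 4))  # (33.8, 27.0): the ~27x34mm 4x5 shape
[/code]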

sergio

  • Sr. Member
  • ****
  • Offline
  • Posts: 666
    • http://www.sergiobartelsman.com
"Pixel Count And Future Imaging Chips" article
« Reply #19 on: December 30, 2002, 08:15:11 pm »

and circular magazines and books.