
Author Topic: The D60 and CanThe D60 & Canon EF 1.4x ll Extender  (Read 4381 times)

Ray

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 10365
The D60 and CanThe D60 & Canon EF 1.4x ll Extender
« on: December 29, 2002, 09:25:18 am »

Oops! Can't seem to edit the topic title.
Logged

Ray

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 10365
The D60 and CanThe D60 & Canon EF 1.4x ll Extender
« Reply #1 on: January 29, 2003, 08:24:04 pm »

Samirkharusi,
Well, thanks for responding. I've been wondering if the garbled subject title put people off, but it's good to get confirmation that I'm right.

I should have realised as soon as you mentioned Rayleigh's law that you're into astronomy. Living away from the city lights, I often marvel at how clear the night sky is, and occasionally fantasize about getting a REALLY powerful telescope. Unfortunately, I think I'd need more than one life to engage in this pursuit.

Your point about oversampling is interesting. Seems to me that is the goal in any digital system: to raise the Nyquist point (2 pixels = 1 line pair) above the resolution of the lens. Using a teleconverter with good lenses can achieve that goal with current systems. We need to get to the stage where ALL teleconverters are completely redundant with even the best Canon lens (with a Photodo rating of 4.8).
Logged

samirkharusi

  • Full Member
  • ***
  • Offline Offline
  • Posts: 196
    • http://www.geocities.com/samirkharusi/
The D60 and CanThe D60 & Canon EF 1.4x ll Extender
« Reply #2 on: February 19, 2003, 12:56:06 am »

Hi BJL, Your questions can be answered scientifically and precisely from the physics but I'll have to put in quite a bit of research. Nevertheless I'll try to answer them from what I can recall from memory (thus I cannot vouch for total accuracy and anyone more familiar with this stuff is most welcome to jump in...).
First, binning: most effects in photoelectricity trace back to a very old Einstein derivation (the photoelectric work for which he got the Nobel Prize, not Relativity): photoelectric noise is proportional to the square root of the signal. If a signal releases 100 photoelectrons, the noise will be about 10 electrons (10%); a signal that releases 10,000 electrons will have a noise of about 100 electrons (1%) from the same CCD. That's basically why a high ISO setting (less light, thus less signal) has more noise than low ISO, in a sort of roundabout way. So a binned quad-pixel collects 4x the signal and thus has 2x the signal-to-noise ratio of an unbinned pixel; I do not recall a 4x improvement in S/N, just the square root.

This is often referred to as on-chip binning. Binning in post-processing software (e.g. Photoshop) does not deliver quite the same result, though it progresses in a similar fashion. Basically, on-chip binning lifts more of the signal above the other noise sources (amplifier, read, analog-to-digital converter, etc.); Photoshop binning cannot, since such low-level signals are already buried in those other sources of noise. Thus S/N considerations will always lead a chip designer to use large pixels (or, if Foveon, binned pixels) for a camera that is meant to be a speed demon (ISO 6400, etc). The loss, of course, is in resolving power.

If we chug along this route, it seems reasonable that if Foveon survives the next several years, the technology could indeed give us a full-format camera that could be used at very high resolution (say, 16 megapixels) for normal use and simply switched over with 2x2 binning to a 4 megapixel speed demon. Is this why Canon is rumoured to be interested in X3-type technology?
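The square-root arithmetic above is easy to sanity-check numerically. A minimal sketch (assuming pure photon shot noise, ignoring read and dark noise):

```python
import math

def shot_noise_snr(signal_electrons: float) -> float:
    """Photon (shot) noise follows square-root statistics: noise = sqrt(signal),
    so S/N = signal / sqrt(signal) = sqrt(signal)."""
    return signal_electrons / math.sqrt(signal_electrons)

# A pixel collecting 100 e- has noise ~10 e- (10%); 10,000 e- has ~100 e- (1%).
print(shot_noise_snr(100))     # 10.0
print(shot_noise_snr(10_000))  # 100.0

# On-chip 2x2 binning sums 4 pixels' charge before readout:
# 4x the signal -> sqrt(4) = 2x the S/N, not 4x.
print(shot_noise_snr(4 * 100) / shot_noise_snr(100))  # 2.0
```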
I do have one query left over, though, for which I never got a satisfactory answer, since I have never personally had contact with anyone who designs sensors. The Bayer array is usually GRGB. With current very fast processors it should be possible to deconvolve signals from an LRGB array (L being white light). I have actually measured the pass-through of astro-CCD interference filters. One might expect that an R, G or B filter would pass about a third of white light; my crude testing showed that they pass between a fifth and a sixth of the L signal. Of course one has to make allowances for infrared, which is unwanted in a normal camera, and the dye filters used in Bayer arrays are likely much worse. Basically I estimate that an LRGB array should have some 30% higher effective ISO than a GRGB Bayer. There has to be a reason why they do not build such arrays, but I've yet to hear a scientifically convincing one...
Logged
Bored? Peruse my website: [url=http://ww

BJL

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 6600
The D60 and CanThe D60 & Canon EF 1.4x ll Extender
« Reply #3 on: February 23, 2003, 12:16:11 pm »

Quote
... are you implying that Canon could design a 22MP version of the 1Ds with current technology, that would perform pretty much the same in terms of S/N and dynamic range except that all the ISO settings would have to be downscaled by a factor of four ...
First, thanks to Samirkharusi for his answer to my question and for reminding me about the "Einsteinian" square-root statistical behavior of noise. So binned quad photosites will be twice as bad as a single bigger site, but only if the noise level per photosite (in typical electron number) is the same. Perhaps smaller sites and smaller, lower-power components on the CCD or CMOS can also reduce the noise level per site somewhat? Pure thermal noise might also be proportional to the square root of photosite size, for example, by the same "Einsteinian" noise behavior that Samirkharusi mentioned. With that simplified approximation, it would be an exact wash between the "fewer, bigger" and "more, smaller with binning" options, with the latter simply adding the flexibility of turning off the binning when resolution is more important than noise levels.

On Ray's question quoted above, pending a more expert answer, I am not sure that the S/N or dynamic range [or "exposure latitude" for the old-timers] can be maintained with smaller photosites just by reducing the sensitivity setting [or "ISO", again for the old-timers], due to the smaller upper limit on the number of electrons they can hold before "blow-out" [about 1000 electrons per square micron, from a few specs I checked].

If that "maximum electron count signal" shrinks faster than the typical "electron count noise" with shrinking pixels, S/N shrinks. Perhaps one could work out a pattern from the data sheets for sensors at sites like Kodak's and Sony's:
http://www.kodak.com/global/en/digital/ccd/
http://www.sony.co.jp/~semicon/english/90203.html


Relatedly, FujiFilm has just launched a clever-sounding idea (Super CCD SR) for maintaining good dynamic range/exposure latitude with small photosites: a mixture of larger, high-sensitivity photodiodes with smaller, lower-sensitivity ones, with the low-sensitivity sites used to extend the high end of the exposure range past the point where the high-sensitivity sites have "blown out". This still seems to require reducing electron noise per site, which Fuji claims its Super CCD design is good at; no opinion from me as to how well this will actually work.
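Fuji has not published how the two readouts are combined, but the general idea can be sketched as a toy linear model (the function, the merge rule and the sensitivity ratio here are all assumptions for illustration, not Fuji's actual design):

```python
def merge_sr(high_sens: float, low_sens: float, full_well: float,
             sens_ratio: float) -> float:
    """Toy merge of a high-sensitivity and a low-sensitivity photodiode
    reading the same light. Values are linear exposure estimates in
    arbitrary units where the big diode clips at full_well.
    """
    if high_sens < full_well:      # main diode not blown out: trust it
        return high_sens
    # Main diode saturated: reconstruct the true level from the small
    # diode, which is sens_ratio times less sensitive and not yet clipped.
    return low_sens * sens_ratio

# Scene brightness 5.0 in units where full_well = 1.0, sens_ratio = 16:
# the big diode clips at 1.0, the small one reads 5/16 and recovers 5.0.
print(merge_sr(high_sens=1.0, low_sens=5.0 / 16, full_well=1.0, sens_ratio=16))  # 5.0
```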

[As an aside, my gut feeling is that the sort of strategies discussed here might in the end count for more than the "purist" X3 ideal of recording all three primary colors at each photosite in order to eliminate "Bayer pattern interpolation". To me, what counts is reducing the total ill-effect of the accumulated noise, distortion, blurring and other infidelities of the many steps in the process, rather than worrying about completely eliminating any one source of "infidelity"; Bayer pattern interpolation is just one amongst many interpolation/smoothing/blurring steps that happen en route from the light that reaches the camera lens to the image that enters my optic nerves.]
Logged

Ray

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 10365
The D60 and CanThe D60 & Canon EF 1.4x ll Extender
« Reply #4 on: February 27, 2003, 04:30:21 am »

Quote
Fuji's diagrams suggest that the small and large photosites sit under a single microlens occupying the octagonal region where there used to be just one photosite on earlier SuperCCD's, and read the same light blurred out over both of them: the resolution length scale is still the combined size of the big/small pair, not the size of the smaller photodiode.
BJL,
Seems like a clever innovation. Fuji's comparative images would certainly suggest a very worthwhile improvement. But there's a big question mark, isn't there? We're used to clever innovations trickling down. What's the catch? Less 'fill factor' maybe. I don't know.

Nevertheless, having bought a Nikon 4300 as a present just 2 1/2 months ago, I wish I'd held off.
Logged

Ray

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 10365
The D60 and CanThe D60 & Canon EF 1.4x ll Extender
« Reply #5 on: December 29, 2002, 09:10:44 am »

I've always considered teleconverters to be of dubious value. Once the image has passed through the main lens, how can it be improved by inserting another piece of glass between the lens and the film or sensor? Well, of course it can't - unless the converter has been specially designed to correct an aberration of a specific lens, which, I believe, is not the purpose of teleconverters. The teleconverter enlarges a portion of the image whilst simultaneously reducing the resolution of the image by the same degree. The net effect, at best, is zero, and that's with a perfect, aberration-free converter. I'm talking about the image before it reaches the film or sensor.

So what's the use of a converter, I ask myself. Well, it seems to me it helps overcome the deficiencies of the film or sensor. If the film or sensor is not able to capture the full resolution of the lens (and what film or sensor is?), then reducing the resolution (in terms of lp/mm) with a converter should allow more detail to be captured. With film there's an additional advantage: even though no more detail might be recorded (because of a low-quality converter and/or lens), the film grain will be smaller in relation to the enlarged size of the image. With a high-quality digital sensor such as the D60's, grain is not an issue, so the only benefit must be more detail.

Now some people seem to think that sensors such as the D60's and the 1Ds's are very close to the limits of the lenses they are using, and that greater pixel density will be of little value. Well, that might be true, depending on the lenses in question. I suspect it's true of general consumer-grade zooms. But I'm almost certain it's not true for high-quality primes. It is true of the long end of the 100-400 IS zoom (with the D60 at least; not sure about the 1Ds).

It's definitely not true of the TS-E 90mm. This is a tilt & shift lens which gets an overall rating by Photodo of 3.9. Not bad! Canon have better lenses if one discounts the T&S mechanism, but this is a good quality prime which gives an idea of how much room for improvement exists with the D60's sensor. If the D60's sensor were really capable of recording the full detail this lens delivers, then a teleconverter would be of no use; it could do no more than degrade the image, and it would be better to crop the wider image from the lens itself and enlarge it to the same size as the teleconverted image. All converters will degrade the original aerial image (after it's passed through the main lens) to some extent; no getting away from this. Having just carried out a number of tests with my TS-E 90, I can state categorically that the TS-E 90 plus 1.4x converter with the D60 produces a MORE DETAILED result than the lens alone.

I've taken about 36 shots at various apertures of finely detailed foliage, distant scenes at infinity, and test charts downloaded from Norman Koren's web site. (Thanks, Norman! Very revealing charts.) There's no doubt about the results. The charts make it easier to quantify the differences, but the differences are noticeable in all the shots at all apertures between F2.8 and F11. Incidentally, the converter even with the lens set at F2.8 shows more detail than the lens itself at F8, which as usual is the optimum aperture for this lens (although F5.6 and F11 are almost equally optimal).

If anyone thinks I've drawn some wrong conclusions here, you'll let me know, won't you?
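Ray's argument - that a converter helps whenever the sensor, not the lens, is the bottleneck - comes down to Nyquist arithmetic. A rough sketch (the D60's ~7.4 micron pixel pitch and the 80 lp/mm lens figure are assumptions for illustration):

```python
# Sensor Nyquist limit and the effect of a 1.4x teleconverter.
pitch_mm = 0.0074                  # assumed D60 pixel pitch, ~7.4 um
nyquist = 1 / (2 * pitch_mm)       # lp/mm: 2 pixels per line pair
print(round(nyquist, 1))           # ~67.6 lp/mm

# A 1.4x converter magnifies the aerial image, so detail the bare lens
# projected at 80 lp/mm lands on the sensor at 80 / 1.4 lp/mm -- now
# below Nyquist and recordable, at the cost of a smaller field of view.
lens_detail = 80                   # assumed lp/mm at the film plane, bare lens
print(round(lens_detail / 1.4, 1)) # ~57.1 lp/mm
```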
Logged

samirkharusi

  • Full Member
  • ***
  • Offline Offline
  • Posts: 196
    • http://www.geocities.com/samirkharusi/
The D60 and CanThe D60 & Canon EF 1.4x ll Extender
« Reply #6 on: January 28, 2003, 12:27:38 am »

You are of course right, as your eyes have told you. But there's always a price to pay. As any amateur CCD astro-imager knows, when working with diffraction-limited optics you match the pixel size to the diffraction limit, for optimal use of your equipment at the Nyquist criterion. This is where you get an excellent balance of resolution and "speed" (ISO equivalence/noise). However, resolution does not stop at the diffraction limit; it goes on and on, albeit at lower and drastically lower contrast (MTF). Photodo explains this in fair detail. So when you can afford to work at a lower "speed" because the object you are imaging has plenty of light (Moon and planets), you prefer to go to 2x or 3x Nyquist - over-sampling. When this is combined with compositing several images on top of each other, it becomes possible, indeed, to go beyond the "diffraction limit" or even atmospheric jitter. The latter needs short exposures to freeze, so jacking up the magnification works against you for jitter but with you for resolution. Compromises, compromises... Of course with mass-produced camera lenses, even the super-priced ones, we are usually way below the diffraction limit anyway, but as you found out, it should be possible to eke out extra resolution by over-sampling/magnification, especially if in addition you contrast-stretch images of, say, a lens test chart, or a planet for that matter. Personally I have found that astro CCDs need less magnification than an equivalent Bayer-array DSLR - the Foveon effect - and while I can eke out my highest-resolution planetary images with an 8" scope at 6000mm f/30 on my astro CCD (7.5 micron pixels), I need to go to 24000mm f/120 with a D30 (9.5 micron pixels + Bayer array). OK, I could probably settle for something a bit less, but I do not have equivalent-quality teleconverters (eyepieces for eyepiece projection) for the in-between.
At these magnifications one gets a lot of mush in the images, and a lot of post-processing is required to get something usable - not always possible with terrestrial images of birds, etc. So, will we have over-sampling in consumer DSLRs? Yes, once we have noise-free behaviour at very high ISOs, or when the manufacturers wish to say "5 megapixels" in the P&S stuff marketed to the uncritical. Using the same fabrication technology, pixels half the size have 4 times less sensitivity; no way around that bit of physics linking collection area to the number of photons captured. So, which is better: a 1Ds as it is today, or with 2x over-sampling for a very minor improvement in resolution (<< 2x) but noise characteristics that are 4x worse? And as lens designs better approach the diffraction limit, a new question comes up: which is better, a smaller sensor overall with smaller pixels yielding a smaller camera system, or sticking to full-35mm format and having much higher ISOs available? ISO 6400 for everyday use will be great! Compromises, compromises... Eventually it's the customer who'll decide.
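The pixel-to-diffraction matching mentioned above can be put in numbers. A rough sketch for a monochrome sensor at the Nyquist criterion (the 2*pitch/lambda rule and the choice of 550nm green light are simplifying assumptions):

```python
# Matching pixel pitch to the diffraction limit (monochrome sensor).
# A perfect lens at f-number N has its MTF cutoff at 1/(lambda*N)
# cycles per unit length; Nyquist sampling of that cutoff needs
# pitch <= lambda*N/2, so the "critical" f-number for a given pitch
# is roughly N = 2 * pitch / lambda.
wavelength_um = 0.55          # green light
pitch_um = 7.5                # astro CCD pixel size quoted above

critical_f = 2 * pitch_um / wavelength_um
print(round(critical_f, 1))   # ~27.3, close to the f/30 quoted above
```

A Bayer-array camera samples each colour more coarsely, which is consistent with needing a still higher f-ratio for the same pixels, as described above.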
Logged
Bored? Peruse my website: [url=http://ww

BJL

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 6600
The D60 and CanThe D60 & Canon EF 1.4x ll Extender
« Reply #7 on: February 18, 2003, 08:54:32 pm »

samirkharusi,

   you seem like a person who could answer this, at least for the grayscale sensors of astronomy.

If you compare two sensors, one with half the linear photosite size of the other and about four times the noise level at each photosite, what is the best one can do in terms of noise by then averaging the values from each 2x2 set of the smaller photosites to produce a single output pixel [binning?], compared to the direct output of the bigger photosites? Can you reduce the noise level by the same factor of four? Can you do this while also filtering out some aliasing effects?
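The averaging part of the question is easy to simulate: for independent random noise, averaging a 2x2 block reduces the noise standard deviation by sqrt(4) = 2, not 4. A quick sketch (assuming Gaussian per-site noise; the numbers are illustrative):

```python
import random
import statistics

random.seed(0)
true_signal = 100.0
noise_sigma = 8.0
n_trials = 20000

# Noise of a single photosite vs. the average of a 2x2 block of
# independent photosites with the same per-site noise.
singles = [random.gauss(true_signal, noise_sigma) for _ in range(n_trials)]
binned = [statistics.fmean(random.gauss(true_signal, noise_sigma)
                           for _ in range(4)) for _ in range(n_trials)]

print(round(statistics.stdev(singles), 1))  # ~8.0
print(round(statistics.stdev(binned), 1))   # ~4.0: averaging 4 halves the noise
```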

What if one takes each 2x2 chunk of Bayer pattern data (one red, one blue, two green) and produces a single RGB pixel output (averaging needed only for the green?), for comparison to an "X3" array (Foveon style) with photosites of four times the area, each recording all three colors? Since so far Foveon's X3 sensors have more noise than Bayer pattern photosites of the same size (perhaps due to having to share the light between three stacked sensors?), this "binning" is one option I would like to compare to the X3 strategy, though Bayer pattern sensors probably work better with other interpolation/smoothing algorithms than with this crude one.
Logged

Ray

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 10365
The D60 and CanThe D60 & Canon EF 1.4x ll Extender
« Reply #8 on: February 22, 2003, 05:57:33 am »

Quote
Using the same fabrication technology, pixels half the size have 4 times less sensitivity. No way around that bit of physics linking collection area to the number of photons captured.
Samirkharusi,
Just so I'm clear on this point, are you implying that Canon could design a 22MP version of the 1Ds with current technology, that would perform pretty much the same in terms of S/N and dynamic range except that all the ISO settings would have to be downscaled by a factor of four, so that the lowest 50 ISO of the current 1Ds would become 12 ISO, 100 ISO would become 25 ISO and the top two highest ratings would probably be discarded?

If this is true, it raises some interesting comparisons. There's a lot of interest in how far the 35mm digital format can go. Will it ever equal the quality of an LF 8x10 or 6cm x 17cm landscape camera? I would suggest that if we're prepared to accept just a small portion of the inconveniences and difficulties that the large formatter has to contend with (i.e. if there's a market for such a camera), then such a camera would seem to be theoretically possible.

I've never used an 8x10 field camera, but I can imagine the difficulties compared with the 'miniature' 35mm format. To get a decent DoF it's necessary to use F64 (Ansel Adams's preferred F stop), roughly equivalent to F8 on 35mm. Even with a 400 ISO film on a bright day, F64 means an exposure of around 1/15th of a second (if my maths is correct). With Provia 100F and F22 for maximum resolution and large, tack-sharp prints, exposure is still 1/30th at best.

So how would a full-frame 22MP 35mm sensor compare at maximum performance, at ISO 12 and F2.8 (equivalent DoF to LF F22)? Well, on a bright day, about 1/250th. What's the problem? Okay, most lenses are not too good at F2.8 - but ALL lenses?
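The exposure figures above can be checked against the sunny-16 rule, treating it as a rough model of a bright day (the helper function is just for illustration):

```python
def sunny16_shutter(iso: float, f_number: float) -> float:
    """Sunny-16 rule: at f/16 in bright sun, shutter time ~ 1/ISO seconds.
    Scale by the square of the aperture ratio for other f-numbers."""
    return (1 / iso) * (f_number / 16) ** 2

# 8x10 field camera: ISO 400 film at f/64 on a bright day.
print(1 / sunny16_shutter(400, 64))   # 25.0 -> about 1/25 s

# Hypothetical 22MP sensor at ISO 12 and f/2.8 (DoF-equivalent to LF f/22).
print(1 / sunny16_shutter(12, 2.8))   # ~392 -> about 1/400 s
```

Both results land in the same ballpark as the 1/15th and 1/250th figures quoted in the post, given that "bright day" is only a one-stop-accurate premise.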
Logged

Ray

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 10365
The D60 and CanThe D60 & Canon EF 1.4x ll Extender
« Reply #9 on: February 25, 2003, 09:27:44 pm »

Quote
[So binned quad photosites willl be twice as bad as a single bigger site, but only if the noise level per photo-site (in typical electron number) is the same;
BJL,
Not sure that this is right. There's some interesting information on binning at http://www.roperscientific.com/library_enc_binning.shtml

As I understand it, there are three broad sources of noise.

1. Photon noise, which Samirkharusi referred to (due to fluctuations in the photon arrival rate at a given point - if only these particles would behave themselves!!)

2. Dark Current noise, mainly thermally generated stray electrons that can be reduced by cooling.

3. 'Read' noise induced mainly from the on-chip preamplifier as the electron charge is read.

Binning reduces the read noise but not the other two types of noise, and it's easy to see why: binning four photosites means one 'read' as opposed to four, so the read-noise penalty is paid once instead of four times. Binning 16 photosites requires only one reading instead of 16.

So the question arises, when is read noise a significant proportion of the total noise? Answer: in low light conditions.
For example, let's say only 9 photons impinge upon a photosite. Photon noise, according to the Einsteinian square-root law, is 3 electrons. Dark noise is perhaps 1 electron. Read noise is perhaps 5 electrons. Independent noise sources add in quadrature, so the total noise is sqrt(9 + 1 + 25), about 5.9 electrons - nearly as large as the signal itself, which is all but lost in the noise. Now let's try 16x binning. The combined signal is 16 x 9 = 144 photons, and its photon noise is sqrt(144) = 12 electrons. The dark noise of 16 sites combines in quadrature to sqrt(16) x 1 = 4 electrons, but read noise is still only 5, since there is a single readout. Total noise is sqrt(144 + 16 + 25), about 13.6 electrons, for a signal-to-noise ratio of roughly 10.6 instead of 1.5. Now that's a worthwhile improvement.
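One way to put this shadow-signal example into code, combining the independent noise sources as the root-sum-of-squares (which is how uncorrelated noises add); the specific electron counts are just the illustrative numbers from above:

```python
import math

def snr(signal, n_sites, photon_noise_per_site, dark_per_site, read_noise):
    """S/N for n_sites binned photosites read out as one value.
    Independent noise sources add in quadrature; only the read noise
    is paid once per readout rather than once per site."""
    photon = math.sqrt(n_sites) * photon_noise_per_site
    dark = math.sqrt(n_sites) * dark_per_site
    total_noise = math.sqrt(photon**2 + dark**2 + read_noise**2)
    return signal / total_noise

# 9 e- per site, photon noise 3 e-, dark 1 e-, read 5 e- per readout.
print(round(snr(9, 1, 3.0, 1.0, 5.0), 2))      # 1.52: single site, lost in noise
print(round(snr(144, 16, 3.0, 1.0, 5.0), 2))   # 10.59: 16x binned
```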

However, it seems to me that for binning to be really useful it has to be selective. There's no advantage in reducing the overall resolution of the image in order to achieve more detail in the shadows. We need a chip which is able to bin almost instantaneously, and on demand, only those areas of the sensor which correlate with the shadow areas of the image. I feel as though I'm in Star Trek territory here. I've got no idea if this is possible or whether something similar is already being done.

On the issue of S/N and DR of the smaller pixel, I think you're right. A photosite can hold only a certain number of electrons (or a certain charge) depending on its size. A 10 litre bucket cannot hold more than 10 litres, and it may not even be advisable to fill it to the top. But there has to be some way around this. Continuing with the analogy of a water bucket for each photosite, the problem, it seems to me, is that many of the buckets, for the average image, are going to be less than half full. Some are going to be virtually empty and some are going to be overflowing or close to it. What a waste!

I imagine there's an optimum fill level for the bucket which achieves maximum 'linear' signal-to-noise - say 75% full. Now, as Moore's law continues to operate (and remember that a huge advantage of the CMOS chip is that it uses normal computer fabrication processes, which allows all sorts of add-on processing features to be included on the sensor), I envisage that it might eventually be possible to give each photosite a variable sensitivity that changes automatically (and almost instantaneously) according to the intensity of light that falls upon it, so that each bucket is at least reasonably full. The 'real' information about the image would then consist of a fairly narrow range of variability in the fill level of the buckets, plus very specific information about the changes in the sensitivity of individual photosites. In the process of decoding this information and downloading the image, the compressed levels would be restored in a very precise manner at the pixel level, from the recorded data for each pixel's individual sensitivity.
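The variable-sensitivity 'bucket' idea described above amounts to per-pixel companding: record each pixel's fill level together with the gain that produced it, then divide the gain back out. A toy sketch (entirely hypothetical, mirroring the speculation above rather than any real sensor; function names and numbers are invented for illustration):

```python
def encode(photon_flux, gains, full_well=1000.0):
    """Each photosite is assigned a gain so its 'bucket' ends up well
    filled, and the pair (fill level, gain) is recorded per pixel.
    Levels clip at full_well, as a real bucket would overflow."""
    return [(min(flux * g, full_well), g) for flux, g in zip(photon_flux, gains)]

def decode(recorded):
    """Restore the compressed levels from each pixel's recorded gain."""
    return [level / g for level, g in recorded]

flux = [20.0, 400.0, 9000.0]        # deep shadow, midtone, highlight
gains = [40.0, 2.0, 0.0625]         # per-pixel gains, e.g. from a pre-exposure
print(decode(encode(flux, gains)))  # [20.0, 400.0, 9000.0]
```

As long as no bucket clips, the round trip is exact; the engineering problem is choosing the gains before the exposure, hence the pre-exposure discussed below.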

I imagine that for such a system to be practicable, there would have to be a pre-exposure along the lines of the red-eye-reduction pre-flash. The major flaw in such a system, as I see it, is the possibility of movement between the pre-exposure and the actual exposure. If one keeps the pre-exposure very short, say 1/1000th sec, it will only provide details about the highlight situation. To get pre-warning about the shadow situation would require a much longer pre-exposure, allowing for the possibility of blurring and smudging as a result of some pixels having an inappropriate sensitivity. On the other hand, we all accept, don't we, that if you want the best results a tripod is often required.

Now, where do I apply for my Nobel prize? (Only kidding!!)

ps. Not sure why, but the Fuji concept of the smaller, less sensitive pixel attached to the larger pixel doesn't appeal to me. It seems clever but not very elegant. What happens to the overspill from the main pixel? How is it contained? Is the main pixel switched off as it reaches saturation, and if not, what about blooming? How is the resolution of the lens compatible with this smaller pixel, which, if we're talking about P&S cameras, is likely to be very, very small?
Logged

BJL

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 6600
The D60 and CanThe D60 & Canon EF 1.4x ll Extender
« Reply #10 on: February 25, 2003, 11:22:27 pm »

Quote
ps. ... the Fuji concept of the smaller less sensitive pixel attached to the larger pixel ...

a) What happens to the overspill from the main pixel? How is it contained? Is the main pixel switched off as it reaches saturation and if not, what about blooming?

b) How is the resolution of the lens compatible with this smaller pixel which, if we're talking about P&S cameras, is likely to be very, very small?
Thanks for the reference on binning; I will need some time to read and understand it, and to combine that with other S/N reduction ideas that come just from averaging several measurements with random noise in each.

Selectively binning on low light areas sounds almost optimal if it is doable.


On the Fuji idea:

a) I have heard that some CCDs avoid blooming with a current-overflow mechanism, so that when a photosite is full, the extra electrons get drained away safely without overflowing onto nearby photosites; maybe Fuji does that.

b) Fuji's diagrams suggest that the small and large photosites sit under a single microlens occupying the octagonal region where there used to be just one photosite on earlier SuperCCDs, and read the same light blurred out over both of them: the resolution length scale is still the combined size of the big/small pair, not the size of the smaller photodiode.
Logged

wolfy

  • Jr. Member
  • **
  • Offline Offline
  • Posts: 93
The D60 and CanThe D60 & Canon EF 1.4x ll Extender
« Reply #11 on: March 01, 2003, 03:53:00 pm »

( to Samir, et al)

HELLOOOOO! (tap, tap, ...is this megaphone working?)

HOW DO YOU GUYS BREATHE UP-THERE?

Joe Lay-person, here.

I don't know whether-or-not I'm learning anything, ...but I'm sure as H***  impressed!  

Thanks for posting this stuff!!

Larry
Logged