
Author Topic: Big Sensors versus Small Sensors  (Read 10113 times)

cunim

  • Full Member
  • ***
  • Offline
  • Posts: 130
Big Sensors versus Small Sensors
« Reply #20 on: March 11, 2010, 11:34:30 am »

Dynamic range is complex because it is so dependent upon the detection situation. For example, at high light throughput levels lens flare becomes limiting. At low levels, the detector read noise is limiting for short exposures and shot noise for long exposures. This all interacts with wavelength, the angle at which rays strike the detector, detector surface treatments, and so on. Given all this, just how are we to set up the situation to yield a meaningful (replicable and consistent) DR value?

Engineers use a definition that removes uncontrolled variables as much as possible. It allows them to compare detectors, and entire optical trains, with some degree of objectivity. Note I say only some degree, because at high bit depths we are looking at very small differences that tend to fall within the noise floor of the measurement technology. There will be some uncertainty.
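As a concrete sketch of such an engineering figure: the most common definition is full-well capacity divided by the read-noise floor. The numbers below are assumed for illustration only, not any particular sensor's specs.

```python
import math

def engineering_dr(full_well_e, read_noise_e):
    """Engineering dynamic range: full-well capacity over the read-noise
    floor, expressed in stops (EV) and in dB."""
    ratio = full_well_e / read_noise_e
    return math.log2(ratio), 20 * math.log10(ratio)

# Assumed, illustrative numbers (electrons) -- not measured specs.
stops, db = engineering_dr(full_well_e=60000, read_noise_e=5)
print(f"{stops:.2f} stops, {db:.1f} dB")
```

Note how deliberately narrow this definition is: it says nothing about flare, blooming, or perceived image quality, which is exactly cunim's point.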

Point is, don't expect an engineering DR figure to have much to do with your photography.  The DR number reflects rigorously defined measurement protocols and specifically excludes subjective factors such as perceived image quality.  

The DR figure is very useful with well-defined applications. For example, if you needed an imaging system for low light you would know to select a detector with high QE, slow readout for low read noise, and cooling to suppress dark-current noise. Chip packages are well specified on these parameters, so you could use DR values to make a direct comparison. Similarly, the lens should have low internal fluorescence/reflectance and various other suitable characteristics. Again, these data are available, and scientists select optical systems on that basis every day.

The problem is that we do not have a clear definition of what we need for photography - nor do we have an accepted measurement protocol for DR. Therefore, it is not entirely reasonable to expect to be able to look up a CCD or CMOS data set that correlates well with perceived image quality. We are left to make subjective decisions and tend to depend on reviews from trusted sources. We are, therefore, doomed to the endless arguments.
Logged

joofa

  • Sr. Member
  • ****
  • Offline
  • Posts: 544
Big Sensors versus Small Sensors
« Reply #21 on: March 11, 2010, 11:47:41 am »

Quote from: Ronny Nilsen
Science is full of examples of how even large groups of honest and good scientists can fool themselves.
Ronny

Correct interpretation of data has always been a problem for many working scientists who are not properly trained in statistical reasoning and pattern recognition. I shall give an example, with hopefully close-to-realistic numbers. Suppose that the prevalence of HIV in a population is 0.01%, that an HIV test is 99.8% sensitive (i.e., it comes back positive for 99.8% of the people we know to be infected), and that the false positive rate is 0.01% (i.e., 0.01% of uninfected people are wrongly identified as having HIV). Then the probability that a person really has HIV, given that the test says so, is actually only about 50%. I.e., you could flip a coin to decide whether that person has HIV! However, don't get scared, because the probability that a person does not have HIV when the test indicates no HIV is about 99.999%, i.e., you can be almost certain that that person is clear.
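Those figures can be checked directly with Bayes' theorem; a quick sketch using the numbers from the post:

```python
# Verify the HIV-test example with Bayes' theorem (values from the post).
prevalence = 0.0001        # 0.01% of the population infected
sensitivity = 0.998        # P(test positive | infected)
false_positive = 0.0001    # P(test positive | not infected)

p_pos = prevalence * sensitivity + (1 - prevalence) * false_positive
p_infected_given_pos = prevalence * sensitivity / p_pos

p_clear_given_neg = ((1 - prevalence) * (1 - false_positive) /
                     ((1 - prevalence) * (1 - false_positive) +
                      prevalence * (1 - sensitivity)))

print(f"P(infected | positive) = {p_infected_given_pos:.3f}")
print(f"P(healthy  | negative) = {p_clear_given_neg:.7f}")
```

The positive predictive value comes out at roughly 0.5, exactly as claimed: with so few true infections, the false positives are about as numerous as the true positives.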

The tragic case of Sally Clark in England is an example: she was wrongly convicted of murdering her children on the strength of statistical reasoning that the Royal Statistical Society later identified as a misuse of statistics in court proceedings.

Many signal processing algorithms are special cases of Bayesian statistics - for example, our beloved Richardson-Lucy approach to image sharpening (deconvolution). However, many of these algorithms make simplifying assumptions that change how the resulting data should be interpreted.
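For the curious, a minimal sketch of the Richardson-Lucy iteration; real implementations add regularization and careful edge handling, and the 3x3 box PSF used here is purely illustrative:

```python
import numpy as np
from scipy.signal import convolve2d

def richardson_lucy(observed, psf, iterations=50):
    """Minimal Richardson-Lucy deconvolution (the Bayesian iteration
    mentioned above), assuming a known, normalized PSF."""
    estimate = np.full_like(observed, observed.mean())
    psf_flipped = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = convolve2d(estimate, psf, mode="same", boundary="symm")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= convolve2d(ratio, psf_flipped, mode="same", boundary="symm")
    return estimate

# Toy demo: blur a point source with a box PSF, then recover it.
psf = np.ones((3, 3)) / 9.0
scene = np.zeros((9, 9)); scene[4, 4] = 1.0
blurred = convolve2d(scene, psf, mode="same", boundary="symm")
restored = richardson_lucy(blurred, psf, iterations=50)
print(restored[4, 4])  # climbs back toward 1.0 as iterations grow
```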

Logged
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
Big Sensors versus Small Sensors
« Reply #22 on: March 11, 2010, 12:10:22 pm »

Quote from: joofa
Correct interpretation of data has always been a problem for many working scientists who are not properly trained in statistical reasoning and pattern recognition. I shall give an example, with hopefully close-to-realistic numbers. Suppose that the prevalence of HIV in a population is 0.01%, that an HIV test is 99.8% sensitive (i.e., it comes back positive for 99.8% of the people we know to be infected), and that the false positive rate is 0.01% (i.e., 0.01% of uninfected people are wrongly identified as having HIV). Then the probability that a person really has HIV, given that the test says so, is actually only about 50%. I.e., you could flip a coin to decide whether that person has HIV! However, don't get scared, because the probability that a person does not have HIV when the test indicates no HIV is about 99.999%, i.e., you can be almost certain that that person is clear.
That is a good demonstration of Bayesian analysis. In practice, clinicians screen for HIV with an ELISA test, which is sensitive but not that specific. If the ELISA is positive, the diagnosis is confirmed with a more specific Western Blot test.
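The two-stage screening described here works because the posterior from the first test becomes the prior for the second. A sketch with illustrative accuracy figures (not real assay specifications), assuming the two tests err independently:

```python
def posterior(prior, sensitivity, specificity):
    """P(disease | positive test) via Bayes' rule."""
    p_pos = prior * sensitivity + (1 - prior) * (1 - specificity)
    return prior * sensitivity / p_pos

# Illustrative accuracies only -- not real ELISA/Western blot specs.
prior = 0.0001                                     # population prevalence
after_elisa = posterior(prior, 0.998, 0.999)       # sensitive screen
after_blot = posterior(after_elisa, 0.95, 0.9999)  # specific confirmation
print(f"after ELISA: {after_elisa:.3f}, after Western blot: {after_blot:.4f}")
```

With these made-up numbers, a positive screen alone only raises the probability of disease to around 9%, while a positive confirmation on top of it pushes it above 99% - which is the whole logic of screen-then-confirm.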

Quote from: joofa
The tragic case of Sally Clark in England is an example: she was wrongly convicted of murdering her children on the strength of statistical reasoning that the Royal Statistical Society later identified as a misuse of statistics in court proceedings.

Many signal processing algorithms are special cases of Bayesian statistics - for example, our beloved Richardson-Lucy approach to image sharpening (deconvolution). However, many of these algorithms make simplifying assumptions that change how the resulting data should be interpreted.
An excellent example of the misuse of statistics. One of my professors used to warn us, "Figures do not lie, but liars can figure". Fortunately, the death penalty was not imposed on Ms. Clark. Human perception is not that reliable, either in criminology or in the evaluation of photographic images; in both fields, objective measurements are needed. In the USA we have numerous examples of persons sentenced to death for murder after a positive eyewitness identification who were later proven innocent by DNA analysis. Although the consequences of faulty perception are not as grave when evaluating the dynamic range of photographs as when identifying murder suspects, an objective scientific measurement is still desirable.
« Last Edit: March 11, 2010, 12:12:10 pm by bjanes »
Logged

Ray

  • Sr. Member
  • ****
  • Offline
  • Posts: 10365
Big Sensors versus Small Sensors
« Reply #23 on: March 11, 2010, 08:51:59 pm »

Quote from: cunim
Dynamic range is complex because it is so dependent upon the detection situation. For example, at high light throughput levels lens flare becomes limiting. At low levels, the detector read noise is limiting for short exposures and shot noise for long exposures. This all interacts with wavelength, the angle at which rays strike the detector, detector surface treatments, and so on. Given all this, just how are we to set up the situation to yield a meaningful (replicable and consistent) DR value?

Engineers use a definition that removes uncontrolled variables as much as possible. It allows them to compare detectors, and entire optical trains, with some degree of objectivity. Note I say only some degree, because at high bit depths we are looking at very small differences that tend to fall within the noise floor of the measurement technology. There will be some uncertainty.

Point is, don't expect an engineering DR figure to have much to do with your photography.  The DR number reflects rigorously defined measurement protocols and specifically excludes subjective factors such as perceived image quality.  

The DR figure is very useful with well-defined applications. For example, if you needed an imaging system for low light you would know to select a detector with high QE, slow readout for low read noise, and cooling to suppress dark-current noise. Chip packages are well specified on these parameters, so you could use DR values to make a direct comparison. Similarly, the lens should have low internal fluorescence/reflectance and various other suitable characteristics. Again, these data are available, and scientists select optical systems on that basis every day.

The problem is that we do not have a clear definition of what we need for photography - nor do we have an accepted measurement protocol for DR. Therefore, it is not entirely reasonable to expect to be able to look up a CCD or CMOS data set that correlates well with perceived image quality. We are left to make subjective decisions and tend to depend on reviews from trusted sources. We are, therefore, doomed to the endless arguments.


Speaking personally, I've never had a problem in determining subjectively whether one camera produces a higher DR than another. It was always apparent to me that images from negative film had a higher DR than images from slide film. Likewise, it was very apparent that my first P&S camera (the Sony T1) had worse DR than my Canon D60 DSLR, and that my second DSLR (the 20D) had much better DR than my D60 above base ISO, but not much difference at base ISO.

It was also apparent that my first full frame DSLR, the 5D, did not have the expected increase in DR compared with the 20D. In fact it seemed worse. The deepest shadows displayed ugly banding. I returned the unit for a replacement which I considered better but still not entirely satisfactory. The chief advantages of the 5D were the flow-on effects of the larger sensor with substantially greater pixel count.

At the same image or print size as the 20D there seemed to be better color, lower noise and better tonal range, plus higher resolution at big print sizes and, as a consequence, better DR at those print sizes.

Your point about the engineering specification for DR having little to do with the perception of DR in the photograph seems only partly true to me. The question that should be asked is: is there any reason why such so-called engineering specifications (as in the DXO figures) are not valid for purposes of comparison? I mean, we're not talking about the sensor manufacturer's engineering specification for the bare sensor, unattached to a camera.

The remarkable thing about the DXO results for the D3X is the claim that the D3X has 1 1/3 stops higher DR than the P65+ at the pixel level. The D3X pixel is exactly the same size as the P65+ pixel, yet Nikon have employed such advanced technology that their pixel, despite probably having a smaller photon-collecting photodiode and therefore collecting less light, actually has a higher DR.

The actual figure that DXO specify, 12.84 EV, might be unrealistically high from the perspective of the photographer and the viewer. The image detail and quality in that 13th stop might be just awful and totally useless, and would therefore normally be clipped to black during processing, except in artistic shots like this one, which attempts to turn the ugly banding of the 5D into beauty; an accident when the flash did not fire.

[attachment=20834:Temple_b..._Ayudhya.jpg]

However, for the purposes of comparison, one would examine the degree of awfulness in that 13th stop. According to DXO, the P65+ image (of an identical scene and lighting, of course) would be even more awful in the 13th stop than the D3X's, and no doubt more awful in the 12th and 11th stops.

The question then becomes, at what stop is the detail and quality useful so that it could be preserved in the print instead of being clipped to black? Perhaps in the 9th stop? Real world comparisons should examine such issues. The fact that the D3X pixel is the same size as the P65+ pixel makes such comparisons very easy. I'm really surprised no-one's taken the trouble to compare the D3X with the P65+, at the pixel level, to either confirm or refute the DXO claims.
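One way to put a rough number on "which stop is still useful" is a simple shot-plus-read-noise model: in each stop below clipping the signal halves, and its SNR falls accordingly. The full-well and read-noise figures below are assumed for illustration only, not measured from the D3X or P65+:

```python
import math

full_well, read_noise = 60000, 5   # electrons; assumed, illustrative values

print("stop below clipping -> SNR")
for stop in range(1, 14):
    signal = full_well / 2 ** stop
    # Shot noise grows as sqrt(signal); read noise adds in quadrature.
    snr = signal / math.sqrt(signal + read_noise ** 2)
    print(f"  {stop:2d}: {snr:6.1f}")
```

By this rough model the 13th stop sits barely above the noise (SNR near 1), while around the 9th stop the SNR is near 10, a figure sometimes used as a rule-of-thumb threshold for acceptable quality - which lines up with the guess above.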

Because the pixel size is the same, all one has to do is use the same focal length of lens on both cameras, shoot the same 'high SBR' scene with the same lighting, from the same position, and then crop the P65+ image to the same FOV as the D3X image (and the same aspect ratio). Both images will then have the same file size and be comprised of the same number of pixels. DR comparison would be easy, provided the exposures are correct with regard to ETTR. What could be easier! There's not even any need to adjust the f-stop for equal DoF, always a contentious issue.
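The crop arithmetic above is easy to express in code. The pixel dimensions below are my assumptions for the two cameras and should be checked against actual raw files before relying on them:

```python
def center_crop(width, height, target_w, target_h):
    """Center-crop pixel dimensions; with equal pixel pitch and equal
    focal length this matches the smaller camera's field of view."""
    x0 = (width - target_w) // 2
    y0 = (height - target_h) // 2
    return x0, y0, x0 + target_w, y0 + target_h

# Assumed pixel dimensions (verify against actual raw files):
# P65+ ~ 8984 x 6732, D3X ~ 6048 x 4032.
print(center_crop(8984, 6732, 6048, 4032))
```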

Okay! Okay! Lens flare. I must confess that I didn't realise that lens flare could be such a limiting factor on DR. We've all experienced the annoying effects of lens flare when the camera angle is too close to the direct rays of the sun, but the fact that lens flare may reduce DR even when the sun isn't in sight should be taken into consideration when comparing the DR of different-format cameras that use different lenses.

In the light of the information provided in the other current thread on this issue (Dynamic Range and DXO), it would seem that any thorough comparison between the D3X and P65+ should first examine the flare characteristics of the lenses used with both cameras.

I've long been an advocate of specific lens testing by the manufacturer (or contractor) of each lens sold, because we all know that lens quality variability amongst copies of the same model of lens is an issue. I would now add a further requirement for a 'flare test' of such individual lens copies.
Logged

John R Smith

  • Sr. Member
  • ****
  • Offline
  • Posts: 1357
  • Still crazy, after all these years
Big Sensors versus Small Sensors
« Reply #24 on: March 12, 2010, 04:54:46 am »

Quote from: Ray
Okay! Okay! Lens flare. I must confess that I didn't realise that lens flare could be such a limiting factor on DR. We've all experienced the annoying effects of lens flare when the camera angle is too close to the direct rays of the sun, but the fact that lens flare may reduce DR even when the sun isn't in sight should be taken into consideration when comparing the DR of different-format cameras that use different lenses.

In the light of the information provided in the other current thread on this issue (Dynamic Range and DXO), it would seem that any thorough comparison between the D3X and P65+ should first examine the flare characteristics of the lenses used with both cameras.

I've long been an advocate of specific lens testing by the manufacturer (or contractor) of each lens sold, because we all know that lens quality variability amongst copies of the same model of lens is an issue. I would now add a further requirement for a 'flare test' of such individual lens copies.

I would reckon that lens flare might have a much bigger impact than you might think. I did some tests (on film) a couple of years ago using two apparently identical Zeiss 80mm Planars (both silver C lenses ca 1971), off a tripod on a static subject outdoors under diffused overcast conditions (no deep shadows). I was using the 'Blad Pro Hood, so no direct light fell on the lens. One lens was T* coated, the other had the older single-coating. When printed, there was no difference in definition, but a noticeable difference in DR in favour of the T* - about half a stop, I would guess, between highlight and shadows. So I don't really see how you can compare camera sensor DR without using exactly the same lens on both, which is usually impossible.

John
« Last Edit: March 12, 2010, 05:06:03 am by John R Smith »
Logged
Hasselblad 500 C/M, SWC and CFV-39 DB

image66

  • Full Member
  • ***
  • Offline
  • Posts: 136
Big Sensors versus Small Sensors
« Reply #25 on: March 12, 2010, 02:21:18 pm »

I believe that lens flare will actually contribute to an effective INCREASE in usable dynamic range of the scene.

A bright sunny day will give us extremes in exposures from highlights to deep shadows. If you expose for the highlights, the shadows will drop below the sensitivity curves of the sensor or film. But with a bit of lens flare, you will raise the brightness level of the shadows to fall into the acceptance range of the sensor or film. (lens flare being internal reflections not resulting in rings, ghosts and other artifacts)

In the darkroom, we use "pre-flashing" to keep highlights from blowing out in B&W prints, and pre-flashing is also a tried and true technique with large-format film to recover details in the shadows.

This is also why older single-coated lenses are usually considered better for B&W photography than the later, highly flare-resistant, contrasty lenses.

With digital imaging, we have a rather narrow range of luminance values which can be recorded, and in principle no toe or shoulder to speak of. Some cameras, though, like my Olympus E-1 (which I performed the tests with), have a distinct toe, whereas my old Minolta A1 had a distinct shoulder, which can give the illusion of increased dynamic range when in reality the "straight-line sections" of the response/capture curves are similar, if not identical. Because of this narrow range of recordable values, it is our responsibility as photographers to modify the lighting, or the scene reaching the camera, to restrict the values to those which can be recorded by the medium.

Lower-contrast lenses are a means to "pre-sensitize" the sensor/film, giving the medium an extended toe. If you subscribe to the "expose to the right" method, your highlights don't change, but the lower-contrast lens will pull the shadows up into a range either above the noise floor, or at least to a point where posterization isn't a problem.

Just as an aside: some cameras, such as the Olympus E-1 and the Kodak 14n, induced random noise on the image data to provide smoother tonal and brightness transitions throughout the range of exposure values. Other cameras provide ultra-clean images but have a non-linear addition of noise in the low values. Unfortunately, this is electronic noise which becomes visible when boosting the shadows or the ISO. The Kodak/Olympus method means that you'll have noise even in the highest values, but it is consistent throughout the entire range, just like film.
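The "induced noise" described here is essentially dither: adding random noise before quantization so that intermediate tones survive as local averages instead of snapping to the nearest level. A minimal sketch (the 16-level quantizer and the one-LSB dither amplitude are illustrative choices, not any camera's actual parameters):

```python
import random

def quantize(values, levels=16, dither=0.0):
    """Quantize 0..1 values to `levels` steps, optionally adding uniform
    random dither first (the injected noise described above)."""
    out = []
    for v in values:
        v = min(max(v + random.uniform(-dither, dither), 0.0), 1.0)
        out.append(round(v * (levels - 1)) / (levels - 1))
    return out

random.seed(1)
tone = 0.43                                       # between two quantizer steps
plain = quantize([tone] * 10000)                  # always snaps to 0.4
smooth = quantize([tone] * 10000, dither=1 / 15)  # +/- one LSB of dither
print(sum(plain) / 10000)    # posterized: the 0.43 tone is lost
print(sum(smooth) / 10000)   # dithered: the average recovers ~0.43
```

Without dither the in-between tone collapses to a single level (posterization); with dither, the tone is preserved in the average at the cost of per-pixel noise - the trade-off the post describes.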

From what I've been seeing, it looks to me like some of the MFDB camera systems also have induced noise on the image data to mask the nasties as well as providing superior micro-contrast.

From the beginning of digital time, I've maintained that CMOS images just didn't look quite right. There was something off which is difficult to explain or describe. As CMOS isn't typically a technology used in the MFDB systems, what we may be seeing is NOT the difference between dynamic range, pixel pitch or even formats, but we're seeing the difference between a CMOS imager and a CCD imager.

It is very difficult to know for sure, as few imager manufacturers will publish specifications for their chips. For all we know, the CMOS imagers are unable to capture some colors or brightness values, which are instead being derived. It's hard to say, since these are secrets guarded from us photographers. There are obviously differences, though. When working in Lightroom or your converter of choice, we have to bend the brightness curves to get one camera to match another. This bending of the curves amounts to stealing bits (dynamic range and micro-contrast) from one part of the curve to reassign to another part. One thing is guaranteed: no two imager designs have the same response curves, and the support electronics in the camera reassign the values to a more consistent form BEFORE the raw file is written. Raw isn't raw.

Ken Norton
www.zone-10.com
« Last Edit: March 12, 2010, 02:25:01 pm by image66 »
Logged

ErikKaffehr

  • Sr. Member
  • ****
  • Offline
  • Posts: 11311
    • Echophoto
Big Sensors versus Small Sensors
« Reply #26 on: March 12, 2010, 02:36:55 pm »

Hi,

I see your point, but I don't agree. The effect you see is probably real. It's just that I prefer to have a high-contrast, flare-resistant lens and a sensor capable of high DR, and to tame the "raw image" in development.

The sensor is essentially a linear device. Any toe and shoulder characteristics are essentially added in processing, either in-camera for JPEG or in "raw" processing after the fact.

Best regards
Erik




Quote from: image66
I believe that lens flare will actually contribute to an effective INCREASE in usable dynamic range of the scene.

A bright sunny day will give us extremes in exposures from highlights to deep shadows. If you expose for the highlights, the shadows will drop below the sensitivity curves of the sensor or film. But with a bit of lens flare, you will raise the brightness level of the shadows to fall into the acceptance range of the sensor or film. (lens flare being internal reflections not resulting in rings, ghosts and other artifacts)

In the darkroom, we use "pre-flashing" to keep highlights from blowing out in B&W prints, and pre-flashing is also a tried and true technique with large-format film to recover details in the shadows.

This is also why older single-coated lenses are usually considered better for B&W photography than the later, highly flare-resistant, contrasty lenses.

With digital imaging, we have a rather narrow range of luminance values which can be recorded, and in principle no toe or shoulder to speak of. Some cameras, though, like my Olympus E-1 (which I performed the tests with), have a distinct toe, whereas my old Minolta A1 had a distinct shoulder, which can give the illusion of increased dynamic range when in reality the "straight-line sections" of the response/capture curves are similar, if not identical. Because of this narrow range of recordable values, it is our responsibility as photographers to modify the lighting, or the scene reaching the camera, to restrict the values to those which can be recorded by the medium.

Lower-contrast lenses are a means to "pre-sensitize" the sensor/film, giving the medium an extended toe. If you subscribe to the "expose to the right" method, your highlights don't change, but the lower-contrast lens will pull the shadows up into a range either above the noise floor, or at least to a point where posterization isn't a problem.

Just as an aside: some cameras, such as the Olympus E-1 and the Kodak 14n, induced random noise on the image data to provide smoother tonal and brightness transitions throughout the range of exposure values. Other cameras provide ultra-clean images but have a non-linear addition of noise in the low values. Unfortunately, this is electronic noise which becomes visible when boosting the shadows or the ISO. The Kodak/Olympus method means that you'll have noise even in the highest values, but it is consistent throughout the entire range, just like film.

From what I've been seeing, it looks to me like some of the MFDB camera systems also have induced noise on the image data to mask the nasties as well as providing superior micro-contrast.

From the beginning of digital time, I've maintained that CMOS images just didn't look quite right. There was something off which is difficult to explain or describe. As CMOS isn't typically a technology used in the MFDB systems, what we may be seeing is NOT the difference between dynamic range, pixel pitch or even formats, but we're seeing the difference between a CMOS imager and a CCD imager.

It is very difficult to know for sure, as few imager manufacturers will publish specifications for their chips. For all we know, the CMOS imagers are unable to capture some colors or brightness values, which are instead being derived. It's hard to say, since these are secrets guarded from us photographers. There are obviously differences, though. When working in Lightroom or your converter of choice, we have to bend the brightness curves to get one camera to match another. This bending of the curves amounts to stealing bits (dynamic range and micro-contrast) from one part of the curve to reassign to another part. One thing is guaranteed: no two imager designs have the same response curves, and the support electronics in the camera reassign the values to a more consistent form BEFORE the raw file is written. Raw isn't raw.

Ken Norton
www.zone-10.com
Logged
Erik Kaffehr
 

cunim

  • Full Member
  • ***
  • Offline
  • Posts: 130
Big Sensors versus Small Sensors
« Reply #27 on: March 12, 2010, 03:23:17 pm »

Quote from: image66
I believe that lens flare will actually contribute to an effective INCREASE in usable dynamic range of the scene.

A bright sunny day will give us extremes in exposures from highlights to deep shadows. If you expose for the highlights, the shadows will drop below the sensitivity curves of the sensor or film. But with a bit of lens flare, you will raise the brightness level of the shadows to fall into the acceptance range of the sensor or film. (lens flare being internal reflections not resulting in rings, ghosts and other artifacts)


Ken, I think you provide an excellent example of why working photographers should not obsess about dynamic range. The flare effect that you describe is a compression of DR, not an expansion. Never mind. There are worse things going on in real-life imaging. For example, pixels are not independent of each other. If you shine a bright light on a pixel to the left of center, the pixels at center will show a rise. This type of local blooming effect - which you could view as a degradation of local contrast - severely limits DR under some conditions. There are lots of problems in detector application, and the solutions vary according to what the end user wants. A portrait photographer and a microscopist have completely different needs. My point is that we can't allow such situational factors to affect what should be a basic performance measurement that allows us to compare devices. That comparison happens before the subjective part.
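The compression point can be illustrated by modeling veiling flare as a uniform pedestal: a small fraction of the average scene light scattered evenly across the frame. The 12-stop scene and the 0.5% flare figure below are illustrative assumptions, not measurements:

```python
from math import log2

# Veiling flare modeled as a uniform pedestal added to every pixel.
highlight, shadow = 1.0, 1.0 / 4096       # a 12-stop scene
flare = 0.005 * (highlight + shadow) / 2  # 0.5% of mean light as glare

before = log2(highlight / shadow)
after = log2((highlight + flare) / (shadow + flare))
print(f"scene contrast: {before:.1f} stops -> {after:.1f} stops at the sensor")
```

Note that the pedestal does lift the shadows, which may help them clear a sensor's noise floor - Ken's observation - but the contrast actually delivered to the sensor shrinks by several stops, which is cunim's point: flare compresses the scene's DR rather than expanding the capture medium's.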

It is your eye which tells you a camera is doing what you want. In your case that appears to be a sort of integral gain-riding, and that could certainly expand exposure latitude. My point is that, unless photographers can specify exactly what they want, they will have a great deal of trouble establishing camera usability from engineering measurements. Scientific imagers tend to specify. Creative photographers, not so much. Instead they end up comparing what they like as much as what different cameras are doing, and there is nothing wrong with that. It is inherently meaningful, just difficult to quantify.

By the way, I do not think there are any secrets or conspiracies in the detector market.  You could probably find the engineering specs for any extant chip package if you look in the right places.
Logged

JeffKohn

  • Sr. Member
  • ****
  • Offline
  • Posts: 1668
    • http://jeffk-photo.typepad.com
Big Sensors versus Small Sensors
« Reply #28 on: March 12, 2010, 04:16:42 pm »

Quote from: John R Smith
I would reckon that lens flare might have a much bigger impact than you might think. I did some tests (on film) a couple of years ago using two apparently identical Zeiss 80mm Planars (both silver C lenses ca 1971), off a tripod on a static subject outdoors under diffused overcast conditions (no deep shadows). I was using the 'Blad Pro Hood, so no direct light fell on the lens. One lens was T* coated, the other had the older single-coating. When printed, there was no difference in definition, but a noticeable difference in DR in favour of the T* - about half a stop, I would guess, between highlight and shadows. So I don't really see how you can compare camera sensor DR without using exactly the same lens on both, which is usually impossible.

John
I don't doubt any of that. But I still think it's a mistake to generalize and say that MF lenses have less flare and more DR than DSLR lenses.
Logged
Jeff Kohn

PierreVandevenne

  • Sr. Member
  • ****
  • Offline
  • Posts: 512
    • http://www.datarescue.com/life
Big Sensors versus Small Sensors
« Reply #29 on: March 12, 2010, 06:53:43 pm »

Quote from: cunim
By the way, I do not think there are any secrets or conspiracies in the detector market.  You could probably find the engineering specs for any extant chip package if you look in the right places.

I'd like to have a pointer to the official (not measured or reverse engineered) specs of the Canon sensors. Any links you are willing to share?
Logged

Ray

  • Sr. Member
  • ****
  • Offline
  • Posts: 10365
Big Sensors versus Small Sensors
« Reply #30 on: March 12, 2010, 07:59:51 pm »

Quote from: John R Smith
I would reckon that lens flare might have a much bigger impact than you might think. I did some tests (on film) a couple of years ago using two apparently identical Zeiss 80mm Planars (both silver C lenses ca 1971), off a tripod on a static subject outdoors under diffused overcast conditions (no deep shadows). I was using the 'Blad Pro Hood, so no direct light fell on the lens. One lens was T* coated, the other had the older single-coating. When printed, there was no difference in definition, but a noticeable difference in DR in favour of the T* - about half a stop, I would guess, between highlight and shadows. So I don't really see how you can compare camera sensor DR without using exactly the same lens on both, which is usually impossible.

John


John,
There's an interesting article on the Imatest site that tests for veiling glare: http://www.imatest.com/docs/veilingglare.html#intro (thanks to Bill Janes for providing that link on the other thread, Dynamic Range and DXO).

The impression I get is that prime lenses tend to have less glare than zooms, and that removing any filter, such as the usual protective UV filter, may help reduce glare. I always used to buy a UV filter with a new lens and keep it permanently attached to protect the lens from scratches, something amateurs tend to do because their lenses are so precious, like jewels.

However, for the past few years I've adopted the practice of never having a filter attached unless I need one for a particular effect, such as an ND filter or polarizer when photographing rivers and waterfalls. I believe that modern lens coatings are so tough and hard there's really no need at all for a protective filter.

It's possible that DXO's DR results might be influenced slightly by the veiling-glare characteristics of the particular lens attached to the camera under test. However, it is very unlikely that DXO would test any camera using a 1971 lens with a single coating. As I understand it, DXO either buy or rent the cameras they test, and I imagine they would use the best-quality, modern, standard prime with each camera.

Towards the end of the imatest article there's a short list of some test results for a handful of lenses which are all zooms, with the exception of one prime, the Canon TS-E 90/2.8 which gets the best score.

It so happens I own a copy of the TS-E 90. I'm tempted to do a DR comparison with my 5D, comparing the TS-E 90 with the Canon 24-105 zoom at 90mm, to see just how significant any DR differences might be. I'd like to do it right now, today, but I can't justify spending the rest of the day on such a project when I have so many other urgent tasks to do, such as assembling dining room chairs which were delivered in cardboard cartons with an allen key and a set of 10 bolts for each chair. It takes a fair amount of time manually screwing 60 bolts with washers, using an allen key. There are 6 chairs and the holes don't seem perfectly aligned.

The reason I'm taking the time to write this is for a bit of mental stimulation.  
Logged

cunim

  • Full Member
  • ***
  • Offline Offline
  • Posts: 130
Big Sensors versus Small Sensors
« Reply #31 on: March 12, 2010, 08:03:36 pm »

Quote from: PierreVandevenne
I'd like to have a pointer to the official (not measured or reverse engineered) specs of the Canon sensors. Any links you are willing to share?
My apologies for being unclear.  I was using "chip package" to mean a detector plus support electronics marketed to OEMs.  I had no trouble finding Dalsa's specs for basic things like well capacity, QE, etc. God knows what happens once raw processing gets going in the actual camera/computer, but it is nice to know we are going in there with reasonable specs.

Do we really need chip-level specs?  Photographers seem to want things at a higher level: smooth skies and shadows as opposed to SNR, exposure latitude as opposed to DR, that sort of thing.  The cameras are proprietary systems that deliver data to satisfy what the manufacturers think we want.  If you like the pictures, I suppose whatever is going on to massage the pixel data is good.
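For what it's worth, the engineering DR figure you can pull from chip-level specs is just the ratio of full-well capacity to the read-noise floor, expressed in stops. A minimal sketch, using made-up numbers rather than any actual Dalsa datasheet:

```python
import math

def engineering_dr_stops(full_well_e, read_noise_e):
    """Engineering DR: full-well capacity over read-noise floor, in stops."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical CCD: 60,000 e- full well, 12 e- read noise
print(round(engineering_dr_stops(60000, 12), 1))  # -> 12.3
```

As the earlier post said, this number is rigorous but says nothing about perceived image quality once the camera's raw processing gets involved.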

Hey, film is grossly nonlinear, has very limited DR, and is subject to variable treatment at every step of processing - but we tend to love what it shows.  Eye of the beholder and all that.

Logged

bjanes

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 3387
Big Sensors versus Small Sensors
« Reply #32 on: March 12, 2010, 08:47:30 pm »

Quote from: image66
I believe that lens flare will actually contribute to an effective INCREASE in usable dynamic range of the scene.
Veiling glare will lighten the shadows of a high-DR image and make the image easier to print, but it would be better to use tone mapping to achieve this effect. With truly HDR images, veiling glare limits the DR, as explained in this PDF from Stanford University. Deconvolution can remove some of the glare at the expense of increased noise. An occlusion mask is another way to address the problem. The article gives some useful references. John McCann has an excellent one-hour lecture on the topic on Google Lectures.
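A crude numeric sketch of how a glare pedestal, rather than read noise, ends up limiting usable DR (the model and the numbers are illustrative only, not taken from the Stanford paper):

```python
import math

def usable_dr_stops(full_well_e, read_noise_e, glare_fraction):
    # Veiling glare spreads a fraction of the brightest signal across the
    # frame as a pedestal; shadow detail below that pedestal is washed out.
    glare_e = glare_fraction * full_well_e
    floor_e = max(read_noise_e, glare_e)  # whichever floor is higher wins
    return math.log2(full_well_e / floor_e)

print(round(usable_dr_stops(60000, 12, 0.0), 2))    # read-noise limited: 12.29
print(round(usable_dr_stops(60000, 12, 0.001), 2))  # glare limited: 9.97
```

Even 0.1% veiling glare costs over two stops in this toy model, which is why glare, not sensor noise, dominates for truly HDR scenes.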

Bill
Logged

ErikKaffehr

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 11311
    • Echophoto
Big Sensors versus Small Sensors
« Reply #33 on: March 12, 2010, 11:20:21 pm »

Hi,

No, there is no really good reason to assume that, except possibly that MF lenses used to be more conservative designs: not very fast, and the zooms are few and have narrow ranges. Premium DSLR lenses used to be quite fast.

Best regards
Erik

Quote from: JeffKohn
I don't doubt any of that. But I still think that to generalize and say that MF lenses have less flare and more DR than DSLR lenses is not true.
Logged
Erik Kaffehr
 

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline Offline
  • Posts: 12512
    • http://www.markdsegal.com
Big Sensors versus Small Sensors
« Reply #34 on: March 13, 2010, 05:05:48 pm »

Quote from: BernardLanguillier
I feel that one key aspect in the perception of DR is the way raw converters handle local contrast.

We know for a fact that both MFDBs and high-end DSLRs have too much DR for a pleasing rendering without some tweaking. Images look flat when you attempt to map, say, 11 stops of DR onto an 8-bit display.

So you need to work with local contrast to generate pleasing results.

We know that the highlight/shadow sliders of leading raw converters do play with local contrast, to the extent that they sometimes generate halos similar to what one gets when using Photomatix.

How much of this is automated when you open a P65+ file in C1 is a topic on which I would be interested in getting more information.

Cheers,
Bernard

Bernard,

You can work with local contrast in various parts of the image to get "pleasing contrast" while still portraying the full DR the sensor and LR/Capture/Photoshop can deliver. The technique of developing the local contrast is important. I'm not too worried about what maps on the display, because even displays in the USD 1500 range don't show the full gamut or resolution of my printer (Epson 3800). The display is well colour-managed so it's reliable for overall colour balance, colour rendition within its gamut and luminosity, but that's about it. The bottom line for me is what comes out of the printer.

If you are using Lightroom, the local contrast enhancement of the dark tones is very well managed between the Fill and Blacks sliders. It's one of the most effective uses of that program. And Recovery is also very good for recovering highlights provided at least one channel has data. So this helps a lot with the high-end DSLRs. For raw files using the IIQ format of a Phase P40+ back, in Capture-1 version 5.1 there are shadow and highlight sliders. They do open up shadows and tame highlights. I'm not nearly as experienced using this program as I am in using LR, but so far, this is one toolset of Capture-1 which doesn't turn me on very much. I find the effects are not "local" enough - too broad-brush, unlike what I can get from LR. Nothing is really "automated" in Capture-1 if you don't want it to be. You can load the files into the program with everything set to zero or neutral and create your own "recipes" for various combinations of presets you want to apply to an image, or batch process.
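As an aside, the arithmetic behind Bernard's point about 11 stops on an 8-bit display is easy to sketch: a straight linear encoding halves the number of code values with each darker stop, while a log-like (stop-uniform) tone mapping gives every stop an equal share. A toy illustration, not how any particular raw converter actually works:

```python
def codes_per_stop_linear(stops, bits=8):
    # Linear encoding: each darker stop gets half the codes of the one above it.
    return [2**bits / 2**(s + 1) for s in range(stops)]

def codes_per_stop_log(stops, bits=8):
    # Stop-uniform (log-like) tone mapping: every stop gets an equal share.
    return [2**bits / stops] * stops

print(codes_per_stop_linear(11)[:4])        # brightest four stops: [128.0, 64.0, 32.0, 16.0]
print(codes_per_stop_linear(11)[-1])        # darkest stop: 0.125 codes
print(round(codes_per_stop_log(11)[0], 1))  # ~23.3 codes for every stop
```

With a linear map the deepest shadows get less than one code value, hence the flat, tweaked-tone-curve look; local contrast tools redistribute those codes region by region.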
« Last Edit: March 13, 2010, 05:09:33 pm by Mark D Segal »
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

Ernst Dinkla

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 4005
Big Sensors versus Small Sensors
« Reply #35 on: March 16, 2010, 10:04:03 am »

Quote from: Rory
I see your point, Ray.  I presume the interpolation of the larger files reduces noise and therefore increases DR.


The DxO "Print" condition (8 MP, 300 dpi, 8x12") goes well beyond a healthy amount of noise reduction for the P65+'s 60 MP, while it does wonders for an FF 12-24 MP file. Nevertheless, if the P65+, D3x and D3s are compared in the "Print" condition and on tonal range (which is more linear), the P65+'s graphs show something of the difference it will give in print. Same for color depth. At lower ISO values, of course.

If DxO added an extra "Large Print" condition alongside the existing one, say 15-20 MP, 600 dpi, A2 size, the difference could be more significant.
The "Print" reference is a good idea in itself, and the noise reduction on downsampling is correct, but the limited print size/quality favours FF, APS and 4/3 and is too low for MF.


met vriendelijke groeten, Ernst Dinkla

Try: http://groups.yahoo.com/group/Wide_Inkjet_Printers/


Logged