
Author Topic: really understanding clipping  (Read 22581 times)

bwana

  • Sr. Member
  • ****
  • Offline
  • Posts: 309
really understanding clipping
« on: June 30, 2013, 12:49:25 am »

Sure, clipping is when your capture/display device does not display the full range of luminosity/color present in reality. But isn't that a uniquely human judgement? After all, how does the software know that the white of a cloud is clipped and it really wasn't that white? We know because the cloud in reality has many tonal variations, and when we see a cloud that looks like a white paper cutout, we say the cloud had its whites clipped. But how does software know to put blinkies in the cloud? Or, for that matter, how does a camera know to do that on the LCD or in its electronic viewfinder? If the camera can 'show' that it's not capturing certain tones, then how is it detecting those tones?

Is the algorithm simply looking for consecutive pixels of the exact same tone and assigning the clipping indicator to them? (After all, there is no homogeneity in the real world, right?) I guess I am asking: how does software define clipping?


Understanding this would help me parse many of the discussions of clipping, such as here: http://forums.adobe.com/message/4923617
here: http://forums.adobe.com/message/4569007
and on pages 4-5 here: http://www.luminous-landscape.com/forum/index.php?topic=79635.60, where the terms 'recovery' and 'remapping' are used to describe how the software adjusts tonal values to reduce clipping.

I guess somehow the coders have set a baseline in the software for how reality should look? Otherwise how could the software 'know' to point out the 'bad data' and to do automatic tonal adjusting during raw conversion?

Anyway, who am I to criticize 'under the hood' image manipulation? I would need to do something to fix the clipping myself and probably could not do it as well.
In general PV2012 does speed up the getting of a less objectionable image, but there are certain situations where ACR/LR can trip up. There was another thread here where someone provided a reference to Laplacian transforms and stated that they are used in PV2012. In this paper:
Fast and Robust Pyramid-based Image Processing
MATHIEU AUBRY, SYLVAIN PARIS, SAMUEL W. HASINOFF, JAN KAUTZ, and FRÉDO DURAND
test images show where the algorithms can trip up.
Logged

Schewe

  • Sr. Member
  • ****
  • Offline
  • Posts: 6229
    • http://www.schewephoto.com
Re: really understanding clipping
« Reply #1 on: June 30, 2013, 01:44:40 am »

Huh?

So, what is your question? Your post is, uh, a bit disjointed... not at all sure what you are trying to ask.
Logged

32BT

  • Sr. Member
  • ****
  • Offline
  • Posts: 3095
    • Pictures
Re: really understanding clipping
« Reply #2 on: June 30, 2013, 04:45:32 am »

Clipping generally means this: at least one of the channels in R, G, or B has reached its maximum value. Obviously, software doesn't "know" whether that represents a clipped value, but in practice that is always the case, especially, as you mention, for larger uniform patches of maximum value. (And additionally, in RAW processing the clipping point isn't a hard maximum.)

Because clipping usually occurs in just 1 or 2 channels, you'll have the remaining channels and the surrounding pixels available for reconstruction.

So you can, for example, blur the image, which gives you the local average color, and then use the non-clipped channels to reconstruct some luminance.

Now, depending on the size of the clipped patch (the entire cloud, or just some specular reflections), you need more or less blurring to determine the local color. 'Laplacian transforms' is another way of saying that you have several sizes of blurring available, from which you can select the appropriately sized blur for reconstruction.
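
For illustration, here is a minimal single-scale sketch of that idea in Python (my own toy code with made-up parameters, not what any particular raw converter does). A real implementation would run it at several scales, which is where the Laplacian pyramid comes in:

Code:
import numpy as np
from scipy.ndimage import gaussian_filter

def reconstruct_highlights(rgb, clip=1.0, sigma=8.0):
    """Toy highlight reconstruction: where a channel is clipped,
    re-estimate it from an unclipped channel scaled by the local
    (blurred) color ratio."""
    rgb = rgb.astype(np.float64)
    clipped = rgb >= clip
    # local average color: one blur per channel
    local = np.stack([gaussian_filter(rgb[..., c], sigma) for c in range(3)], -1)
    out = rgb.copy()
    for c in range(3):
        for k in range(3):
            if k == c:
                continue
            # estimate channel c from channel k via the local color ratio
            ratio = local[..., c] / np.maximum(local[..., k], 1e-9)
            est = rgb[..., k] * ratio
            use = clipped[..., c] & ~clipped[..., k] & (est > out[..., c])
            out[..., c] = np.where(use, est, out[..., c])
    return out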

Is that what you were asking?
Logged
Regards,
~ O ~
If you can stomach it: pictures

eliedinur

  • Sr. Member
  • ****
  • Offline
  • Posts: 328
Re: really understanding clipping
« Reply #3 on: June 30, 2013, 05:41:33 am »

Quote
Sure, clipping is when your capture/display device does not display the full range of luminosity/color present in reality. But isn't that a uniquely human judgement?
No, clipping is a physical phenomenon. It causes a perceived effect in the image, the loss of detail, but that is a result of the clipping, not the clipping itself.
The sensor in a digital camera is made up of millions of discrete photo-sensitive sites, called sensels, that absorb photons and output excited electrons. The relation is linear: if twice the number of photons is received (i.e. light twice as intense), twice the number of electrons is excited (i.e. the output voltage is doubled). However, you can't go on raising the amount of incoming light ad infinitum; there is a top limit, a saturation point, above which even if the light is stronger the output will not be any different: the output voltage will not be greater. Let's call the voltage produced at saturation V-max, and the amount of light needed to saturate the sensel X. Even if the input is 2X, the output is still V-max. Thus it is when the sensel reaches saturation that clipping occurs. V-max is different from all the other possible V values in that it does not correspond to a discrete light intensity; it can be produced by any intensity of light at or above the saturation point.

Further along the processing pipeline, the camera's analog-to-digital converter (ADC) translates all the fine variations in voltage into the numbers that make up a computer file, each number representing a discrete shade of luminosity. The number of tones that the ADC can write is determined by the bit depth to which it writes Raw data; in most modern DSLRs it is 14 bits, which is 16,384 tones. Note that the number of tones that can be portrayed is also finite: there is a maximum. V-max is represented by the highest number the ADC writes, and this number is interpreted by the computer to mean pure white. Detail in a photo comes from variations in tone, but where the image is pure white there is no variation and no detail. Further on, when the Raw data is used to construct a JPEG image (which is written to only 8 bits and therefore can portray only 256 tones), that highest number becomes 255, which is white in 8 bits.
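
As a toy model of that chain (illustrative numbers, not any particular camera):

Code:
import numpy as np

V_MAX = 1.0  # saturation voltage: any stronger light still yields V_MAX

def sensel_response(light):
    """Linear in light up to saturation, flat beyond it."""
    return np.minimum(light, V_MAX)  # clipping happens here

def adc_14bit(voltage):
    """Quantize 0..V_MAX into 16,384 discrete codes."""
    return np.round(voltage / V_MAX * 16383).astype(int)

print(adc_14bit(sensel_response(np.array([0.5, 1.0, 2.0]))))
# [ 8192 16383 16383] -- inputs X and 2X land on the same top code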

Quote
How does the software know that the white of a cloud is clipped and it really wasn't that white? ... How does software know to put blinkies in the cloud? Or, for that matter, how does a camera know to do that on the LCD or in its electronic viewfinder?
That is easy. The software or camera firmware simply scans through all the numbers that make up the image and puts a blinky wherever it finds 255. Or, when it plots a histogram, because the horizontal scale of the histogram represents tones from 0 to 255, all the pixels that are at 255 show up stacked against the right margin.
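
In code, that scan is essentially a one-liner (assuming an 8-bit RGB array; real firmware may well use a threshold slightly below 255):

Code:
import numpy as np

def blinkies(img):
    """True wherever any channel sits at the top code value --
    these are the pixels the camera blinks at you."""
    return np.any(img >= 255, axis=-1)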
Logged
Roll over Ed Weston,
Tell Ansel Adams th

Jack Hogan

  • Sr. Member
  • ****
  • Offline
  • Posts: 798
    • Hikes -more than strolls- with my dog
Re: really understanding clipping
« Reply #4 on: June 30, 2013, 06:09:52 am »

Sure, clipping is when your capture/display device does not display the full range of luminosity/color present in reality. But isn't that a uniquely human judgement? After all, how does the software know that the white of a cloud is clipped and it really wasn't that white?

Just a quick, simplified intro.  Every capture and display device can only work on a limited range of tones, limited by its physical characteristics: what we call its dynamic range.  Any scene information brighter than the upper end of the given dynamic range cannot be recorded and/or displayed.  We say that the relative highlights are blown, or clipped.

Clipped/blown tones are typically recorded at the upper end of a device's data range (e.g. 255 at 8 bits).  Most clipping indicators would show these values as 'clipped'.  A value of 255 would typically be displayed as white, say, by a monitor displaying a monochrome capture.  Attempts at recovery in post will only succeed in reducing this value, bringing it closer to gray, say 240, and most clipping indicators would show this value as 'not clipped'.  But nothing has changed in terms of information available above 240, so what will be displayed is a darker shade of 'clipped'.

When color and standard color spaces are introduced, things get a little more complicated.  We now no longer have a single value for each pixel, but we have three, and in order for an image to be rendered many more linear and non-linear transformations are applied to each of the three channels.  The result is that it is more likely that image information that was not blown during the capture process in the Raw data (say at value 240 at 8 bits), ends up clipped once rendered (at value 255). Most software will show this as clipped. But in this case, since the original information is present in the Raw data, re-massaging the transformation by recovering the highlights of the rendered image will indeed result in 'recovered' image highlight detail: it was there all along but the processing had pushed it out of bounds.
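
A numeric illustration of that paragraph, with made-up values in 8-bit terms:

Code:
import numpy as np

raw = np.array([236, 238, 240])  # distinct highlight detail in the raw data
gain = 1.10                      # stand-in for the rendering transforms
rendered = np.clip(raw * gain, 0, 255).astype(int)           # [255 255 255]: shown as clipped
recovered = np.clip(raw * gain * 0.90, 0, 255).astype(int)   # [233 235 237]: detail is back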

Since most clipping indicators/histograms display data relative to the rendered image only, the hapless photographer is left to guess whether the original information was blown in the Raw data irrecoverably, or whether it wasn't and therefore it is recoverable.  A typical example is a reddish flower in a green garden.  A typical colorimetric rendering will often show the red channel clipping with the green comfortably not, while in fact in the Raw data the red channel is recorded at lower values than the green, with full detail available for both.

Cheers,
Jack
« Last Edit: June 30, 2013, 06:38:43 am by Jack Hogan »
Logged

bwana

  • Sr. Member
  • ****
  • Offline
  • Posts: 309
Re: really understanding clipping
« Reply #5 on: June 30, 2013, 09:30:13 am »

Clipping generally means this: at least one of the channels in R, G, or B has reached its maximum value. Obviously, software doesn't "know" whether that represents a clipped value, but in practice that is always the case, especially, as you mention, for larger uniform patches of maximum value. (And additionally, in RAW processing the clipping point isn't a hard maximum.)

Because clipping usually occurs in just 1 or 2 channels, you'll have the remaining channels and the surrounding pixels available for reconstruction.

So you can, for example, blur the image, which gives you the local average color, and then use the non-clipped channels to reconstruct some luminance.

Now, depending on the size of the clipped patch (the entire cloud, or just some specular reflections), you need more or less blurring to determine the local color. 'Laplacian transforms' is another way of saying that you have several sizes of blurring available, from which you can select the appropriately sized blur for reconstruction.

Is that what you were asking?

YES. Thank you.

When color and standard color spaces are introduced, things get a little more complicated.  We now no longer have a single value for each pixel, but we have three, and in order for an image to be rendered many more linear and non-linear transformations are applied to each of the three channels.  The result is that it is more likely that image information that was not blown during the capture process in the Raw data (say at value 240 at 8 bits), ends up clipped once rendered (at value 255). Most software will show this as clipped. But in this case, since the original information is present in the Raw data, re-massaging the transformation by recovering the highlights of the rendered image will indeed result in 'recovered' image highlight detail: it was there all along but the processing had pushed it out of bounds.


Yes! Thank you. So Bayer reconstruction clips pixel values that may have one or two components that are not clipped. This is when the little clipping triangle takes on a color, to show that only one or two particular channels are clipping.

But sometimes the clipping triangle is white and the histogram appears chopped off at 255. I interpret this case to mean that all three channels are clipped in the raw; you suspect there is more information that was lost (blown highlights). But miraculously, you can drag the slider (Exposure or Whites) to the left and get back 'information that was lost'. How is it possible to 'recover' highlights from the raw file if all three channels are clipped? More information seems to magically appear at the right side of the histogram. Does Bayer reconstruction (and therefore the displayed histogram) really throw out information?

ASIDE:
Does this also happen with a monochrome sensor (Leica M), which is not bound by Bayer interpolation? Also, in that (Leica M) case, there is no interpolation to drag pixel values into clipping.
« Last Edit: June 30, 2013, 09:32:05 am by bwana »
Logged

eliedinur

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 328
Re: really understanding clipping
« Reply #6 on: June 30, 2013, 09:58:45 am »

A far more significant cause of "false" clipping is the application of white balance during the processing from Raw to an RGB image. This is done by multiplying all the values in the red and blue channels, and these increases are often pretty large. A typical daylight WB will more than double red values while increasing the blue channel by around 1.4x. Similarly, a tungsten-light WB will double the blue values. Thus it can easily happen that, although these channels are not clipped in the Raw capture, the WB causes apparent clipping that can be removed by reducing luminosity globally, i.e. reducing "exposure" in the Raw converter, or by applying a curve to roll off the highlights.
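
A quick sketch with multipliers like those mentioned above (values illustrative; real multipliers vary by camera and illuminant):

Code:
import numpy as np

raw = np.array([0.55, 0.90, 0.60])       # linear R, G, B; 1.0 = raw clipping; nothing clipped
daylight_wb = np.array([2.1, 1.0, 1.4])  # red more than doubled, blue raised ~1.4x
balanced = raw * daylight_wb             # red -> 1.155: apparent clipping created by WB alone
rescued = balanced * 0.85                # global 'exposure' pull: red -> ~0.98, warning gone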
« Last Edit: June 30, 2013, 10:02:34 am by elied »
Logged
Roll over Ed Weston,
Tell Ansel Adams th

hjulenissen

  • Sr. Member
  • ****
  • Offline
  • Posts: 2051
Re: really understanding clipping
« Reply #7 on: June 30, 2013, 10:23:07 am »

Any raw value > some threshold is unreliable. It may be clipped, or it may just happen to be right. The only way to be certain is to decrease the exposure slightly, and see if all values now are below that threshold.

I think that clipping single sensels is usually non-problematic, and it allows for ETTROR (exposure to the right of right) (tm), allowing less noise in the shadows. What you want to have is probably <N% clipped sensels, or <M sensels that are continuously clipped.
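
That rule of thumb is easy to state in code (the fraction and threshold are the photographer's choice, not standard values):

Code:
import numpy as np

def exposure_acceptable(raw, sat_threshold, max_clipped_frac=0.001):
    """ETTR sanity check: flag the frame if more than a chosen
    fraction of sensels sit at or above the saturation threshold."""
    return np.mean(raw >= sat_threshold) <= max_clipped_frac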
Yes! Thank you. So Bayer reconstruction clips pixel values that may have one or two components that are not clipped. This is when the little clipping triangle takes on a color, to show that only one or two particular channels are clipping.

But sometimes the clipping triangle is white and the histogram appears chopped off at 255. I interpret this case to mean that all three channels are clipped in the raw; you suspect there is more information that was lost (blown highlights). But miraculously, you can drag the slider (Exposure or Whites) to the left and get back 'information that was lost'. How is it possible to 'recover' highlights from the raw file if all three channels are clipped? More information seems to magically appear at the right side of the histogram. Does Bayer reconstruction (and therefore the displayed histogram) really throw out information?

ASIDE:
Does this also happen with a monochrome sensor (Leica M), which is not bound by Bayer interpolation? Also, in that (Leica M) case, there is no interpolation to drag pixel values into clipping.
I think that Bayer reconstruction/CFA is the wrong place to look for the most significant contributors.

"Color correction" and white balance can be described as: form each output-channel pixel as a positive/negative weighted sum of the corresponding input-channel pixels. Black-point and white-point setting (clipping) is needed before a gamma is applied (a shift of the midtones). All of this would be done in a Bayer-less Foveon camera too (and then some non-linear color processing). A monochrome camera would not have the color stuff, but black point/white point/gamma are still relevant. Modern cameras might do fancy tone mapping (HDR) in order to make pleasing JPEGs, throwing more complex nonlinearity into the mix.

The point is: the JPEG histogram cannot be trusted if you want to know whether the raw sensor channels are clipped. It is far too complex and proprietary to give anything but vague correlates of what we really want to know.

-h
« Last Edit: June 30, 2013, 10:28:55 am by hjulenissen »
Logged

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
Re: really understanding clipping
« Reply #8 on: June 30, 2013, 12:00:52 pm »

Any raw value > some threshold is unreliable. It may be clipped, or it may just happen to be right. The only way to be certain is to decrease the exposure slightly, and see if all values now are below that threshold.

I think that clipping single sensels is usually non-problematic, and it allows for ETTROR (exposure to the right of right) (tm), allowing less noise in the shadows. What you want to have is probably <N% clipped sensels, or <M sensels that are continuously clipped. I think that Bayer reconstruction/CFA is the wrong place to look for the most significant contributors.

"Color correction" and white balance can be described as: form each output-channel pixel as a positive/negative weighted sum of the corresponding input-channel pixels. Black-point and white-point setting (clipping) is needed before a gamma is applied (a shift of the midtones). All of this would be done in a Bayer-less Foveon camera too (and then some non-linear color processing). A monochrome camera would not have the color stuff, but black point/white point/gamma are still relevant. Modern cameras might do fancy tone mapping (HDR) in order to make pleasing JPEGs, throwing more complex nonlinearity into the mix.

The point is: the JPEG histogram cannot be trusted if you want to know whether the raw sensor channels are clipped. It is far too complex and proprietary to give anything but vague correlates of what we really want to know.

I think this response is unnecessarily complicated and pessimistic. The more sophisticated cameras have two types of histograms: luminance and individual RGB channels (for technical details see the Cambridge in Color tutorial). The luminance histogram keeps track of each pixel location and is weighted according to the sensitivity of human vision to each color, with green overrepresented and blue much underrepresented. If we merely want to know which channels are clipped, the RGB channel histograms show the distribution of pixel values for each separate channel and are what we should be looking at to detect channel clipping. Unfortunately, these RGB values are represented after white balance and may show clipping in the red or blue channels after white balance (the blue and red WB multipliers are greater than 1) when no clipping is present in the raw channel prior to WB. One may avoid this complication by loading UniWB values for white balance into the camera; UniWB gets its name from the fact that the WB multipliers are all 1.0.

One may still have saturation clipping since Adobe RGB is the widest color space that most cameras offer and the camera sensor sensitivities are beyond what can be encoded with aRGB. Thus one may have saturation clipping in the histogram when the raw channel is not actually clipped. Such clipping is frequently observed when one is photographing highly saturated flowers.

Gamma encoding affects the midtones, but does not affect 0 or 255 pixel values, and these are what we need to detect clipping. Unfortunately, most cameras give a somewhat conservative histogram and may show clipping when the raw file values are short of clipping. One may mitigate this false clipping by using low contrast settings in the camera picture control.

If one knows his/her camera and uses these precautions, the RGB histograms do give a reasonable indication of the status of the raw channel values. Raw histograms would be much preferable, but the knowledgeable photographer can work around some of the limitations of current histograms.
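
For what it's worth, the two histogram types can be sketched like this (assuming an 8-bit RGB image and Rec. 601 luma weights; actual in-camera weightings are proprietary and may differ):

Code:
import numpy as np

def channel_histograms(img):
    """One 256-bin histogram per R, G, B channel."""
    return [np.bincount(img[..., c].ravel(), minlength=256) for c in range(3)]

def luminance_histogram(img):
    """Vision-weighted: green counts most, blue least."""
    y = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    return np.bincount(np.clip(np.round(y), 0, 255).astype(int).ravel(), minlength=256)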

Bill 

Logged

digitaldog

  • Sr. Member
  • ****
  • Offline
  • Posts: 20614
  • Andrew Rodney
    • http://www.digitaldog.net/
Re: really understanding clipping
« Reply #9 on: June 30, 2013, 12:02:51 pm »

Remember the movie Spinal Tap, where the character wants to turn the volume control, which goes from 0 to 10, up to 11? 11 represents clipping in the real world; there's nothing there <g>.
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

jrsforums

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1288
Re: really understanding clipping
« Reply #10 on: June 30, 2013, 12:12:14 pm »

I think this response is unnecessarily complicated and pessimistic. The more sophisticated cameras have two types of histograms: luminance and individual RGB channels (for technical details see the Cambridge in Color tutorial). The luminance histogram keeps track of each pixel location and is weighted according to the sensitivity of human vision to each color, with green overrepresented and blue much underrepresented. If we merely want to know which channels are clipped, the RGB channel histograms show the distribution of pixel values for each separate channel and are what we should be looking at to detect channel clipping. Unfortunately, these RGB values are represented after white balance and may show clipping in the red or blue channels after white balance (the blue and red WB multipliers are greater than 1) when no clipping is present in the raw channel prior to WB. One may avoid this complication by loading UniWB values for white balance into the camera; UniWB gets its name from the fact that the WB multipliers are all 1.0.

One may still have saturation clipping since Adobe RGB is the widest color space that most cameras offer and the camera sensor sensitivities are beyond what can be encoded with aRGB. Thus one may have saturation clipping in the histogram when the raw channel is not actually clipped. Such clipping is frequently observed when one is photographing highly saturated flowers.

Gamma encoding affects the midtones, but does not affect 0 or 255 pixel values, and these are what we need to detect clipping. Unfortunately, most cameras give a somewhat conservative histogram and may show clipping when the raw file values are short of clipping. One may mitigate this false clipping by using low contrast settings in the camera picture control.

If one knows his/her camera and uses these precautions, the RGB histograms do give a reasonable indication of the status of the raw channel values. Raw histograms would be much preferable, but the knowledgeable photographer can work around some of the limitations of current histograms.

Bill 



Well said, Bill.  Complete without diving deep into numbers.
Logged
John

Guillermo Luijk

  • Sr. Member
  • ****
  • Offline
  • Posts: 2005
    • http://www.guillermoluijk.com
Re: really understanding clipping
« Reply #11 on: June 30, 2013, 06:42:25 pm »

Is the algorithm simply looking for consecutive pixels of the exact same tone and assigning the clipping indicator to them? (After all, there is no homogeneity in the real world, right?) I guess I am asking: how does software define clipping?

Very simple algorithm: all pixels reaching the saturation value in the encoding scale (or over a given threshold) are considered clipped. This will be accurate for 99%* of occurrences, which is more than good enough not to be worth any extra effort. I'm pretty sure camera and RAW-developer clipping warnings work like this.

* A non-clipped pixel may reach exactly 255, but such pixels are statistically negligible compared to actually clipped zones (i.e. pixels that would need a >255 value to be correctly encoded).
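
In code, with sat_value being the camera-specific saturation level discussed below:

Code:
import numpy as np

def clipping_warning(values, sat_value):
    """Flag every pixel at or above the saturation level; the rare
    legitimately-at-maximum pixel is an accepted false positive."""
    return values >= sat_value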

~~~

Finding clipped pixels is even more important in the RAW world, since the saturation value is needed to perform the RAW development correctly, ensuring neutral white clipped highlights after white balancing. This task is very easy, since every camera has a defined RAW saturation value.

In some cases it changes depending on the ISO setting, but the point is that a certain saturation value always exists.

My Canon 350D saturates at the end of its 12-bit scale, i.e. at 4095:


While the capricious Canon 5D saturates at 3692:


Other cameras (I have seen this with Panasonic/Olympus and Fuji sensors) make it a bit more difficult to find RAW-clipped pixels, because saturation spreads over a range of values following a Gaussian distribution, but it is easy to choose a saturation threshold even in these cases.

For the Olympus E-P1, 3584 could be a valid choice for RAW clipping:



« Last Edit: June 30, 2013, 07:23:20 pm by Guillermo Luijk »
Logged

RFPhotography

  • Guest
Re: really understanding clipping
« Reply #12 on: June 30, 2013, 07:22:50 pm »

It's interesting that everyone has concentrated on clipping at the upper end of the range. 
Logged

Guillermo Luijk

  • Sr. Member
  • ****
  • Offline
  • Posts: 2005
    • http://www.guillermoluijk.com
Re: really understanding clipping
« Reply #13 on: June 30, 2013, 07:27:10 pm »

It's interesting that everyone has concentrated on clipping at the upper end of the range.
Clipping at the lower end doesn't exist because of the presence of noise, which follows a Gaussian distribution. Shadow clipping is always created by the software (the RAW development stage, colour profile conversion, JPEG generation with some deliberate shadow clipping, ...).

In fact, if you look at my RAW histograms, you'll see that Canon applies a bias to all RAW values so that none of them reaches 0. It's actually the RAW developer that chooses which RAW level is considered 0. Other brands like Nikon subtract that RAW offset, cutting the Gaussian distribution of read noise in half.
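
A small simulation of the difference (black level and noise width are made up):

Code:
import numpy as np

rng = np.random.default_rng(0)
black_level = 256                                       # hypothetical raw bias
dark = np.round(rng.normal(black_level, 8, 1_000_000))  # darkframe read noise, Gaussian
canon_style = dark                                      # bias kept: full Gaussian, no zeros
nikon_style = np.maximum(dark - black_level, 0)         # offset subtracted: half the
                                                        # distribution piles up at 0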

This is a RAW histogram of a shot taken in the absence of light (a darkframe) on a Canon 350D:

« Last Edit: June 30, 2013, 07:30:07 pm by Guillermo Luijk »
Logged

bwana

  • Sr. Member
  • ****
  • Offline
  • Posts: 309
Re: really understanding clipping
« Reply #14 on: June 30, 2013, 09:25:19 pm »

I read this:
http://www.luminous-landscape.com/tutorials/understanding-series/u-raw-files.shtml

A 12-bit Raw file:
   1st f-stop (brightest tones):  2048 levels available
   2nd f-stop (bright tones):     1024 levels available
   3rd f-stop (mid-tones):         512 levels available
   4th f-stop (dark tones):        256 levels available
   5th f-stop (darkest tones):     128 levels available

An 8-bit JPEG file:
   1st f-stop (brightest tones):    69 levels available
   2nd f-stop (bright tones):       50 levels available
   3rd f-stop (mid-tones):          37 levels available
   4th f-stop (dark tones):         27 levels available
   5th f-stop (darkest tones):      20 levels available
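
As a sanity check on those numbers: raw is linear, so each stop down halves the available codes, while the JPEG figures match a ~2.2 gamma encoding of an 8-bit scale:

Code:
# Reproducing the table: linear 12-bit raw vs. gamma-2.2 8-bit JPEG.
GAMMA = 2.2
for stop in range(1, 6):
    raw_levels = 4096 // 2 ** stop                # 2048, 1024, 512, 256, 128
    hi, lo = 0.5 ** (stop - 1), 0.5 ** stop       # linear bounds of this stop
    jpg_levels = round(255 * hi ** (1 / GAMMA)) - round(255 * lo ** (1 / GAMMA))
    print(stop, raw_levels, jpg_levels)           # JPEG: 69, 50, 37, 27, ~20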

My interpretation is that the brightest 2048 raw tones are mapped to the brightest 69 jpg tones: 2048/69 ≈ 30.
If the exposure slider moves to -1, I assume that means -1 EV, so the top two jpg stops (69+50 = 119 tones) now contain the top 2048 raw tones?
Is the brightest jpg tone an average of the brightest 30 raw tones at EV 0? And at EV -1, is it the average of the brightest 15 raw tones?
When you move the exposure slider to the left, what is the raw converter doing to generate more jpg tones?
Is it spreading those top 30 raw tones across more than just the brightest jpg tone, and does it do so logarithmically, linearly, or in some other way? On top of this, the tone curve can be manipulated in ACR, which means yet another transformation is involved.

In case anyone claims that I should do the work and figure this out for myself: I have tried.
Attached is a jpg file, its histogram from GIMP (log), its histogram from CS6, and the raw histogram from RawDigger.
Why are the histograms of GIMP and CS6 so different?
Does the raw histogram indicate that the raw is a 14-bit file, because it has ~16,000 levels?
Logged

bwana

  • Sr. Member
  • ****
  • Offline
  • Posts: 309
Re: really understanding clipping
« Reply #15 on: June 30, 2013, 09:27:01 pm »

oops, I attached the wrong PS histogram. here is the luminosity one I wanted to attach:
Logged

Guillermo Luijk

  • Sr. Member
  • ****
  • Offline
  • Posts: 2005
    • http://www.guillermoluijk.com
Re: really understanding clipping
« Reply #16 on: June 30, 2013, 09:40:27 pm »

Why are the histograms of GIMP and CS6 so different?

Because of this:

Attached is a jpg file, its histogram from GIMP (log), its histogram from CS6 (...)

This is the real histogram of that JPEG file (not truncated):



and here with the Y-axis truncated to make it easier to see:

RFPhotography

  • Guest
Re: really understanding clipping
« Reply #17 on: June 30, 2013, 10:30:03 pm »

Clipping at the lower end doesn't exist because of the presence of noise, which follows a Gaussian distribution. Shadow clipping is always created by the software (the RAW development stage, colour profile conversion, JPEG generation with some deliberate shadow clipping, ...).

In fact, if you look at my RAW histograms, you'll see that Canon applies a bias to all RAW values so that none of them reaches 0. It's actually the RAW developer that chooses which RAW level is considered 0. Other brands like Nikon subtract that RAW offset, cutting the Gaussian distribution of read noise in half.

This is a RAW histogram of a shot taken in the absence of light (a darkframe) on a Canon 350D:



I don't really know that that's a valid test. I would expect a darkframe to show an absence of 0-value pixels. What happens on the sensor can't be taken in isolation, though, because, except when shooting JPEG, we can't use what comes off the sensor without conversion. The entire chain has to be taken as a whole.
Logged

Guillermo Luijk

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 2005
    • http://www.guillermoluijk.com
Re: really understanding clipping
« Reply #18 on: June 30, 2013, 10:37:36 pm »

I would expect a darkframe to show an absence of 0-value pixels.

In the Canon files there are no 0 values because of the positive bias. In Nikon files the read noise is clipped at its mean value, producing 0 values (see patch 22):



Emil explains it in Fig. 11.

Guillermo Luijk

  • Sr. Member
  • ****
  • Offline
  • Posts: 2005
    • http://www.guillermoluijk.com
Re: really understanding clipping
« Reply #19 on: June 30, 2013, 10:43:41 pm »

except when shooting JPEG, we can't use what comes off the sensor without conversion.

Not true: this is a RAW channel of a RAW file, no conversion at all:



Taking the individual RAW channels is like having a monochrome sensor: no demosaicing, no colour profiling.
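
For the curious, pulling a channel out of the mosaic really is just sub-sampling (assuming an RGGB pattern; other layouts just shift the offsets):

Code:
import numpy as np

def bayer_channels(raw):
    """Split an RGGB mosaic into its four raw sub-channels:
    no demosaicing, no profiling -- four monochrome images."""
    return {"R": raw[0::2, 0::2], "G1": raw[0::2, 1::2],
            "G2": raw[1::2, 0::2], "B": raw[1::2, 1::2]}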