Luminous Landscape Forum

Raw & Post Processing, Printing => Digital Image Processing => Topic started by: bwana on June 30, 2013, 12:49:25 am

Title: really understanding clipping
Post by: bwana on June 30, 2013, 12:49:25 am
sure, clipping is when your capture/display device does not display the full range of luminosity/color present in reality. but isn't that a uniquely human judgement? after all, how does the software know that the white of a cloud is clipped and it really wasn't that white? we know because the cloud in reality has many tonal variations and when we see a cloud that looks like a white paper cutout, we say the cloud had its whites clipped. but how does software know to put blinkies in the cloud?  or for that matter, how does a camera know to do that in the lcd or its electronic viewfinder? if the camera can 'show' that it's not capturing certain tones, then how is it detecting those tones?

is the algorithm simply looking for consecutive pixels of the exact same tone and assigning the clipping indicator to it? (after all there is no homogeneity in the real world, right?) i guess i am asking how does software define clipping?


understanding this would help me parse many of the discussions about clipping, such as here: http://forums.adobe.com/message/4923617
here: http://forums.adobe.com/message/4569007
and the 4th-5th pages here: http://www.luminous-landscape.com/forum/index.php?topic=79635.60 where the terms 'recovery' and 'remapping' are used to describe how the software is adjusting tonal values to reduce clipping.

i guess somehow the coders have set a baseline for reality in the software as to how things should look? otherwise how could the software 'know' to point out the 'bad data' and to do automatic tonal adjusting during raw conversion?

anyway, who am i to criticize 'under the hood' image manipulation - i would need to do something to fix the clipping and probably could not do it as well.
in general PV2012 does speed up getting a less objectionable image. but there are certain situations where acr/lr can trip up. there was another thread here where someone provided a reference to laplacian transforms and stated that they are used in pv2012. in this paper
Fast and Robust Pyramid-based Image Processing
MATHIEU AUBRY, SYLVAIN PARIS, SAMUEL W. HASINOFF, JAN KAUTZ, and FRÉDO DURAND
test images show where the algorithms can trip up-
Title: Re: really understanding clipping
Post by: Schewe on June 30, 2013, 01:44:40 am
Huh?

So, what is your question? Your post is, uh, a bit disjointed... not at all sure what you are trying to ask.
Title: Re: really understanding clipping
Post by: 32BT on June 30, 2013, 04:45:32 am
Clipping generally means this: that at least one of the channels in R, G, or B has reached its maximum value. Obviously, software doesn't "know" whether that represents a clipped value but in practice that is always the case, especially, like you mention, for larger uniform patches of maximum value. (And additionally in RAW processing the clipping point isn't a hard maximum).

Because clipping usually occurs in just 1 or 2 channels, you'll have the remaining channels and the surrounding pixels available for reconstruction.

So, you can for example blur the image, which gives you the local average color, and then use the non-clipping channels to reconstruct some luminance. 

Now, depending on the size of the clipped patch (the entire cloud, or just some specular reflections), you would need more or less blurring for determining the local color. Laplacian Transforms is another way of saying that you have several sizes of blurring available from which you can select the appropriate sized blur for reconstruction.
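To make the mechanics a bit more concrete, here is a toy numpy/scipy sketch of that idea (my own illustration - the function and its parameters are made up, and this is not how ACR/LR actually implements it): blur to get the local average colour, then rebuild a clipped channel from the channels that survived.

Code:
import numpy as np
from scipy.ndimage import gaussian_filter

def reconstruct_clipped(img, clip=1.0, sigma=8.0):
    """img: float RGB array scaled 0..1; values >= clip are treated as clipped."""
    out = img.copy()
    blurred = gaussian_filter(img, sigma=(sigma, sigma, 0))   # local average colour
    clipped = img >= clip                                     # per-channel clip mask
    for c in range(3):
        others = [k for k in range(3) if k != c]
        # Rebuild channel c only where it clips but at least one other channel survives.
        usable = clipped[..., c] & ~clipped[..., others].all(axis=-1)
        ratio = blurred[..., c] / np.maximum(blurred[..., others].mean(axis=-1), 1e-6)
        estimate = img[..., others].mean(axis=-1) * ratio
        out[..., c][usable] = np.maximum(img[..., c][usable], estimate[usable])
    return out

A larger sigma corresponds to the coarser levels of the pyramid: big clipped patches need the big blur, small specular spots only the fine one.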

Is that what you were asking?
Title: Re: really understanding clipping
Post by: eliedinur on June 30, 2013, 05:41:33 am
Quote
sure, clipping is when your capture/display device does not display the full range of luminosity/color present in reality. but isn't that a uniquely human judgement?
No, clipping is a physical phenomenon. It causes a perceived effect in the image, the loss of detail, but that is a result of the clipping, not the clipping itself.
The sensor in a digital camera is made up of millions of discrete photo-sensitive sites, called sensels, that absorb photons and output excited electrons. The relation is linear: if twice the number of photons (i.e. light twice as intense) is received, twice the number of electrons is excited (i.e. the output voltage is doubled). However, you can't go on raising the amount of incoming light ad infinitum; there is a top limit, a saturation point above which, even if the light is stronger, the output will not be any different - the output voltage will not be greater. Let's call the voltage produced at saturation V-max and the amount of light needed to saturate the sensel X. Even if the input is 2X, the output is still V-max. Thus it is when the sensel reaches saturation that clipping occurs. V-max is different from all the other possible V values in that it does not correspond to a discrete light intensity; it can be produced by any intensity of light that is at or above the saturation point.

Further in the processing pipeline, the camera's analog to digital converter translates all the fine variations in voltage into the numbers that make up a computer file, each number representing a discrete shade of luminosity. The number of tones that the ADC can write is determined by the bit depth to which it writes Raw data - in most modern DSLRs it is 14 bits, which is 16,384 tones. Note that the number of tones that can be portrayed is also finite; there is a maximum. V-max is represented by the highest number the ADC writes, and this number is interpreted by the computer to mean pure white. Detail in a photo is caused by variations in tone, but where the image is pure white there is no variation and no detail. Further on, when the Raw data is used to construct a jpg image - which is only written to 8 bits and therefore can portray only 256 tones - that highest number becomes translated to 255, which is white in 8 bits.
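A toy Python sketch of that chain, with purely illustrative names and numbers (no particular camera): a linear response that pins at V-max, a 14-bit ADC, then a crude scale-down to 8 bits.

Code:
import numpy as np

def capture(light, x_sat=1.0, bits=14):
    """Linear sensel response clipped at saturation, then quantised by the ADC."""
    volts = np.minimum(np.asarray(light, dtype=float) / x_sat, 1.0)  # V-max for any light >= X
    levels = (1 << bits) - 1                                         # 16383 = top of a 14-bit scale
    return np.round(volts * levels).astype(int)

light = np.array([0.25, 0.5, 1.0, 2.0, 5.0])   # the last two exceed saturation
raw14 = capture(light)                         # -> [4096, 8192, 16383, 16383, 16383]
jpeg8 = raw14 * 255 // 16383                   # both clipped inputs end up at 255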

Quote
how does the software know that the white of a cloud is clipped and it really wasn't that white? ... how does software know to put blinkies in the cloud?  or for that matter, how does a camera know to do that in the lcd or its electronic viewfinder?
That is easy. The software or camera firmware simply scans through all the numbers that make up the image and where it finds 255 it puts the blinky. Or when it plots a histogram, because the horizontal scale of the histogram represents tones from 0 to 255, all the pixels that are at 255 are indicated as being up against the right margin.
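The scan really is that simple; a minimal sketch of the idea (a hypothetical helper, not any camera's actual firmware):

Code:
import numpy as np

def clipping_warning(img8):
    """img8: uint8 RGB image. Returns a 'blinkies' mask, a histogram, and a percentage."""
    blinkies = (img8 == 255).any(axis=-1)                  # any channel at the top of the scale
    hist = np.bincount(img8.reshape(-1), minlength=256)    # pixels at 255 pile up at the right edge
    pct_clipped = 100.0 * blinkies.mean()
    return blinkies, hist, pct_clipped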
Title: Re: really understanding clipping
Post by: Jack Hogan on June 30, 2013, 06:09:52 am
sure, clipping is when your capture/display device does not display the full range of luminosity/color present in reality. but isn't that a uniquely human judgement? after all, how does the software know that the white of a cloud is clipped and it really wasn't that white?

Just a quick simplified intro. Every capture and display device can only work on a limited range of tones, limited by its physical characteristics, what we call its dynamic range. Any scene information brighter than the upper end of the given dynamic range cannot be recorded and/or displayed. We say that the relative highlights are blown or clipped.

Clipped/blown tones are typically recorded at the upper end of a device's data range (e.g. 255 at 8 bits).  Most clipping indicators would show these values as 'clipped'.  A value of 255 would typically be displayed as white, say, by a monitor displaying a monochrome capture.  Attempts at recovery in post will only succeed in reducing this value, bringing it closer to gray, say 240, and most clipping indicators would show this value as 'not clipped'.  But nothing has changed in terms of information available above 240, so what will be displayed is a darker shade of 'clipped'.

When color and standard color spaces are introduced, things get a little more complicated.  We now no longer have a single value for each pixel, but we have three, and in order for an image to be rendered many more linear and non-linear transformations are applied to each of the three channels.  The result is that it is more likely that image information that was not blown during the capture process in the Raw data (say at value 240 at 8 bits), ends up clipped once rendered (at value 255). Most software will show this as clipped. But in this case, since the original information is present in the Raw data, re-massaging the transformation by recovering the highlights of the rendered image will indeed result in 'recovered' image highlight detail: it was there all along but the processing had pushed it out of bounds.

Since most clipping indicators/histograms display data relative to the rendered image only, the hapless photographer is left to guess whether the original information was blown in the Raw data irrecoverably, or whether it wasn't and therefore it is recoverable.  A typical example is a reddish flower in a green garden.  A typical colorimetric rendering will often show the red channel clipping with the green comfortably not, while in fact in the Raw data the red channel is recorded at lower values than the green, with full detail available for both.
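A tiny numerical illustration of that flower case (the matrix is invented for the example, not a real camera profile): the raw triplet is nowhere near saturation, but the colorimetric transform pushes the red channel past the top of the output range, which is what the rendered histogram then reports as clipped.

Code:
import numpy as np

# Invented camera-to-output colour matrix, for illustration only (rows sum to 1
# so neutrals stay neutral; real profiles differ but have the same general shape).
M = np.array([[ 1.8, -0.6, -0.2],
              [-0.3,  1.5, -0.2],
              [ 0.0, -0.4,  1.4]])

raw = np.array([0.80, 0.35, 0.30])   # saturated red flower; nothing clipped in the raw
rendered = M @ raw                   # -> [1.17, 0.225, 0.28]: red is now out of range
clipped_in_render = rendered > 1.0   # this is what the histogram/blinkies report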

Cheers,
Jack
Title: Re: really understanding clipping
Post by: bwana on June 30, 2013, 09:30:13 am
Clipping generally means this: that at least one of the channels in R, G, or B has reached its maximum value. Obviously, software doesn't "know" whether that represents a clipped value but in practice that is always the case, especially, like you mention, for larger uniform patches of maximum value. (And additionally in RAW processing the clipping point isn't a hard maximum).

Because clipping usually occurs in just 1 or 2 channels, you'll have the remaining channels and the surrounding pixels available for reconstruction.

So, you can for example blur the image, which gives you the local average color, and then use the non-clipping channels to reconstruct some luminance.  

Now, depending on the size of the clipped patch (the entire cloud, or just some specular reflections), you would need more or less blurring for determining the local color. Laplacian Transforms is another way of saying that you have several sizes of blurring available from which you can select the appropriate sized blur for reconstruction.

Is that what you were asking?

YES. Thank you.

When color and standard color spaces are introduced, things get a little more complicated.  We now no longer have a single value for each pixel, but we have three, and in order for an image to be rendered many more linear and non-linear transformations are applied to each of the three channels.  The result is that it is more likely that image information that was not blown during the capture process in the Raw data (say at value 240 at 8 bits), ends up clipped once rendered (at value 255). Most software will show this as clipped. But in this case, since the original information is present in the Raw data, re-massaging the transformation by recovering the highlights of the rendered image will indeed result in 'recovered' image highlight detail: it was there all along but the processing had pushed it out of bounds.


yes! thank you. so Bayer reconstruction clips pixel values that may have one or two components that are not clipped. This is when the little clipping triangle takes on a color, to show that only one or two particular channels are clipping.

But sometimes the clipping triangle is white and the histogram appears chopped off at 255. I interpret this case to mean that all three channels are clipped in the raw. You suspect there is more information that is lost (blown highlights). But miraculously, you can drag the slider (exposure or whites) to the left and get back 'information that was lost'. How is it possible to 'recover' highlights from the raw file if all three channels are clipped? More information seems to magically appear at the right side of the histogram. Does Bayer reconstruction (and therefore the displayed histogram) really throw out information?

ASIDE:
Does this also happen to a monochrome sensor (Leica M), which is not bound by Bayer interpolation? Also, in that (Leica M) case, there is no interpolation to drag pixel values into clipping.
Title: Re: really understanding clipping
Post by: eliedinur on June 30, 2013, 09:58:45 am
A far more significant cause of "false" clipping is the application of White Balance during the processing from Raw to RGB image. This is done by multiplying all the values in the red and blue channels, and these increases are often pretty large. A typical daylight WB will more than double red values while increasing the blue channel by around x1.4. Similarly, a tungsten light WB will double the blue values. Thus, it can easily happen that although these channels are not clipped in the Raw capture, the WB can cause apparent clipping that can be removed by reducing luminosity globally, i.e. reducing "exposure" in the Raw converter, or by applying a curve to roll off the highlights.
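In code the effect is easy to see; a small sketch using illustrative daylight multipliers (the exact factors vary per camera and per white balance setting):

Code:
import numpy as np

def apply_wb(raw, multipliers=(2.1, 1.0, 1.4)):
    """raw: float RGB data scaled 0..1. Multipliers are illustrative daylight values."""
    wb = raw * np.asarray(multipliers)
    induced = (wb > 1.0) & (raw < 1.0)    # clipped only because of the white balance
    return np.clip(wb, 0.0, 1.0), induced

# A raw red value of 0.55 is far from sensor saturation, but 0.55 * 2.1 = 1.155,
# so after daylight WB it reads as clipped; pulling exposure down in the converter
# (or rolling off the curve) brings it back below 1.0 - the detail was never lost.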
Title: Re: really understanding clipping
Post by: hjulenissen on June 30, 2013, 10:23:07 am
Any raw value > some threshold is unreliable. It may be clipped, or it may just happen to be right. The only way to be certain is to decrease the exposure slightly, and see if all values now are below that threshold.

I think that clipping single sensels is usually non-problematic, and it allows for ETTROR (exposure to the right of right) (tm), allowing less noise in the shadows. What you want to have is probably <N% clipped sensels, or <M sensels that are continuously clipped.
yes! thank you. so Bayer reconstruction clips pixel values that may have one or two components that are not clipped. This is when the little clipping triangle takes on a color, to show that only one or two particular channels are clipping.

But sometimes the clipping triangle is white and the histogram appears chopped off at 255. I interpret this case to mean that all three channels are clipped in the raw. You suspect there is more information that is lost (blown highlights). But miraculously, you can drag the slider (exposure or whites) to the left and get back 'information that was lost'. How is it possible to 'recover' highlights from the raw file if all three channels are clipped? More information seems to magically appear at the right side of the histogram. Does Bayer reconstruction (and therefore the displayed histogram) really throw out information?

ASIDE:
Does this also happen to a monochrome sensor (Leica M), which is not bound by Bayer interpolation? Also, in that (Leica M) case, there is no interpolation to drag pixel values into clipping.
I think that Bayer reconstruction/CFA is the wrong place to look for the most significant contributors.

"Color correction" and white-balance can be described as "form each output channel pixel as positive/negative weighted sums of corresponding input channel pixels". "Black-point", "Whitepoint" setting (clipping) is needed before a gamma is applied (shift of midtones). All of this would be done in a Bayer-less Foveon camera (and then some non-linear color processing). A monochrome camera would not have the color stuff, but the blackpoint/whitepoint/gamma is still relevant. Modern cameras might do fancy tonemapping (HDR) in order to make pleasing JPEGs. More complex nonlinearity thrown into the mix.

Point is: the JPEG histogram cannot be trusted if you want to know if the raw sensor channels are clipped. It is far too complex and proprietary to get anything but vague correlates of what we really want to know.

-h
Title: Re: really understanding clipping
Post by: bjanes on June 30, 2013, 12:00:52 pm
Any raw value > some threshold is unreliable. It may be clipped, or it may just happen to be right. The only way to be certain is to decrease the exposure slightly, and see if all values now are below that threshold.

I think that clipping single sensels is usually non-problematic, and it allows for ETTROR (exposure to the right of right) (tm), allowing less noise in the shadows. What you want to have is probably <N% clipped sensels, or <M sensels that are continuously clipped. I think that Bayer reconstruction/CFA is the wrong place to look for the most significant contributors.

"Color correction" and white-balance can be described as "form each output channel pixel as positive/negative weighted sums of corresponding input channel pixels". "Black-point", "Whitepoint" setting (clipping) is needed before a gamma is applied (shift of midtones). All of this would be done in a Bayer-less Foveon camera (and then some non-linear color processing). A monochrome camera would not have the color stuff, but the blackpoint/whitepoint/gamma is still relevant. Modern cameras might do fancy tonemapping (HDR) in order to make pleasing JPEGs. More complex nonlinearity thrown into the mix.

Point is: the JPEG histogram cannot be trusted if you want to know if the raw sensor channels are clipped. It is far too complex and proprietary to get anything but vague correlates of what we really want to know.

I think this response is unnecessarily complicated and pessimistic. The more sophisticated cameras have two types of histograms: Luminance and RGB individual channel (for technical details see the Cambridge in Color Tutorial (http://www.cambridgeincolour.com/tutorials/histograms2.htm)). The luminance histogram keeps track of each pixel location and is weighted according to the sensitivity of human vision to each color, with the green overrepresented and the blue much underrepresented. If we merely want to know what channels are clipped, the RGB channel histograms show the distribution of pixel values for each separate channel and are what we should be looking at to detect channel clipping. Unfortunately, these RGB values are represented after white balance and may show clipping in the red or blue channels (the blue and red WB multipliers are greater than 1) when no clipping is present in the raw channel prior to WB. One may avoid this complication by loading UniWB values for white balance into the camera; UniWB gets its name from the fact that the WB multipliers are all 1.0.

One may still have saturation clipping since Adobe RGB is the widest color space that most cameras offer and the camera sensor sensitivities are beyond what can be encoded with aRGB. Thus one may have saturation clipping in the histogram when the raw channel is not actually clipped. Such clipping is frequently observed when one is photographing highly saturated flowers.

Gamma encoding affects the midtones, but does not affect 0 or 255 pixel values, and these are what we need to detect clipping. Unfortunately, most cameras give a somewhat conservative histogram and may show clipping when the raw file values are short of clipping. One may mitigate this false clipping by using low contrast settings in the camera picture control.

If one knows his/her camera and uses these precautions, the RGB histograms do give a reasonable indication of the status of the raw channel values. Raw histograms would be much preferable, but the knowledgeable photographer can work around some of the limitations of current histograms.
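For what it's worth, here is a rough sketch of the two histogram types (the Rec.709 luma weights below are just a stand-in - the actual in-camera weighting is proprietary):

Code:
import numpy as np

def camera_style_histograms(img8):
    """img8: uint8 RGB. Per-channel histograms plus a luminance-weighted one."""
    per_channel = [np.bincount(img8[..., c].ravel(), minlength=256) for c in range(3)]
    # Rec.709 luma weights: green dominates, blue barely counts.
    luma = 0.2126 * img8[..., 0] + 0.7152 * img8[..., 1] + 0.0722 * img8[..., 2]
    luma_hist = np.bincount(np.round(luma).astype(int).ravel(), minlength=256)
    # A channel histogram can pile up at 255 (clipped) while the luminance
    # histogram still looks comfortably short of the right edge.
    return per_channel, luma_hist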

Bill 

Title: Re: really understanding clipping
Post by: digitaldog on June 30, 2013, 12:02:51 pm
Remember the movie Spinal Tap, where the character wants to turn the volume control, which goes from 0 to 10, up to 11? 11 represents clipping in the real world; there's nothing there <g>.
Title: Re: really understanding clipping
Post by: jrsforums on June 30, 2013, 12:12:14 pm
I think this response is unnecessarily complicated and pessimistic. The more sophisticated cameras have two types of histograms: Luminance and RGB individual channel (for technical details see the Cambridge in Color Tutorial (http://www.cambridgeincolour.com/tutorials/histograms2.htm)). The luminance histogram keeps track of each pixel location and is weighted according to the sensitivity of human vision to each color, with the green overrepresented and the blue much underrepresented. If we merely want to know what channels are clipped, the RGB channel histograms show the distribution of pixel values for each separate channel and are what we should be looking at to detect channel clipping. Unfortunately, these RGB values are represented after white balance and may show clipping in the red or blue channels (the blue and red WB multipliers are greater than 1) when no clipping is present in the raw channel prior to WB. One may avoid this complication by loading UniWB values for white balance into the camera; UniWB gets its name from the fact that the WB multipliers are all 1.0.

One may still have saturation clipping since Adobe RGB is the widest color space that most cameras offer and the camera sensor sensitivities are beyond what can be encoded with aRGB. Thus one may have saturation clipping in the histogram when the raw channel is not actually clipped. Such clipping is frequently observed when one is photographing highly saturated flowers.

Gamma encoding affects the midtones, but does not affect 0 or 255 pixel values, and these are what we need to detect clipping. Unfortunately, most cameras give a somewhat conservative histogram and may show clipping when the raw file values are short of clipping. One may mitigate this false clipping by using low contrast settings in the camera picture control.

If one knows his/her camera and uses these precautions, the RGB histograms do give a reasonable indication of the status of the raw channel values. Raw histograms would be much preferable, but the knowledgeable photographer can work around some of the limitations of current histograms.

Bill 



Well said, Bill.  Complete without diving deep into numbers.
Title: Re: really understanding clipping
Post by: Guillermo Luijk on June 30, 2013, 06:42:25 pm
is the algorithm simply looking for consecutive pixels of the exact same tone and assigning the clipping indicator to it? (after all there is no homogeneity in the real world, right?) i guess i am asking how does software define clipping?

Very simple algorithm: all pixels reaching the saturation value in the encoding scale (or over a given threshold) are considered clipped. This algorithm will be accurate for 99%* of occurrences, which is more than good enough not to deserve making any extra effort. I'm pretty sure camera and RAW developer clipping warnings work like this.

* A non-clipped pixel may reach exactly 255, but these pixels are statistically negligible compared to actually clipped zones (i.e. pixels that would need a >255 value to be correctly encoded).

~~~

Finding clipped pixels is even more important in the RAW world since the saturation value is needed to correctly perform the RAW development, ensuring neutral white clipped highlights after white balancing. This task is very easy since every camera has a defined RAW saturation value.

In some cases it changes depending on the ISO setting, but the point is that a certain saturation value always exists.

My Canon 350D saturates at the end of its 12-bit scale, i.e. at 4095:
(http://www.guillermoluijk.com/tutorial/satlevel/hist350d.gif)

While the capricious Canon 5D saturates at 3692:
(http://www.guillermoluijk.com/tutorial/satlevel/hist5d.gif)

Other cameras (I have seen this in Panasonic/Olympus and Fuji sensors) make it a bit more difficult to find RAW-clipped pixels because saturation spreads over a range of values following a Gaussian distribution, but it's easy to choose a saturation threshold even in these cases.

For the Olympus E-P1 3584 could be a valid choice for RAW clipping:
(http://www.guillermoluijk.com/tutorial/satlevel/olyep1.gif)
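If you want to find the level for your own camera, a shot of a blown highlight and a one-liner over the raw values of one channel gets you close (a sketch, assuming you can already read the raw values out with something like RawDigger or dcraw):

Code:
import numpy as np

def estimate_saturation(raw_values, tail=0.001):
    """Estimate a raw saturation threshold from a deliberately overexposed frame.

    raw_values: 1-D array of raw DNs from one channel. Returns the level below
    which all but the top 'tail' fraction of the data falls - a crude stand-in
    for reading the saturation spike off a raw histogram by eye.
    """
    return int(np.quantile(np.asarray(raw_values), 1.0 - tail))

# On a heavily overexposed Canon 5D frame this lands near 3692 rather than the
# nominal 12-bit maximum of 4095; on a 350D it lands at 4095 itself.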


Title: Re: really understanding clipping
Post by: RFPhotography on June 30, 2013, 07:22:50 pm
It's interesting that everyone has concentrated on clipping at the upper end of the range. 
Title: Re: really understanding clipping
Post by: Guillermo Luijk on June 30, 2013, 07:27:10 pm
It's interesting that everyone has concentrated on clipping at the upper end of the range.
Clipping in the lower end doesn't exist because of the presence of noise, which follows a Gaussian distribution. Shadow clipping is always created by the software (RAW development stage, colour profile conversion, JPEG generation with some deliberate shadow clipping, ...).

In fact if you look at my RAW histograms, you'll see that Canon applies a bias to all RAW values so that none of them reaches 0. It's actually the RAW developer that chooses what RAW level is considered 0. Other brands like Nikon subtract that RAW offset, cutting the Gaussian distribution of read noise in half.

This is a RAW histogram of a shot in absence of light (darkframe) on a Canon 350D:

(http://www.guillermoluijk.com/article/rawnoise/histodark.gif)
Title: Re: really understanding clipping
Post by: bwana on June 30, 2013, 09:25:19 pm
I read this:
http://www.luminous-landscape.com/tutorials/understanding-series/u-raw-files.shtml

A 12-bit Raw file:
- Within the 1st f-stop (the brightest tones): 2048 levels available
- Within the 2nd f-stop (bright tones): 1024 levels available
- Within the 3rd f-stop (mid-tones): 512 levels available
- Within the 4th f-stop (dark tones): 256 levels available
- Within the 5th f-stop (darkest tones): 128 levels available

An 8-bit JPG file:
- Within the 1st f-stop (the brightest tones): 69 levels available
- Within the 2nd f-stop (bright tones): 50 levels available
- Within the 3rd f-stop (mid-tones): 37 levels available
- Within the 4th f-stop (dark tones): 27 levels available
- Within the 5th f-stop (darkest tones): 20 levels available
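For what it's worth, those per-stop counts fall out of simple arithmetic: linear raw data gives half of all its levels to the brightest stop, while a gamma-encoded 8-bit file spreads its levels far more evenly. A small sketch (assuming a plain 2.2 gamma, which happens to reproduce the JPG numbers above):

Code:
def raw_levels_per_stop(bits=12, stops=5):
    """Linear raw data: each successive stop down gets half as many levels."""
    total = 1 << bits
    return [total // (2 ** (s + 1)) for s in range(stops)]     # [2048, 1024, 512, 256, 128]

def gamma_levels_per_stop(bits=8, gamma=2.2, stops=5):
    """Gamma-encoded data spreads its levels far more evenly across the stops."""
    top = (1 << bits) - 1
    edges = [top * (0.5 ** s) ** (1.0 / gamma) for s in range(stops + 1)]
    return [round(edges[s] - edges[s + 1]) for s in range(stops)]   # [69, 50, 37, 27, 20]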

My interpretation is that the brightest 2048 raw tones are mapped to the brightest 69 jpg tones. 2048/69 = ~30.
If the exposure slider moves to -1, I assume that means -1 EV, so the jpg levels in the top 2 stops (69+50=119 tones) now contain the top 2048 raw tones?
Is the brightest jpg tone an average of the brightest 30 raw tones at EV 0? And at EV -1, is it the average of the brightest 15 raw tones?
When you move the exposure slider to the left, what is the raw converter doing to generate more jpg tones?
Is it spreading those top 30 RAW tones into more than the brightest jpg tones logarithmically, linearly, or some other way that I can fathom? On top of this the tone curve can be manipulated in ACR so that means there is an additional transformation that is represented.

In case anyone claims that I should do the work and figure this out for myself, I have tried.
Attached is a jpg file, its histogram from GIMP (log), its histogram from CS6, and the raw histogram from raw digger.
Why are the histograms of GIMP and CS6 so different?
Does the raw histogram indicate that the raw is a 14 bit file because it has 16000 levels?
Title: Re: really understanding clipping
Post by: bwana on June 30, 2013, 09:27:01 pm
oops, I attached the wrong PS histogram. here is the luminosity one I wanted to attach:
Title: Re: really understanding clipping
Post by: Guillermo Luijk on June 30, 2013, 09:40:27 pm
Why are the histograms of GIMP and CS6 so different?

Because of this:

Attached is a jpg file, its histogram from GIMP (log), its histogram from CS6 (...)

This is the real histogram of that JPEG file (Y axis not truncated):

(http://www.guillermoluijk.com/misc/church2_HIS.gif)

and here truncating the Y-axis to make it more easily visible:

(http://www.guillermoluijk.com/misc/church_HIS.gif)
Title: Re: really understanding clipping
Post by: RFPhotography on June 30, 2013, 10:30:03 pm
Clipping in the lower end doesn't exist because of the presence of noise, which follows a Gaussian distribution. Shadow clipping is always created by the software (RAW development stage, colour profile conversion, JPEG generation with some deliberate shadow clipping, ...).

In fact if you look at my RAW histograms, you'll see that Canon applies a bias to all RAW values so that none of them reaches 0. It's actually the RAW developer that chooses what RAW level is considered 0. Other brands like Nikon subtract that RAW offset, cutting the Gaussian distribution of read noise in half.

This is a RAW histogram of a shot in absence of light (darkframe) on a Canon 350D:

(http://www.guillermoluijk.com/article/rawnoise/histodark.gif)

Don't really know that that's a valid test.  I would expect a darkframe to show an absence of 0 value pixels.  What happens on the sensor can't be taken in isolation though because, excepting shooting JPEG, we can't use what comes off the sensor without conversion.  The entire chain has to be taken as a whole. 
Title: Re: really understanding clipping
Post by: Guillermo Luijk on June 30, 2013, 10:37:36 pm
I would expect a darkframe to show an absence of 0 value pixels.

In the Canon files there are no 0 values because of the positive bias. In Nikon files read noise is clipped by its mean value, providing 0 values (see patch 22):

(http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/D300patch14bitISO200histos.jpg)

Emil explains it in Fig. 11 (http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/noise-p2.html).
Title: Re: really understanding clipping
Post by: Guillermo Luijk on June 30, 2013, 10:43:41 pm
excepting shooting JPEG, we can't use what comes off the sensor without conversion.

Not true: this is a RAW channel of a RAW file, no conversion at all:

(http://www.guillermoluijk.com/misc/hipo.jpg)

Taking the individual RAW channels is like having a monochrome sensor, no demosaicing, no colour profiling.
Title: Re: really understanding clipping
Post by: RFPhotography on July 01, 2013, 06:49:50 am
How does one extract just a channel?  There still has to be some sort of conversion.  Raw formats aren't actual file formats.  Something still has to be done to change the raw data into a visible image.  Given the makeup of the typical sensor, how do you end up with a continuous tone image?  Why does it not have gaps?  That is, if you take just the green pixel information, why are there not gaps where the red and blue pixels would normally be? 
Title: Re: really understanding clipping
Post by: Guillermo Luijk on July 01, 2013, 08:24:45 am
How does one extract just a channel?  There still has to be some sort of conversion.  Raw formats aren't actual file formats.  Something still has to be done to change the raw data into a visible image.  Given the makeup of the typical sensor, how do you end up with a continuous tone image?  Why does it not have gaps?  That is, if you take just the green pixel information, why are there not gaps where the red and blue pixels would normally be?

There are no gaps because you only take the pixels of the desired channel: R, G1, G2 or B. Maybe this illustrates:

(http://www.guillermoluijk.com/article/rawnoise/extraccion2.gif)
Left: Bayer mosaic, Right: just the B channel

There is no conversion, just put the chosen RAW numbers on a bitmap and save it as any desired image format. RAW numbers are visible in a straightforward way, and a RAW histogram can be plotted from those numbers as well, without any image existing at all.
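In code, pulling one channel out of the mosaic is nothing more than 2x2 subsampling (a sketch; the CFA layout below assumes an RGGB pattern, which varies by camera):

Code:
import numpy as np

def extract_bayer_channel(raw, channel="B", pattern="RGGB"):
    """Pull one CFA channel out of a Bayer mosaic by plain 2x2 subsampling.

    raw: 2-D array of raw values. No demosaicing and no colour profile; the
    result is a quarter-size single-channel plane (or histogram fodder).
    """
    offsets = {"RGGB": {"R": (0, 0), "G1": (0, 1), "G2": (1, 0), "B": (1, 1)}}
    dy, dx = offsets[pattern][channel]
    return raw[dy::2, dx::2]

# plane = extract_bayer_channel(raw_mosaic, "B")
# hist = np.bincount(plane.ravel())   # a true raw histogram, no rendered image needed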
Title: Re: really understanding clipping
Post by: Vladimirovich on July 01, 2013, 09:14:53 am
Something still has to be done to change the raw data into a visible image. 
the same is true for JPG, etc... something has to be done to change the data inside a .jpg file into a visible image... you even have less math to do to display something visible from most raw files than from a .jpg
Title: Re: really understanding clipping
Post by: RFPhotography on July 01, 2013, 10:26:23 am
There are no gaps because you only take the pixels of the desired channel: R, G1, G2 or B. Maybe this illustrates:

(http://www.guillermoluijk.com/article/rawnoise/extraccion2.gif)
Left: Bayer mosaic, Right: just the B channel

There is no conversion, just put the chosen RAW numbers on a bitmap and save it as any desired image format. RAW numbers are visible in a straightforward way, and a RAW histogram can be plotted from those numbers as well, without any image existing at all.


OK, but you can't use it straight out of the camera.  There is still some intermediary work that has to happen. 

Vladimirovic, yes software has to 'read' the JPEG but a JPEG can still be used straight out of the camera.  A raw file can't.  JPEG is an actual image file format.  Raw files aren't.
Title: Re: really understanding clipping
Post by: Guillermo Luijk on July 01, 2013, 10:30:48 am
There is still some intermediary work that has to happen.

Oh yes, and to view a JPEG file intermediary work has to happen: a software engine must run decompression algorithms (https://en.wikipedia.org/wiki/JPEG) to convert the JPEG file numbers (which are not an image) into a displayable bitmap. In fact a JPEG file stores frequency values rather than the spatial luminosity values that the RAW file consists of, so strictly speaking RAW data are closer to a final image than the JPEG data.
Title: Re: really understanding clipping
Post by: Vladimirovich on July 01, 2013, 11:49:50 am
OK, but you can't use it straight out of the camera.  There is still some intermediary work that has to happen.  

see the reply from GL above...

Vladimirovic, yes software has to 'read' the JPEG but a JPEG can still be used straight out of the camera.  A raw file can't.  JPEG is an actual image file format.  Raw files aren't.

used by what exactly? By some software that knows how to make the 0s & 1s in a JPG file appear on your screen in the form of an image that you can understand (what was in that shot)  ;D ... I have some news for you - the same works for raw files as well: you also need some software that knows how to make the 0s & 1s in a raw file appear on your screen... there is no difference between "JPG" and "raw" except your wrong perception that JPG has an image and raw does not... the mere fact that your web browser or whatever does not know how to display an image stored in raw does not make it a non-image.
Title: Re: really understanding clipping
Post by: xpatUSA on July 01, 2013, 01:56:59 pm
sure, clipping is when your capture/display device does not display the full range of luminosity/color present in reality. but isn't that a uniquely human judgement? after all, how does the software know that the white of a cloud is clipped and it really wasn't that white? we know because the cloud in reality has many tonal variations and when we see a cloud that looks like a white paper cutout, we say the cloud had its whites clipped. but how does software know to put blinkies in the cloud?  or for that matter, how does a camera know to do that in the lcd or its electronic viewfinder? if the camera can 'show' that it's not capturing certain tones, then how is it detecting those tones?

is the algorithm simply looking for consecutive pixels of the exact same tone and assigning the clipping indicator to it? (after all there is no homogeneity in the real world, right?) i guess i am asking how does software define clipping?

Perhaps better understanding can be gained by considering scene capture by the sensor and conversion to a color image as separate subjects? And the use of the word 'clipping' itself can be questionable, IMHO.

The sensor is sometimes said to have a linear gain characteristic (curve) with respect to incident illuminance but we all know that it does not - it has an 'S' shaped curve with a fairly linear portion in the middle. For example, one of my cameras has a sensor well capacity of 77,000 electrons but is stated to have acceptable linearity in the range of 40,000 electrons or so. Thus we see that there is a 'headroom' of some 37,000 electrons, but with decreasing gain (electrons/lux-sec) as the sensor approaches saturation. For such a camera, any level between 40 and 77 thousand electrons could be chosen for a 'clipping' signal but clipping per se does not occur. Even 77,000 is only an average value for saturation, varying as it does according to the laws of probability and tolerances of sensor manufacture.

Onwards to consideration of conversion of the sensor signals to a color image. Taking an example of flower shots with their highly saturated colors, often shot in bright conditions, most sensors will successfully capture most of the reflected color gamut. However, one finds that many of the captured colors are outside of the gamut covered by the RGB or CMYK color spaces used for image output - monitors, printers, OLED's, etc. Unfortunately for highly saturated images, the process of conversion uses color compression (perceptual) or just plain color clipping (colorimetric), which this time is real clipping of a digital nature. So an on-board JPEG histogram, or a blinkie function, will show clipping when the sensor signals themselves are not clipped. Thus the camera is doing its internal conversion to JPEG (sRGB or aRGB) and is showing when the conversion is clipping, not when the sensor is saturated. Indeed, this is the basis of 'highlight recovery'.

Even if a RAW image is shot and the file converted into a wide-gamut working space such as Kodak RIMM/ROMM (ProPhoto), or good old Adobe "Melissa", the time comes when the image must be converted into a smaller color gamut which can often be clipping time. For yellow flowers, I find that blues are often reduced to zero and reds increased to 255 when converting from either RAW or ProPhoto to sRGB and thus are truly clipped and not easily retrievable, if at all.



Title: Re: really understanding clipping
Post by: bjanes on July 01, 2013, 05:33:40 pm
Perhaps better understanding can be gained by considering scene capture by the sensor and conversion to a color image as separate subjects? And the use of the word 'clipping' itself can be questionable, IMHO.

The sensor is sometimes said to have a linear gain characteristic (curve) with respect to incident illuminance but we all know that it does not - it has an 'S' shaped curve with a fairly linear portion in the middle. For example, one of my cameras has a sensor well capacity of 77,000 electrons but is stated to have acceptable linearity in the range of 40,000 electrons or so.

Xpat,

You have one strange sensor that gives a sigmoidal characteristic curve. How did you determine this? Most digital sensors are linear, and this can be shown by photographing a step wedge and observing the pixel values in Rawdigger. For example, I used the Stouffer T4110, which has density steps of 0.10, corresponding to 1/3 EV. With the current version of Rawdigger one can superimpose a grid over the wedge and take readings and save them for analysis in Excel or some other program.

Here is the wedge with the grid:
(http://bjanes.smugmug.com/Photography/Stouffer-Rawdigger/i-ZPJFRVQ/0/O/RD_Grid.png)

And here are the pixel values.
(http://bjanes.smugmug.com/Photography/Stouffer-Rawdigger/i-ph8npjz/0/O/RD_samples.png)
Note that the green channels are clipped in the two brightest steps as shown by maximal pixel values of 15778 and decreased standard deviations (when the channel is completely clipped the SD is zero, and clipping begins when the right side of the bell shaped curve imposed by shot noise reaches the clipping point of the sensor).

Here is the plot from Excel for the Green1, red, and blue channels. The red and blue are not clipped and are linear within the limits of the wedge and illumination. The green channel is entirely clipped in step 1 and partially clipped in step 2.
(http://bjanes.smugmug.com/Photography/Stouffer-Rawdigger/i-hpFCLTG/0/O/RD_Results.png)
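For anyone who wants to repeat the analysis, a sketch of the arithmetic (the data layout is hypothetical and the saturation level is just the 15778 figure quoted above):

Code:
import numpy as np

def check_linearity(step_means, step_sds, sat_level=15778):
    """step_means/step_sds: per-step raw statistics off the wedge, brightest step first.

    With 0.10D (1/3 EV) steps, a linear sensor shows log2(mean) falling by about
    1/3 per step; clipped steps sit near the saturation level with a collapsing
    standard deviation, so they are excluded from the fit.
    """
    means = np.asarray(step_means, dtype=float)
    sds = np.asarray(step_sds, dtype=float)
    clipped = (means > 0.98 * sat_level) | (sds < 1.0)
    steps = np.arange(len(means))
    slope, _ = np.polyfit(steps[~clipped], np.log2(means[~clipped]), 1)
    return slope, clipped    # slope of about -1/3 stop per step if the response is linear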

Regards,

Bill
Title: Re: really understanding clipping
Post by: RFPhotography on July 01, 2013, 06:25:17 pm
GL and Vladimirovich, you both know full well what I mean and am saying.  You both are also being purposefully obtuse. I have neither the time nor the patience.
Title: Re: really understanding clipping
Post by: Vladimirovich on July 01, 2013, 07:29:26 pm
GL and Vladimirovich, you both know full well what I mean and am saying. 

yes, we know - you decide (for your own convenience) to define an image only as something that certain programs of your own choice can help you see on your screen as a "beautiful" picture... that's it...
Title: Re: really understanding clipping
Post by: xpatUSA on July 01, 2013, 10:02:42 pm
Xpat,

You have one strange sensor that gives a sigmoidal characteristic curve. How did you determine this? Most digital sensors are linear and this can be shown by photographing a step wedge and observing the pixel values in Rawdigger.

Bjanes (or may I call you Bill?),

Funny you should say "sigmoidal"! The sensor I quoted is in fact the Foveon F7 as used in the Sigma SD9 DSLR, and the numbers came from a paper by Gilblom, et al, "Operation and performance of a color image sensor with layered photodiodes":

Quote
4.3 Performance
The total quantum efficiency of the F7 at 625nm is approximately 49% including the effects of fill factor. Total quantum efficiency is over 45% from about 530nm to beyond 660nm. Testing is underway to establish the limits of wavelength response. The F7 is expected to have useful sensitivity extending from below 300nm to 1000nm or higher. Well capacity is approximately 77,000 electrons per photodiode but the usual operating point (for restricted nonlinearity) corresponds to about 45,000 electrons. Photo response non-uniformity (PRNU) is less than ±1%.

I see that, in my earlier post, I said 40,000 electrons not 45,000 as above. Poor memory, sorry.

I have neither Stouffer wedges nor RawDigger, so I myself have determined nothing. Perhaps I was misled by the somewhat sigmoidal graphs for DR found in places like this http://www.dpreview.com/reviews/sigmasd1/12. They're nowhere near as linear as yours.



Title: Re: really understanding clipping
Post by: Bart_van_der_Wolf on July 02, 2013, 04:27:46 am
I have neither Stouffer wedges nor RawDigger, so I myself have determined nothing. Perhaps I was misled by the somewhat sigmoidal graphs for DR found in places like this http://www.dpreview.com/reviews/sigmasd1/12. They're nowhere near as linear as yours.

Hi Ted,

Those curves are the result of gamma correction and tone-mapping (and with the influence of the lens, veiling glare and such). In a practical sense, the sensors in most cameras are close to perfectly linear in their response to light. This is also the result of the camera electronics/ADC that use a Black-point and White-point on the response curve that produce such linearity. Since we can only access the data as recorded after the ADC, it's all that matters in a practical sense.

Cheers,
Bart
Title: Re: really understanding clipping
Post by: Jack Hogan on July 02, 2013, 05:08:22 am
the use of the word 'clipping' itself can be questionable, IMHO

Hi Ted,

I like to think of 'Blowing' as applying to photosites and Full Well Count, a function of Exposure - if we saturate the photosites any highlights higher than that are gone forever.

Conversely I like to think of 'Clipping' as possibly also being caused by processing - we've reached the upper limit of the Raw or other data in the processing chain, while perhaps information at the sensor is not 'Blown', so possibly recoverable with different processing.

Jack
Title: Re: really understanding clipping
Post by: hjulenissen on July 02, 2013, 06:22:45 am
Vladimirovic, yes software has to 'read' the JPEG but a JPEG can still be used straight out of the camera.  A raw file can't.  JPEG is an actual image file format.  Raw files aren't.
What is it that you are trying to prove?

A JPEG-file is a file. A raw file is a file. Humans cannot look at files as if they were images. Files have to be decoded, interpreted and rendered in order to make sense as a visual thingy. The JPEG format is notorious for being open for interpretation by the decoder (the Independent Jpeg Group produce a decoder that is the de facto reference for how a given Jpeg file should decode into raw rgb values, simply because the standard can't easily/reliably be used for that). Then you have to convert something akin to a *.bmp file into colored, space-variant luminance/reflectance that is to appear in front of the viewer.

A raw file developed by the manufacturers own raw developer tends to look visually nearly identical with the jpeg file coming directly from the camera.
...What happens on the sensor can't be taken in isolation though because, excepting shooting JPEG, we can't use what comes off the sensor without conversion.  The entire chain has to be taken as a whole. 
We can do statistics on raw files that can be interpreted in isolation to predict how a differently exposed raw file of the same scene would behave. That is clumsy language for saying that raw histograms are all that is needed for adjusting exposure, as long as you are shooting raw only.

You have been shown that an image can be formed by sampling one pixel value in every 2x2 raw sensel array. The fact that you seemingly did not know this makes me think that your confident assertions about raw files are poorly founded.

-h
Title: Re: really understanding clipping
Post by: sandymc on July 02, 2013, 08:51:36 am
Bjanes (or may I call you Bill?),

Funny you should say "sigmoidal"! The sensor I quoted is in fact the Foveon F7 as used in the Sigma SD9 DSLR, and the numbers came from a paper by Gilblom, et al, "Operation and performance of a color image sensor with layered photodiodes":

I see that, in my earlier post, I said 40,000 electrons not 45,000 as above. Poor memory, sorry.

I have neither Stouffer wedges nor RawDigger, so I myself have determined nothing. Perhaps I was misled by the somewhat sigmoidal graphs for DR found in places like this http://www.dpreview.com/reviews/sigmasd1/12. They're nowhere near as linear as yours.

In this case, the clip level would be 40,000 - that is, the sensitivity of the A/D converter would be set such that its maximum corresponds to the equivalent of 40,000 electrons. E.g., for a 12-bit camera, 4095 = 40,000 electrons. This is done exactly to keep the sensor operating in its linear range.
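As a trivial sketch of that scaling (illustrative numbers only):

Code:
def electrons_to_dn(electrons, full_scale_e=40_000, bits=12):
    """Map an electron count to an ADC code when full scale is pinned at 40k e-."""
    top = (1 << bits) - 1                       # 4095 for a 12-bit converter
    dn = round(electrons / full_scale_e * top)
    return min(dn, top)                         # anything past 40k e- clips at 4095

# electrons_to_dn(20_000) -> 2048, electrons_to_dn(77_000) -> 4095 (clipped)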

Sandy
Title: Re: really understanding clipping
Post by: xpatUSA on July 02, 2013, 10:37:33 am
Thanks Gentlemen for setting me straight on linearity as it appears in in files versus how it is at the sensor.

I take it then that camera sensors are different to photodiodes, with respect to saturation? Some Googling seems to indicate that ordinary photodiodes, unlike sensor photodiodes, do have a soft saturation characteristic, for example:

(http://kronometric.org/phot/sensor/diodeSat.gif)

Which I naively assumed would also apply to camera sensors  ???

Quote
the sensitivity of the A/D converter would be set such that maximum would be at the equivalent of [45,000]. E.g., for a 12-bit camera, 4095 = [45,000] electrons. This is done exactly to keep the sensor operating in its linear range.

Sandy,

The early Sigma cameras work a little differently in that respect. The sensor has three analog outputs which are presented to three A/D converters. Although the converters themselves are 12-bit, the camera firmware outputs higher bit-count numbers (14 or 16?) to the X3F raw file, thus giving numbers much greater than 4095 decimal, e.g. somewhere around 10,000 for a saturated sensor. But the firmware also sends 3 metadata tags called 'saturation' (somewhere between 5,500 and 7,000) - one for each of the three channels - presumably for some arcane use by the raw converter and, being Sigma, that use will probably not be obvious. I think they are also used in-camera for the LCD preview image (an sRGB JPEG thumbnail), which can show blinkies if so selected.

Off topic, but the Foveon sensors do also get quite a lot of trimming on-sensor by means of mainboard-generated sensor inputs - presumably due to production variability of the technology in practice.
Title: Re: really understanding clipping
Post by: xpatUSA on July 02, 2013, 11:58:48 am
Vladimirovic, yes software has to 'read' the JPEG but a JPEG can still be used straight out of the camera.  A raw file can't.  JPEG is an actual image file format.  Raw files aren't.

Hi Bob,

The trouble with all-encompassing statements is that they can sometimes be wrong.

Here is an image made from a Sigma X3F raw data file with no interpolation (i.e. no de-mosaicing).

Of course, to appear on our monitors, it has been gamma'd and brightened . . . and yes, it's a JPEG but it looked just the same as a TIFF on my screen. It was produced by DCraw in "document" mode (no interpolation).

(http://kronometric.org/phot/sensor/mcb.jpg)

I do see what you're driving at, but the days are gone when double-clicking on a raw file only worked with the camera manufacturer's RAW converter.

Title: Re: really understanding clipping
Post by: Rand47 on July 02, 2013, 02:10:35 pm
All this discussion about what is an "image file" and what isn't is quite funny.  Ones and zeros my friends, ones and zeros.

Prints are images... the rest are ones and zeros handled by one kind of software or another to "read them, manipulate them, display them" - and as such are volatile and do not exist in the real world.  ;D

Rand
Title: Re: really understanding clipping
Post by: Guillermo Luijk on July 03, 2013, 09:37:51 am
Prints are images... the rest are ones and zeros handled by one kind of software or another to "read them, manipulate them, display them" - and as such are volatile and do not exist in the real world.  ;D

Not only prints are images. Look, you are about to see an image that has never been printed...


(http://www.guillermoluijk.com/misc/coco.jpg)

voilà!!!
Title: Re: really understanding clipping
Post by: Oldfox on July 03, 2013, 10:38:27 am
touché!
Title: Re: really understanding clipping
Post by: Jim Kasson on July 03, 2013, 11:18:35 am
This topic has caused me to think about something that I’d considered settled. As often happens, this kind of thinking has made me question what I thought I knew, and raised as many issues as it’s settled.

I’ve created a list of file types in order of increasing processing required for rendering, and also in order of decreasing determinism – the types later in the list tend to have a wider variety of acceptable renderings. The two qualities are not perfectly correlated, and I’ve had to apply subjective weightings to, if you will, turn a vector ranking into a scalar one. The distinctions are rough, as there are so many file formats that, once processed, result in images.

A. Raster-arranged files in the color space of the intended output device. Examples are gray gamma 2.2 for a particular monitor, sRGB, one of the SWOP CMYK standards. In the case where an offset press is the output device, the data in the file is halftoned. In the case where an inkjet printer is the output device, the data is also halftoned – this image form is hardly ever saved on a disk except for spooling.

B. Raster-arranged files with tags enabling appropriate processing for a range of output devices. Examples are PSD, many TIFF variants with ICC profiles attached, and raw files. Raw files sometimes need data that’s not in the tags for acceptable processing.

C. Quasi-raster-arranged files with non-raster data included. Examples are the discrete cosine transform coding of the original JPEG, the wavelet coding of JPEG 2000, and, I believe, all of the MPEG video formats.

D. Vector files with or without raster elements. May contain data types that require artful interpretations to rasterize. Examples are Adobe PostScript, Adobe Acrobat, native Adobe Illustrator files.

E. Formatted text files with embedded graphics elements. Examples are Microsoft Word files, TeX files, RTF files. InDesign files could go here or in the category above.

F. Plain text files, or files that do not uniquely specify the ultimate raster image. Examples are TXT files or early HTML. HTML has been heading in the direction of category D of late.

I think most people would agree that category A files are image files. I think most people would agree that category F files are not.
 
Where do you all think the line should be drawn? Or is it enough to agree that, in this context, image file is not a binary term?

Title: Re: really understanding clipping
Post by: 32BT on July 03, 2013, 12:47:59 pm
I’ve created a list of file types in order of increasing processing required for rendering,

Simply make a distinction between: "decoding", and "interpretation".

A: fully decoded and interpreted, ready for output

B: already decoded, needs interpretation for output

C: needs decoding and interpretation for output

RAW files can be considered a form of lossy encoding, and hence would fall in category C. (This would also hold true in your categories imo).

Strictly speaking, all file formats fall into category C, as they are only containers for raster data, which needs to be extracted to be displayed. But more important is the idea of "compositing" as distinct, since PSD, TIFF, PDF, and PostScript can contain data for several images. A composition obviously needs to be interpreted (rendered) before display is possible.

You can obviously make higher levels by including "encoding" steps and "interpretation" tags, that is: a text file can be encoded as postscript with colorprofiles, which can then be decoded and interpreted by a RIP.


Title: Re: really understanding clipping
Post by: xpatUSA on July 03, 2013, 12:58:58 pm
I’ve created a list of file types in order of increasing processing required for rendering, and also in order of decreasing determinism – the types later in the list tend to have a wider variety of acceptable renderings.. . . . .

F. Plain text files, or files that do not uniquely specify the ultimate raster image. Examples are TXT files or early HTML. HTML has been heading in the direction of category D of late.

I think most people would agree that category A files are image files. I think most people would agree that category F files are not.
 
Where do you all think the line should be drawn? Or is it enough to agree that, in this context, image file is not a binary term?

The mind of an older person such as myself flashes back to those "computer-generated" images formed from text-only files that, from a distance, looked like Marilyn Monroe, President Lincoln or even oneself.

So perhaps an 'image file' is simply one that produces a likeness when rendered by whatever means?

That would lead to the conclusion that any file type can be an image file provided that software and equipment exists to render it to our eyes. Thus I could invent a file type ".bit" (assuming it doesn't already exist) that arranged the bits in the bytes, taken sequentially, to form a black and white image when suitably decoded and rendered.

Title: Re: really understanding clipping
Post by: Jim Kasson on July 03, 2013, 01:37:40 pm
The mind of an older person such as myself flashes back to those "computer-generated" images formed from text-only files that, from a distance, looked like Marilyn Monroe, President Lincoln or even oneself.

Ted, I'd call those category A, since they are rasterized and only produce the intended result on a specific (or a narrow set of compatible) output devices. The characters are basically halftoning glyphs.

But the more you dig into this, the messier it gets.

Jim
Title: Re: really understanding clipping
Post by: Jack Hogan on July 03, 2013, 03:49:34 pm
RAW files can be considered a form of lossy encoding

Are you putting them in the same category as other lossy file formats like Jpeg?  That would be misleading.  Raw files of the appropriate form (losslessly or perceptually coded) with appropriate metadata represent image information entropy: no other file format is more efficient at storing the full information captured.

Jack
Title: Re: really understanding clipping
Post by: 32BT on July 03, 2013, 04:24:41 pm
Are you putting them in the same category as other lossy file formats like Jpeg?  That would be misleading.  Raw files of the appropriate form (losslessly or perceptually coded) with appropriate metadata represent image information entropy: no other file format is more efficient at storing the full information captured.

I was referring to RAW bayer data, which inherently lacks information.

Clearly, Monochrome and Sigma type captures could potentially be considered complete and might only require basic color interpretation.
Title: Re: really understanding clipping
Post by: Vladimirovich on July 03, 2013, 05:08:46 pm
I was referring to RAW Bayer data, which inherently lacks information.

Clearly, Monochrome and Sigma-type captures could potentially be considered complete and might only require basic color interpretation.

color interpretation is not necessary for something to be considered an image... for a "beautiful" image, it may be.
Title: Re: really understanding clipping
Post by: 32BT on July 03, 2013, 05:20:03 pm
color interpretation is not necessary for something to be considered an image... for a "beautiful" image, it may be.

I understand what you're saying, and that is exactly what the word "might" means in English.

But you should also consider the act of displaying image data straight to a display as a color interpretation. That is: the primaries of the image data are then assumed to equal display primaries (or B&W response). Whether that is actually true or intentional is completely irrelevant for the purpose of categorisation.
Title: Re: really understanding clipping
Post by: Rand47 on July 03, 2013, 08:06:00 pm
I yield to your expertise, and the cute pooch!   ;D

Rand
Title: Re: really understanding clipping
Post by: hjulenissen on July 04, 2013, 04:10:02 am
I was referring to RAW Bayer data, which inherently lacks information.

Clearly, Monochrome and Sigma-type captures could potentially be considered complete and might only require basic color interpretation.
There are upper bounds on the amount of information that can be present in a finite-size file. I am no expert on the entropy of the physical world, but I would guess that any scene can be expected to contain a "virtually infinite" amount of information, at least in the practical sense that the power of an elephant seems infinite compared to that of a mouse.

Any camera does some serious information reduction. Fine spatial information is bundled into "pixels", after being smeared by the lens and filters. Fine spectral information is bundled into "r"/"g"/"b" (or some poor substitute in the case of Foveon/achromatics). Intensity information is partly buried in read noise or clipped, then discretized into a finite number of ADC codes. The 3D scene is rendered into a 2D representation where occluded objects are forever lost (except in wishful Hollywood movies). Information about true movement is either smeared into a blur, or never even recorded because the photographer hit the button a bit late.
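
To put toy numbers on just the intensity step, a small sketch (Python/numpy; the full-well value, read noise and 12-bit ADC are arbitrary assumptions, not any particular camera):

    import numpy as np

    rng = np.random.default_rng(0)
    scene = np.geomspace(1, 1e6, 8)     # scene "intensities" spanning a huge range
    full_well = 60000.0                 # electrons at which the sensor clips (assumed)
    read_noise = 5.0                    # electrons RMS (assumed)

    electrons = np.minimum(scene, full_well)                 # highlights clip
    electrons = electrons + rng.normal(0.0, read_noise, 8)   # shadows sink into noise
    dn = np.clip(np.round(electrons / full_well * 4095), 0, 4095).astype(int)
    print(dn)   # a million-to-one scene squeezed into a handful of 12-bit codes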

I don't really see what this has to do with the previous claims that it is not possible/sensible to base histograms on raw files because raw files are not really files (now, that is a contradictory statement). Whatever terminology we choose to use, raw (or raw-like) information does tell us a lot about the relationship between the current scene, the camera settings and the sensor's limitations, and we can use that to select better camera settings. Isn't that the core issue?

-h
Title: Re: Re: Re: really understanding clipping
Post by: Guillermo Luijk on July 04, 2013, 05:32:24 am
The 3D scene is rendered into a 2D representation where occluded objects are forever lost (except in wishful Hollywood movies).

BLADE RUNNER!
Title: Re: Re: Re: really understanding clipping
Post by: hjulenissen on July 04, 2013, 05:41:40 am
BLADE RUNNER!
Blade Runner scene. Gotta love the camera-shutter-like (or is it slide-projector?) noise that accompanies all the processing steps :-)
http://www.youtube.com/watch?v=qHepKd38pr0

Enemy of the State (rotate):
Now it is DOS-style computer "bleeps".
http://www.youtube.com/watch?v=3EwZQddc3kY

CSI-style enhance:
http://www.nuk3.com/gotgames/1456.jpg
Title: Re: really understanding clipping
Post by: bjanes on July 04, 2013, 07:25:55 am
I don't really see what this has to do with the previous claims that it is not possible/sensible to base histograms on raw files because raw files are not really files (now, that is a contradictory statement). Whatever terminology we choose to use, raw (or raw-like) information does tell us a lot about the relationship between the current scene, the camera settings and the sensor's limitations, and we can use that to select better camera settings. Isn't that the core issue?

+1 on this topic concerning the non-file status of raw files -- that was a meaningless diversion. Clipping can be introduced by processing of a raw file, during white balance for example. However, this complication can be eliminated by using WB multipliers of less than unity. By the same token, clipping can be produced by editing a JPEG file.
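
A one-pixel sketch of that point (Python/numpy with made-up multipliers): the same raw data either clips or survives depending only on how the WB gains are normalized.

    import numpy as np

    raw = np.array([0.97, 0.95, 0.96])   # nearly saturated, nearly neutral raw RGB
    wb  = np.array([2.0, 1.0, 1.5])      # illustrative daylight multipliers

    naive = np.clip(raw * wb, 0.0, 1.0)  # R and B hit the ceiling: highlight data is lost
    safe  = raw * (wb / wb.max())        # largest multiplier becomes 1: nothing new clips
    print(naive, safe)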

Bill
Title: Re: really understanding clipping
Post by: Jack Hogan on July 04, 2013, 05:36:46 pm
+1 on this topic concerning the non-file status of raw files -- that was a meaningless diversion. Clipping can be introduced by processing of a raw file, during white balance for example. However, this complication can be eliminated by using WB multipliers of less than unity. By the same token, clipping can be produced by editing a JPEG file.

Bill

Yes, and clipping can also be introduced in the Raw file by ISO via analog and/or digital gain.

Jack
Title: Re: really understanding clipping
Post by: fdisilvestro on July 04, 2013, 07:19:48 pm
Clipping can be introduced by processing of a raw file, during white balance for example. However, this complication can be eliminated by using WB multipliers of less than unity. By the same token, clipping can be produced by editing a JPEG file.

Bill

Using WB multipliers of less than one is fine unless you have clipping in the raw values. In that case, the resulting values will not be clipped and you might end up with a color cast in those highlights, which will also not be considered by recovery algorithms.

Even if you don't get clipped values after WB, you might get clipping because of color space encoding (red flowers anyone?). It is not always easy to know if your clipping is because of overexposure/white balance or out of gamut color. What I think is wrong is to underexpose to compensate for out of gamut issues.
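
A small numeric illustration of the first point (made-up numbers, Python/numpy): once a clipped channel has been scaled below the ceiling, clipping warnings and recovery code see "valid" data even though the hue is wrong.

    import numpy as np

    recorded = np.array([1.00, 0.80, 0.55])   # R was clipped by the sensor; G and B are genuine
    wb       = np.array([1.8, 1.0, 2.2])      # illustrative tungsten-ish multipliers

    scaled = recorded * (wb / wb.max())       # -> roughly [0.82, 0.36, 0.55]
    # Nothing touches 1.0 any more, so no blinkies and no highlight recovery,
    # yet R is too low relative to G and B: the highlight carries a color cast.
    print(scaled)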

Regards
Title: Re: really understanding clipping
Post by: bjanes on July 04, 2013, 09:06:21 pm
Using WB multipliers of less than one is fine unless you have clipping in the raw values. In that case, the resulting values will not be clipped and you might end up with a color cast in those highlights, which will also not be considered by recovery algorithms.

Yes, that is true. One should avoid exposing to the right to the extent of incurring channel clipping. However, many highlights are nearly neutral, and recovery algorithms that recover to neutral can often avoid a color cast.
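
One crude version of "recover to neutral" might look like this (a sketch only, not any particular converter's algorithm): every pixel with a clipped channel is forced to grey, which at least avoids a colored cast in the highlight.

    import numpy as np

    def recover_to_neutral(rgb, clip=1.0):
        # rgb: float image of shape (H, W, 3) with values in [0, 1]
        out = rgb.copy()
        clipped = (rgb >= clip).any(axis=-1)   # pixels with at least one clipped channel
        level = rgb[clipped].max(axis=-1)      # brightness estimate for those pixels
        out[clipped] = level[:, None]          # R = G = B -> neutral highlight
        return out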

Even if you don't get clipped values after WB, you might get clipping because of color space encoding (red flowers anyone?). It is not always easy to know if your clipping is because of overexposure/white balance or out of gamut color. What I think is wrong is to underexpose to compensate for out of gamut issues.

I agree that one should not underexpose to avoid saturation clipping, but rather one should render into a wider color space. Unfortunately, most cameras do not allow ProPhotoRGB. As I mentioned earlier, UniWB is useful to avoid clipping due to WB.

Bill
Title: Overexposure vs Out of Gamut
Post by: Jack Hogan on July 05, 2013, 04:31:04 am
It is not always easy to know if your clipping is because of overexposure/white balance or out of gamut color. What I think is wrong is to underexpose to compensate for out of gamut issues.

This is an interesting comment, worth spending some time on imo. It entails understanding (http://graphics.stanford.edu/courses/cs178-13/applets/locus.html) and visualizing (http://www.brucelindbloom.com/WorkingSpaceInfo.html#Viewer) color spaces (camera, colorimetric and human) in 3D, something most people (including myself) haven't done much of. Can you elaborate Francisco?

Jack
Title: Re: Overexposure vs Out of Gamut
Post by: 32BT on July 05, 2013, 05:22:32 am
This is an interesting comment, worth spending some time on imo. It entails understanding (http://graphics.stanford.edu/courses/cs178-13/applets/locus.html) and visualizing (http://www.brucelindbloom.com/WorkingSpaceInfo.html#Viewer) color spaces (camera, colorimetric and human) in 3D, something most people (including myself) haven't done much of. Can you elaborate Francisco?

Jack

Totally irrelevant, because during the capture stage "overexposure" and "out-of-gamut" are exactly the same thing. This is only relevant in post-processing, where the potential damage of incorrect exposure is already done.

Title: Re: Overexposure vs Out of Gamut
Post by: fdisilvestro on July 05, 2013, 07:14:03 am
Totally irrelevant, because during the capture stage "overexposure" and "out-of-gamut" are exactly the same thing. This is only relevant in post-processing, where the potential damage of incorrect exposure is already done.



This would be true if you consider the camera color space as the working color space, which is usually not known. I am referring to working color spaces such as ProPhotoRGB, AdobeRGB, etc.

It is possible that, without having any clipped raw channel, you get out-of-gamut colors in a standard working space. This issue becomes more pronounced as you reduce the volume of the color space, as with AdobeRGB or sRGB. Even if you use UniWB you might encounter this issue when checking the histogram or blinkies in the camera, since the largest color space available in-camera is AdobeRGB.

Regards
Title: Re: Overexposure vs Out of Gamut
Post by: 32BT on July 05, 2013, 07:35:12 am
It is possible that, without having any clipped raw channel, you get out-of-gamut colors in a standard working space.

Yeah, LDO….! That is the entire point of the whole RAW histogram discussion. What's the use of stating the obvious?

You should also be aware that the output colorspace rendition on the camera is usually a perceptual rendition, not the usual matrix conversion, so the clipping indication may still be accurate regardless of colorspace.
Title: Re: Overexposure vs Out of Gamut
Post by: fdisilvestro on July 05, 2013, 08:18:22 am
Yeah, LDO….! That is the entire point of the whole RAW histogram discussion. What's the use of stating the obvious?

I was making a comment on Bill's (bjanes) post about sources of clipping when processing raw files. I agree that this does not apply to a pure RAW histogram discussion.

You should also be aware that the output colorspace rendition on the camera is usually a perceptual rendition, not the usual matrix conversion, so the clipping indication may still be accurate regardless of colorspace.

I was not aware of this. I'm wondering how you perform a perceptual conversion to sRGB or AdobeRGB if they are matrix color spaces?
Title: Re: Overexposure vs Out of Gamut
Post by: bjanes on July 05, 2013, 10:04:48 am
This is an interesting comment, worth spending some time on imo. It entails understanding (http://graphics.stanford.edu/courses/cs178-13/applets/locus.html) and visualizing (http://www.brucelindbloom.com/WorkingSpaceInfo.html#Viewer) color spaces (camera, colorimetric and human) in 3D, something most people (including myself) haven't done much of. Can you elaborate Francisco?

Here is an example of clipping due to white balance and color space limitations, and the effect of exposure. The camera is a Nikon D3 set to AdobeRGB, but using the default picture control with normal contrast. The camera is set to 12-bit NEF. A saturated yellow flower causes clipping of the green and red channels as shown on the RGB histogram (yellow = red + green). The exposure time for the 3 shots is shown. Reducing exposure eliminates the green clipping, but the red clipping cannot be removed with exposure reduction. The raw files are not clipped. The camera luminance histogram does not show clipping and was not helpful in this case.

(http://bjanes.smugmug.com/Photography/ETTRColor/i-VG7cXHW/1/O/HistogramsComposite.png)

Looking at the 1/200 s exposure with ACR and rendering into ProPhotoRGB shows no saturation clipping. Exposure is -0.5 EV due to the baseline offset of +0.5 EV that ACR uses for the D3.

(http://bjanes.smugmug.com/Photography/ETTRColor/i-m4csDcH/0/O/11_ACR8_ProPhoto.png)

In AdobeRGB there is considerable clipping of the reds.

(http://bjanes.smugmug.com/Photography/ETTRColor/i-jmWWvNb/0/O/11_ACR8_aRGB.png)

Negative exposure compensation eliminates the clipping, but the image is quite dark.

(http://bjanes.smugmug.com/Photography/ETTRColor/i-c6ZRHGT/0/O/11_ACR8_aRGB_neg2_35.png)

Looking at the image and color spaces with ColorThink demonstrates the underlying principles. With normal exposure the yellow is a high-luminance color that falls outside the Adobe RGB gamut. The gamut of RGB color spaces shrinks as luminance increases, so lowering the luminance can bring the yellow back into gamut.

(http://bjanes.smugmug.com/Photography/ETTRColor/i-84ZnWPF/0/O/Colorthink_11_aRGBb_flat.png)
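
A tiny numeric sketch of that last point (Python/numpy, with made-up linear working-space coordinates): a yellow that is merely too bright scales back into the RGB cube when luminance is reduced, whereas a hue that lies outside the gamut shows up as a negative coordinate that no amount of darkening can remove.

    import numpy as np

    def in_gamut(rgb):
        return bool(np.all((rgb >= 0.0) & (rgb <= 1.0)))

    too_bright = np.array([1.30, 1.10, 0.05])    # saturated yellow, too luminous
    out_of_hue = np.array([0.90, 0.70, -0.10])   # hue itself outside the working space

    print(in_gamut(too_bright * 0.7))   # True: darkening brings it inside the cube
    print(in_gamut(out_of_hue * 0.7))   # False: the negative blue stays negative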

Unfortunately, the yellow is also out of the gamut of my Epson printer, as shown. For printing, one could edit the yellow to reduce its saturation, or allow clipping to occur as long as the highlight yellow detail is not completely lost. The latter approach works best for me.

Regards,

Bill
Title: Re: really understanding clipping
Post by: Jim Kasson on July 05, 2013, 11:55:36 am
Nice job, Bill. Theory and practice combined in one easy-to-follow set of screen grabs.

Jim
Title: Re: really understanding clipping
Post by: Jack Hogan on July 05, 2013, 12:51:34 pm
Fascinating examples and tools, Bill. Depending on the coordinates of the color and the shape of the color space, it looks like in some cases reducing exposure might bring a color into gamut, in others it may not, and in others again one may need to increase exposure in order to bring the color into gamut :)  Two questions:

1) Should we be talking about brightness and/or exposure?
2) If a captured color falls outside of, say, aRGB will it necessarily result in one of the three channels saturating (clipping) when rendered neutrally into aRGB's cube?

Thanks again,
Jack
Title: Re: really understanding clipping
Post by: Jim Kasson on July 05, 2013, 01:07:44 pm
2) If a captured color falls outside of, say, aRGB will it necessarily result in one of the three channels saturating (clipping) when rendered neutrally into aRGB's cube?

Jack,

The short answer is: probably. It depends on the method used to translate the raw values to aRGB. It is likely that the raw processor uses a three-by-three matrix multiply of the demosaiced raw RGB values followed by truncation of values above full scale (255 or 1, or something else entirely, depending on the normalization) and below 0. If that's the case, out-of-gamut values will be clipped, and you are right to be suspicious of any pixel where R=0 or 255, G=0 or 255, or B=0 or 255.
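
A minimal sketch of that "matrix then truncate" pipeline (Python/numpy; the 3x3 camera-to-working-space matrix is invented for illustration, not a profiled one):

    import numpy as np

    cam_to_wrk = np.array([[ 1.6, -0.4, -0.2],   # made-up camera -> working-space matrix
                           [-0.3,  1.4, -0.1],
                           [ 0.0, -0.5,  1.5]])

    def render(raw_rgb):
        out = raw_rgb @ cam_to_wrk.T    # linear colorimetric transform
        return np.clip(out, 0.0, 1.0)   # truncate below 0 and above full scale

    print(render(np.array([0.4, 0.5, 0.3])))   # in gamut: no channel pinned
    print(render(np.array([0.9, 0.9, 0.1])))   # saturated color: channels pin at 1.0 and 0.0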

More sophisticated color space conversion algorithms might compress the out-of-gamut colors so that they fall within the aRGB gamut. I don't know of raw processors that use that kind of perceptual rendering, but that doesn't  mean they're not out there.

Jim
Title: Re: really understanding clipping
Post by: Bart_van_der_Wolf on July 05, 2013, 01:27:54 pm
More sophisticated color space conversion algorithms might compress the out-of-gamut colors so that they fall within the aRGB gamut. I don't know of raw processors that use that kind of perceptual rendering, but that doesn't  mean they're not out there.

Hi Jim,

While not exactly compressing OOG colors, RawTherapee allows scaling the linear-gamma data before demosaicing. They warn that it might cause issues further down the line, but it does allow addressing some of the highlight (or shadow) and OOG clipping issues. It's up to the user to decide what trade-off, if any, is more important. Power to the user, who can also use it in a dual conversion strategy (by blending normal + OOG conversions, compressed or not).
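
Not RawTherapee's actual code, but a sketch of that dual-conversion idea (Python/numpy; the mask threshold is an arbitrary choice): a normal rendering is cross-faded into a darker, highlight-preserving one wherever the normal rendering approaches clipping.

    import numpy as np

    def blend_dual(normal, dark, threshold=0.9):
        # normal, dark: float RGB images in [0, 1], same shape, from two conversions
        near_clip = normal.max(axis=-1, keepdims=True)   # proximity to clipping
        mask = np.clip((near_clip - threshold) / (1.0 - threshold), 0.0, 1.0)
        return normal * (1.0 - mask) + dark * mask       # fade toward the darker render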

Cheers,
Bart
Title: Re: really understanding clipping
Post by: Jim Kasson on July 05, 2013, 05:52:54 pm
It's up to the user to decide what trade-off, if any, is more important. Power to the user, who can also use it in a dual conversion strategy (by blending normal + OOG conversions, compressed or not).

Thanks, Bart. I like that philosophy.

Jim
Title: Re: really understanding clipping
Post by: Jack Hogan on July 06, 2013, 03:40:59 pm
Thanks Jim, makes sense.

Jack