Luminous Landscape Forum

Raw & Post Processing, Printing => Digital Image Processing => Topic started by: jrsforums on February 15, 2013, 10:59:22 am

Title: Dynamic Range vs bit depth
Post by: jrsforums on February 15, 2013, 10:59:22 am
Is bit depth, by definition, a ceiling on the dynamic range an image can contain?

For example, a 14 bit raw image cannot contain more than 14 stops of DR.

An 8 bit jpeg, no more than 8 stops?

Thx, John
Title: Re: Dynamic Range vs bit depth
Post by: hjulenissen on February 15, 2013, 11:17:04 am
Is bit depth, by definition, a ceiling on the dynamic range an image can contain?

For example, a 14 bit raw image cannot contain more than 14 stops of DR.
I shall let the sensor experts comment on that one.

Quote
An 8 bit jpeg, no more than 8 stops?
First, a jpeg is always gamma-processed, meaning that it is comparable with something like 12-13 bits linear.

Second, there are no limits to what kind of processing can be done to a jpeg. A bracketed exposure set covering any number of stops can be tonemapped and distributed as an 8-bit jpeg. It would still contain information about a large scene dynamic range.

-h
Title: Re: Dynamic Range vs bit depth
Post by: ErikKaffehr on February 15, 2013, 12:14:09 pm
Hi,

Normally the signal from the sensor is linearly coded. In that case a bit is needed for each EV of DR.

It is possible to use non-linear coding. Leica does it, and perhaps some other compressed formats do too. So you can cram any dynamic range into an encoded format, but an 8-bit coding will still only hold 256 values per channel.

Best regards
Erik

Is bit depth, by definition, a ceiling on the dynamic range an image can contain?

For example, a 14 bit raw image cannot contain more than 14 stops of DR.

An 8 bit jpeg, no more than 8 stops?

Thx, John
Title: Re: Dynamic Range vs bit depth
Post by: Tim Lookingbill on February 15, 2013, 12:19:04 pm
Quote
Is bit depth, by definition, a ceiling on the dynamic range an image can contain?

Only if you can relate this to USEABLE detail captured within a scene's dynamic range, and so far no one has SHOWN a direct relationship, which makes discussions on this topic similar to using Einstein's theory of relativity to physically prove we can time travel. The math makes sense, but the energy required to do it makes it impossible without the person becoming vaporized.

It's a neat story, though, just like this one.
Title: Re: Dynamic Range vs bit depth
Post by: hjulenissen on February 15, 2013, 12:23:41 pm
I think that the take-away point is that for most cameras, "noise" seems to dominate "posterization". It is tempting to use this to conclude that cameras tend to have a sufficient number of bits.

Note that the ADC may very well generate noise internally before it comes around to actually deciding on a discrete code. And the distinction between "sensel", "analog amplification" and "ADC" may be blurry.

-h
Title: Re: Dynamic Range vs bit depth
Post by: Tim Lookingbill on February 15, 2013, 12:37:51 pm
I think that the take-away point is that for most cameras, "noise" seems to dominate "posterization". It is tempting to use this to conclude that cameras tend to have a sufficient number of bits.

Note that the ADC may very well generate noise internally before it comes around to actually deciding on a discrete code. And the distinction between "sensel", "analog amplification" and "ADC" may be blurry.

-h

In short it's impossible to prove due to a lack of distinction.
Title: Re: Dynamic Range vs bit depth
Post by: IliasG on February 15, 2013, 01:15:45 pm
Is bit depth, by definition, a ceiling on the dynamic range an image can contain?

For example, a 14 bit raw image cannot contain more than 14 stops of DR.

An 8 bit jpeg, no more than 8 stops?

Thx, John

Hi John

By definition DR is Max_signal/noise_floor. Well ... this is the so-called "engineering DR"; DxO uses a slightly different definition: Max_signal/(level where SNR = 1).

Keep in mind that noise is a stochastic value, so it can be a fraction of the unit used. If noise could be zero, DR could be infinite.... In our case (digital photo) there is always some "read noise", but even if in some magic way a manufacturer could eliminate it to zero, in an n-bit encoded raw there would still exist the quantization noise, which is the standard deviation of a uniform distribution (http://en.wikipedia.org/wiki/Quantization_error) and equals 0.29 LSB. So the max (engineering) DR that an n-bit file can hold is n + 1.8.

http://forum.dxomark.com/index.php/topic,198.0.html
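The arithmetic above can be sketched in a few lines of Python; the 0.29 figure is the standard deviation of a uniform distribution one LSB wide, 1/sqrt(12). This is just an illustration of the formula, not anyone's measurement:

```python
import math

# Rounding error of an ideal quantizer is uniform over [-0.5, +0.5] LSB;
# its standard deviation is 1/sqrt(12) ~= 0.29 LSB.
quant_noise = 1 / math.sqrt(12)

def max_engineering_dr(bits):
    # Engineering DR in stops when quantization is the only noise source:
    # log2(max_signal / noise_floor) = bits + log2(sqrt(12)) ~= bits + 1.8
    return math.log2((2 ** bits - 1) / quant_noise)

print(round(quant_noise, 2))             # 0.29
print(round(max_engineering_dr(14), 1))  # 15.8
```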
Title: Re: Dynamic Range vs bit depth
Post by: hjulenissen on February 15, 2013, 01:25:14 pm
Hi John

By definition DR is Max_signal/noise_floor. Well ... this is the so-called "engineering DR"; DxO uses a slightly different definition: Max_signal/(level where SNR = 1).

Keep in mind that noise is a stochastic value, so it can be a fraction of the unit used. If noise could be zero, DR could be infinite.... In our case (digital photo) there is always some "read noise", but even if in some magic way a manufacturer could eliminate it to zero, in an n-bit encoded raw there would still exist the quantization noise, which is the standard deviation of a uniform distribution (http://en.wikipedia.org/wiki/Quantization_error) and equals 0.29 LSB. So the max (engineering) DR that an n-bit file can hold is n + 1.8.

http://forum.dxomark.com/index.php/topic,198.0.html
The quantization error is signal-dependent; the uniform distribution is only an engineering simplification, used when the number of bits is high enough that signal and noise can be considered uncorrelated. It is trivial to construct a signal that can be quantized with exactly zero error (e.g. a square waveform).

In a perfect photon-counting camera, it would be sufficient for the ADC to have codes corresponding to 1 photon, 2 photons, ... up until the sensor saturation point. Then you would only have photon noise. At least with my classical physics understanding.
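The zero-error case is easy to show in a toy Python sketch (the values are chosen, purely for illustration, to fall exactly on quantizer codes):

```python
# A signal whose samples already sit exactly on quantizer codes is
# quantized with zero error -- e.g. a square wave between two 8-bit codes.
square = [0.0, 0.0, 200.0, 200.0] * 4
quantized = [round(s) for s in square]   # rounding changes nothing here
max_error = max(abs(q - s) for q, s in zip(quantized, square))
print(max_error)  # 0.0
```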

-h
Title: Re: Dynamic Range vs bit depth
Post by: IliasG on February 15, 2013, 01:53:57 pm
The quantization error is signal-dependent; the uniform distribution is only an engineering simplification, used when the number of bits is high enough that signal and noise can be considered uncorrelated. It is trivial to construct a signal that can be quantized with exactly zero error (e.g. a square waveform).

In a perfect photon-counting camera, it would be sufficient for the ADC to have codes corresponding to 1 photon, 2 photons, ... up until the sensor saturation point. Then you would only have photon noise. At least with my classical physics understanding.

-h

Isn't the noise floor measured in the absence of light?
Title: Re: Dynamic Range vs bit depth
Post by: jrsforums on February 15, 2013, 02:30:38 pm
Time out.....

Are we talking apples and oranges?

When I am talking about 'bits', am I not talking about digital data?

Most of the responses, except Erik, seemed to be discussing the analog data, before the conversion to digital.

Is that correct?  Or am I missing something?

John
Title: Re: Dynamic Range vs bit depth
Post by: Tim Lookingbill on February 15, 2013, 03:45:50 pm
Time out.....

Are we talking apples and oranges?

When I am talking about 'bits', am I not talking about digital data?

Most of the responses, except Erik, seemed to be discussing the analog data, before the conversion to digital.

Is that correct?  Or am I missing something?

John

Bits, in the sense you're discussing, are a concept pertaining to the precision with which the ADC separates usable data (detail) from non-usable data (noise). The source, which is the sensor, trumps in importance concerning usable data over what the ADC can cull from the sensor voltage readings using high-bit precision and pass on as 1's and 0's.

By the time you see it as an 8-bit video preview, the culling of the data has already occurred, and you pretty much can't control what it delivers unless you can come up with your own ADC routine loaded onto its chip, and that's never going to happen with consumer-grade digital cameras.

Just curious: can you show us how this information is going to help you make better photographs? I've never seen it demonstrated in the many discussions on this subject since the 12- and 14-bit concept became associated with digital cameras.
Title: Re: Dynamic Range vs bit depth
Post by: IliasG on February 15, 2013, 03:49:48 pm
Time out.....

Are we talking apples and oranges?

When I am talking about 'bits', am I not talking about digital data?

Most of the responses, except Erik, seemed to be discussing the analog data, before the conversion to digital.

Is that correct?  Or am I missing something?

John

Hi John,

we are on the same page ... don't worry. All this talk about quantization is about digital data.

BTW, what exactly do you mean by "Dynamic Range"? Can you give your definition?
Title: Re: Dynamic Range vs bit depth
Post by: Tim Lookingbill on February 15, 2013, 04:06:36 pm
Dynamic range as in more detail distinguishable from what's seen as noise, mostly in the shadows, because the highlights have the sensor sites at full saturation.

The 12-bit conversion uses a fine mesh to sift out noise from detail in the shadows, whereas 14-bit uses an even finer mesh during the analog-to-digital conversion. Think of it like sifting for fine gold flakes: 12-bit will still let detail come through but will let in larger clumps of noise (rocks), while 14-bit will be more precise, allowing only smaller noise (rocks) in with the shadow detail.

That way in post, with the data interpolated to 16-bit, the editing tools have an even more refined culling process for bringing out more definition in the shadows that can be seen over the noise.


That's extended dynamic range in relation to bit depth. It still requires our eyes to see if there really is more usable detail in the shadows that we as humans would consider as more DR.
Title: Re: Dynamic Range vs bit depth
Post by: jrsforums on February 15, 2013, 04:57:54 pm

BTW what exactly do you mean by "Dynamic Range" ??. Can you give your definition ?.

Thanks, Ilias...

I am sure that I cannot give a definition in any proper scientific way.

If you guys can permit me, I am trying to peel back this "onion", in as simple terms possible.  Kind of like a child's Big Animal book primer....i.e. horsey, horsey, duckie, duckie....

As Erik said, 8 bit coding only holds 256 values per channel.  I look at that as 8 stops of tonal value...8 stops of dynamic range (in my layman's terms)

Remember, I talked about "ceiling", that is, the container (8 bit coding) can (I'm asking) contain maximum 8 stops.

How am I doing so far?

If I'm OK, then 14-bit coding can contain up to 14 stops. If I want to convert this to 8-bit coding, I have to throw away 6 stops of data range. This may or may not be meaningful data, but it is less range. How this is significant photographically, and what can be done about it, is a different discussion.

Quantization, et al., are important and interesting to those familiar with this area. However, on a practical basis it is like Newton's Laws... very practical, even if not strictly correct. Of course, they only break down at the extremes, such as when approaching the speed of light.

Why am I thinking of a Big Animal Picture Book? I think it can eventually have some applicability in instruction at the camera club level, where I am chair of programs and instruction... trying to get the "unwashed" to understand.

John

PS....not to muddy the water....I look at DxO's rating of the D800 as having a DR of 14.33, using their definition of DR.  I look at that and say "interesting"....how does 10 lbs fit in a 5 lb bag...??  :-)
Title: Re: Dynamic Range vs bit depth
Post by: Tim Lookingbill on February 15, 2013, 05:08:20 pm
Don't know how you're going to explain this concept to the "unwashed" at your camera club without showing within image capture how it helps the photographer grab more "usable to a photographer" dynamic range.

What does 8 stops look like from 14 with regard to usable image detail?
Title: Re: Dynamic Range vs bit depth
Post by: jrsforums on February 15, 2013, 05:14:38 pm
Hey....I am admittedly fishing around.

However, I know telling them the following will not help at all  :-)

Quote
The 12-bit conversion uses a fine mesh to sift out noise from detail in the shadows, whereas 14-bit uses an even finer mesh during the analog-to-digital conversion. Think of it like sifting for fine gold flakes: 12-bit will still let detail come through but will let in larger clumps of noise (rocks), while 14-bit will be more precise, allowing only smaller noise (rocks) in with the shadow detail.

However, once you get the base of coding differences down, you can enter a discussion on how you can fit the meaningful 10 lbs into 5 lbs.
Title: Re: Dynamic Range vs bit depth
Post by: EricV on February 15, 2013, 05:18:36 pm
Dynamic range is not simply dependent on bit depth when the encoding (translation of light into bits) is not linear. 

Example of linear encoding:
   Light =    {1,2,4,8,16,32,64,128,256,512,1024}   scene with 11-bit dynamic range
   Output = {1,2,4,8,16,32,64,128,128,128,128}    8-bit camera captures 8-bit dynamic range

Example of non-linear encoding:
   Light =    {1,2,4,8,16,32,64,128,256,512,1024}   scene with 11-bit dynamic range
   Output = {1,10,20,30,40,50,60,70,80,90,100}    8-bit camera captures 11-bit dynamic range
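The two tables above can be reproduced with a short Python sketch (the ten-codes-per-stop log mapping is an arbitrary choice of mine, picked only to match the example's numbers):

```python
import math

light = [2 ** k for k in range(11)]        # 1..1024: an 11-stop scene

# Linear encoding clips once the code range runs out: the top stops are lost.
linear = [min(v, 128) for v in light]

# A log-style encoding spends ~10 codes per stop, so all 11 stops fit.
nonlinear = [max(1, 10 * int(math.log2(v))) for v in light]

print(linear)     # [1, 2, 4, 8, 16, 32, 64, 128, 128, 128, 128]
print(nonlinear)  # [1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
```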
Title: Re: Dynamic Range vs bit depth
Post by: fdisilvestro on February 15, 2013, 05:36:23 pm
Hi,

Quote
PS....not to muddy the water....I look at DXO's rting of the D800 having a DR of 14.33, using their definition of DR.  I look at that and say "interesting"....how does the 10 lbs fit in the 5 lbs bag...??  :-)

This is a very common misunderstanding. The DR of 14.33 for the Nikon D800 reported by DxO is based on their "Print" concept of resizing the image to 8"x12" at 300 dpi. If you switch to "Screen" then the value you get for the D800 is 13.23, which is less than 14.

Let's keep things simple from a theoretical point of view (The issues addressed by other posters about noise, etc. are valid, but I think you have to understand the basic theory before going to those advanced concepts)

The first thing is whether the digital representation (encoding) is linear or not. Linear means that for each doubling of the input signal (in this case light) you end up with a numerical value double the previous one. This linear encoding is typical of most digital cameras in raw format, and there is a relation between the bit depth and the maximum DR that can be contained. It is important to understand that this relation does not work both ways.

Example: DR = 14 f-stops => theoretically you need at least 14 bits. If you use 13 or fewer, you lose DR; using 15 or more does nothing. Think of it like the number of digits you use for your bank balance. If you have a 5-figure balance, you need at least 5 digits (plus 2 for decimals) to represent it. Using more digits will not increase your balance; using fewer... well, you don't want to do that.

Now, when the encoding is not linear, then the issue is different and it will depend on the mathematical formula used for encoding.
 
It can be shown that using gamma 2.2 encoding you could contain up to 16 f-stops of DR with 8 bits of data (that is doing the math for the code values alone).
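For what it's worth, the naive version of that math can be checked in a couple of lines. Treating the curve as a pure power law with the smallest nonzero 8-bit code as the floor actually gives about 17.6 stops, so the ~16-stop figure presumably rests on slightly different assumptions (e.g. a linear toe, as in sRGB):

```python
import math

# 8-bit gamma-2.2 encoding: code c in 0..255 decodes to (c/255) ** 2.2
# in linear light. The darkest nonzero code sets the floor.
darkest = (1 / 255) ** 2.2
stops = math.log2(1.0 / darkest)   # = 2.2 * log2(255)
print(round(stops, 1))             # 17.6
```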

Title: Re: Dynamic Range vs bit depth
Post by: jrsforums on February 15, 2013, 07:45:12 pm
Thanks, Francisco...

If I am correct, most RAW files (CR2, NEF) are linear. 

In camera, it is converted to gamma 2.2 8 bit jpeg.

In ACR/LR that changes at some point to gamma 2.2, but definitely when converted to a 16-bit TIFF.  If 8-bit can give me 16 stops, what can a 16-bit TIFF?  Of course, this is normally converted to 8-bit for output.

Without the user doing any tone "wrestling", I guess the difference "depends"....depends on what goes on under the covers.

Is it typical, without user involvement, that gamma encoding would contain 16 stops in 8 bits....or is that just a perfect world?

I am really not looking for examples of the perfect mathematical world, but examples of what happens in the practical world....and why we bother with RAW....vs. just taking what the camera gives us :-)

John
Title: Re: Dynamic Range vs bit depth
Post by: JohnCox123 on February 15, 2013, 08:02:44 pm
Out of curiosity what's the dynamic range of a film like Kodak Ektar 100 or Fuji Acros?
Title: Re: Dynamic Range vs bit depth
Post by: Tim Lookingbill on February 15, 2013, 08:10:07 pm
Thanks, Francisco...

If I am correct, most RAW files (CR2, NEF) are linear.  

In camera, it is converted to gamma 2.2 8 bit jpeg.

In ACR/LR that changes at some point to gamma 2.2, but definitely when converted to 16 bit Tiff.  If 8bit can give me 16 stops, what can a 16bit TIFF?  Of course, this is normally converted to 8bit for output.

Without the user doing any tone "wrestling", I guess the difference "depends"....depends on what goes on under the covers.

Is it typical, without user involvement, that gamma encoding would contain 16 stops in 8bits....or is that just a perfect world?

I am really not looking for examples of the perfect mathematical world, but examples of what happens in the practical world....and why we bother with RAW....vs. just taking what the camera gives us :-)

John

You're confusing 16-bit interpolation in ACR/LR with the 12- and 14-bit precision of the camera's internal ADC, which, as I said previously, can't be controlled.

By the time you get the Raw image into ACR/LR, the meaning of bits (16) with regard to precision is defined by the level of extreme edits the user can perform in the Raw converter without inducing posterization, not only in broad swaths of blue sky, for example, but also when bringing out definition deep down in the shadows without a lot of noise, which is what extending dynamic range encompasses.

It is counterproductive to equate bits with dynamic range until you can see it in your Raw converter, which greatly increases the capability of applying extreme edits to extend DR, far more than editing in-camera 8-bit jpegs that have had a default tone curve applied which crushes shadow detail into the noise floor.

I don't read or consult DxO or dpreview dynamic range claims because they use default settings on non-real-world targets for their findings. They don't tell me a thing about what anyone can get out of a Raw file in post, which is the only reason to shoot Raw.

All you have to do to test this is shoot a high-dynamic-range scene, expose to preserve the highlights for a jpeg and then for a Raw, and notice the difference in shadow detail you can pull out of the Raw compared to the jpeg. You can equate that to bits if you want, but there's no way of proving a correlation, so why bother.

Title: Re: Dynamic Range vs bit depth
Post by: fdisilvestro on February 15, 2013, 08:37:29 pm

Is it typical, without user involvement, that gamma encoding would contain 16 stops in 8bits....or is that just a perfect world?


This is just the ideal world; when you account for all the other issues, especially noise, you'll get less.
Title: Re: Dynamic Range vs bit depth
Post by: Tim Lookingbill on February 15, 2013, 09:46:57 pm
Below is a high-dynamic-range scene I shot Raw (53mm, 1/250s, f/8, ISO 200) with my 6-year-old Pentax K100D 6MP DSLR, which only captures at 12 bits internally. Its dynamic range capture capabilities may not be as wide as more modern cameras'. The full-frame version is the JPEG preview I extracted from the Raw PEF using "Instant JPEG From Raw" at full resolution, downsized for the web.

The second shot is a 400% zoomed-in screen capture of the shadow detail, comparing the 16-bit ProPhotoRGB preview of the Raw in ACR on the left with the 8-bit AdobeRGB JPEG on the right in Photoshop. Note the clumps of jpeg compression even at in-camera high quality. Both have a huge S-curve applied to brighten the shadows.

Both previews look fairly similar, with the Raw not having as much green spill as the jpeg, but where the real differences lie is in the behavior of the edits: tweaks to the S-curve, including the black point slider, are far smoother and easier to control in ACR than when editing the S-curve on the jpeg in gamma-encoded Photoshop.

With all the variances that influence the preview, including changing DNG profiles on the Raw, I couldn't tell 12-bit from 14-bit from 16-bit having any part in it.
Title: Re: Dynamic Range vs bit depth
Post by: Simon J.A. Simpson on February 16, 2013, 12:21:47 pm
Is bit depth, by definition, a ceiling on the dynamic range an image can contain?

For example, a 14 bit raw image cannot contain more than 14 stops of DR.

An 8 bit jpeg, no more than 8 stops?

Thx, John

The simple answer is that the dynamic range is determined by:
a)  the ability of the camera sensor
b)  the colour space into which the RAW data is converted (e.g. sRGB can accommodate about 5.3 stops of dynamic range and Adobe RGB approximately 8 stops).

But scientifically this is not quite correct, since the 'dynamic range' of colour spaces is defined in a different way than 'stops' (a blizzard of scientific correction will probably follow).  But this is not to say that these colour spaces cannot accommodate wider dynamic ranges (e.g. 14 stops) – they just have to compress the data in cunning ways.

The number of bits determines how that dynamic range is represented (i.e. the number of different levels of tone – the more bits, the more levels of tone between maximum black and maximum white).  Posterisation will most likely only become visible in a low-bit non-RAW image which has been manipulated a lot, or in a very low bit-depth image.

See the excellent 'Real World Photoshop' books for a clear and well illustrated explanation.
Title: Re: Dynamic Range vs bit depth
Post by: Guillermo Luijk on February 17, 2013, 09:25:53 am
Is bit depth, by definition, a ceiling on the dynamic range an image can contain?

For example, a 14 bit raw image cannot contain more than 14 stops of DR.

An 8 bit jpeg, no more than 8 stops?

An 8-bit JPEG can contain up to 256 stops as long as the source data was appropriately mapped onto the 8-bit file. If you map each one-stop interval of the original real-world scene to exactly one level in your JPEG file, you'll be encoding 256 stops. Each of them would be poorly represented, though: only one level per stop (no gradation at all).

Bit depth is a DR-limiting factor in the capture stage. And as long as the encoding is linear, yes, no more than N stops can be captured with an N-bit linear ADC. However, the real limiting factor is usually noise. So even the best 14-bit sensors cannot capture more than 11 stops of effective DR in photographic applications (i.e. 11 stops with a sufficiently high SNR to make textures distinguishable).
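A minimal Python sketch of the one-level-per-stop mapping described above (the encoder and its `black` reference are hypothetical, for illustration only):

```python
import math

def encode_one_level_per_stop(luminance, black=1.0):
    """Map each one-stop interval above `black` onto a single 8-bit level."""
    return min(255, max(0, int(math.log2(luminance / black))))

# Scene values 256 stops apart still land on distinct 8-bit codes:
print(encode_one_level_per_stop(1))         # 0
print(encode_one_level_per_stop(2 ** 128))  # 128
print(encode_one_level_per_stop(2 ** 255))  # 255
```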
Title: Re: Dynamic Range vs bit depth
Post by: IliasG on February 17, 2013, 10:58:00 am
The simple answer is that the dynamic range is determined by:
a)  the ability of the camera sensor
b)  the colour space into which the RAW data is converted (e.g. sRGB can accommodate about 5.3 stops of dynamic range and Adobe RGB approximately 8 stops).

But scientifically this is not quite correct, since the 'dynamic range' of colour spaces is defined in a different way than 'stops' (a blizzard of scientific correction will probably follow).  But this is not to say that these colour spaces cannot accommodate wider dynamic ranges (e.g. 14 stops) – they just have to compress the data in cunning ways.

The number of bits determines how that dynamic range is represented (i.e. the number of different levels of tone – the more bits, the more levels of tone between maximum black and maximum white).  Posterisation will most likely only become visible in a low-bit non-RAW image which has been manipulated a lot, or in a very low bit-depth image.

See the excellent 'Real World Photoshop' books for a clear and well illustrated explanation.

Where can we find the calculations that arrive at results such as 5.3 stops for sRGB and 8.0 stops for Adobe RGB?
Title: Re: Dynamic Range vs bit depth
Post by: IliasG on February 17, 2013, 11:05:01 am
An 8-bit JPEG can contain up to 256 stops as long as the source data was appropriately mapped onto the 8-bit file. If you map each one-stop interval of the original real-world scene to exactly one level in your JPEG file, you'll be encoding 256 stops. Each of them would be poorly represented, though: only one level per stop (no gradation at all).

Bit depth is a DR-limiting factor in the capture stage. And as long as the encoding is linear, yes, no more than N stops can be captured with an N-bit linear ADC. However, the real limiting factor is usually noise. So even the best 14-bit sensors cannot capture more than 11 stops of effective DR in photographic applications (i.e. 11 stops with a sufficiently high SNR to make textures distinguishable).


Guillermo,

DxO measured screen-DR higher than the bit depth for some 12-bit models, like the Sony NEX-6 (12.61) and NEX-7 (12.59) ...
Title: Re: Dynamic Range vs bit depth
Post by: hjulenissen on February 17, 2013, 11:14:06 am
If it is the DXO downscaling to 8MP that allows 12-bit cameras to have more than 12 stops of DR, does this mean that if they had been completely noiseless, the measurement would be limited to 12 stops of DR after all?

If there was no noise, there would be no value in averaging pixels, either?

-h
Title: Re: Dynamic Range vs bit depth
Post by: Tim Lookingbill on February 17, 2013, 12:18:04 pm
What does a stop look like?

What image detail is contained in a stop?

How do you define dynamic range?

Title: Re: Dynamic Range vs bit depth
Post by: IliasG on February 17, 2013, 02:04:15 pm
If it is the DXO downscaling to 8MP that allows 12-bit cameras to have more than 12 stops of DR, does this mean that if they had been completely noiseless, the measurement would be limited to 12 stops of DR after all?

If there was no noise, there would be no value in averaging pixels, either?

-h

The point is that there are DR figures at DxO greater than the bit depth even without downscaling. The NEX-7 screen-DR score (per pixel) is 12.59 stops, and after downscaling to 8Mp (print-DR), 13.39.
Although their "pixel" score comes not from a single pixel but is averaged over a patch, which can have 1000 pixels.
Title: Re: Dynamic Range vs bit depth
Post by: fdisilvestro on February 17, 2013, 04:42:26 pm
Hi,

It seems that SONY applies a non-linear encoding to the raw values, as shown here (http://forums.dpreview.com/forums/thread/3087841).

Linear encoding is perhaps the most straightforward method, and the easiest for performing calculations, but in a way it is a "brute force" approach.
Title: Re: Dynamic Range vs bit depth
Post by: Guillermo Luijk on February 17, 2013, 05:00:45 pm
Guillermo,

DxO measured screen-DR higher than the bit depth for some 12bit models like Sony NEX-6 (12.61) and NEX-7 (12.59) ...

Sure, but that is a statistical measure of no use to the photographer. DxO's SNR criterion is 0 dB, and a 0 dB image is useless in conventional photography. If you use DxO's SNR plots to recalculate the DR with a 12 dB criterion, the calculated DR will be much lower.

DxO's calculations are correct (the way they measure DR can yield higher DR values than the number of bits), but the interpretation of their figures requires some statistical knowledge that a non-technical audience lacks.
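As an illustration of how much the SNR criterion matters, here is a Python sketch with a simple photon-plus-read-noise model; the full-well and read-noise numbers are invented for the example, not any real camera's:

```python
import math

def snr_db(signal_e, read_noise_e):
    # Shot noise sqrt(S) and read noise, summed in quadrature
    noise = math.sqrt(signal_e + read_noise_e ** 2)
    return 20 * math.log10(signal_e / noise)

def dr_stops(full_well_e, read_noise_e, criterion_db):
    # Bisect (geometrically) for the signal where SNR meets the criterion,
    # then count stops from there up to full well.
    lo, hi = 1e-6, float(full_well_e)
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if snr_db(mid, read_noise_e) < criterion_db:
            lo = mid
        else:
            hi = mid
    return math.log2(full_well_e / hi)

# Hypothetical sensor: 45,000 e- full well, 3 e- read noise
print(round(dr_stops(45000, 3, 0), 1))   # 13.6 stops at the 0 dB criterion
print(round(dr_stops(45000, 3, 12), 1))  # 11.0 stops at a 12 dB criterion
```

The same sensor loses roughly two and a half stops of quoted DR simply by tightening the criterion from 0 dB to 12 dB.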
Title: Re: Dynamic Range vs bit depth
Post by: Simon J.A. Simpson on February 18, 2013, 04:20:43 am
Where can we find the calculations that arrive at results such as 5.3 stops for sRGB and 8.0 stops for Adobe RGB?

They are contained within the IEC definition of the colour spaces.

See attached documents (IEC and Adobe).
Title: Re: Dynamic Range vs bit depth
Post by: Simon J.A. Simpson on February 18, 2013, 04:27:08 am
What does a stop look like?

What image detail is contained in a stop?

How do you define dynamic range?

What does a stop look like?
It's a hole.

What image detail is contained in a stop?
All of the image detail at which the hole is pointed (within the limits of the lens/pinhole).

How do you define dynamic range?
The dynamic range of what?  Different 'whats', different definitions.  Also, different assumptions produce different definitions.  Now we're getting complicated!  See Ansel Adams' books (The Negative, The Print) for a discussion of this.

 ;D ;D ;D
Title: Re: Dynamic Range vs bit depth
Post by: Ray on February 18, 2013, 07:31:12 am
In non-technical language, as I understand it, dynamic range is expressed in terms of Exposure Values (EV).

Although the term EV is synonymous with 'stop', it has nothing to do with DoF and refers only to the amount of exposure the sensor receives, regardless of whatever combination of F/stop and shutter speed is used to achieve such exposure.

Using the term 'stop' instead of EV is also a bit sloppy because all lenses used at the same f/stop do not let through the same amount of light at the same shutter speed. There is a varying degree of transmission loss due to the opacity of the glass and the number of elements.

If Camera A is claimed to have 2EV, or 2 stops better dynamic range than Camera B, then Camera B would need to receive two more EVs, or two stops' greater exposure than Camera A in order for the noise in the deepest shadows, at the limits of the DR, to appear the same as in Camera A, all else being equal of course, including ISO sensitivity.

However, if Camera B receives 2 stops more exposure at the same base ISO, it is likely that SNR in the midtones, including lower and upper midtones, will be better in Camera B than in Camera A, especially if Camera A is a recent Nikon, and Camera B is a Canon.

For example, you'll notice on the DXOMark site that the SNR at 18% figures (SNR around the midtones) are approximately equal for the 5D3 and the D800.  If you overexpose the 5D3 shot to get the deep shadow detail as clean as in the D800 shot, then sure you'll get cleaner midtones than the D800, but you'll also get blown highlights.
Title: Re: Dynamic Range vs bit depth
Post by: sandymc on February 18, 2013, 07:32:04 am
They are contained within the IEC definition of the colour spaces.

See attached documents (IEC and Adobe).

Not so. The Adobe spec has a contrast ratio in it, but that is for the reference viewing environment, not the color space itself.

Sandy
Title: Re: Dynamic Range vs bit depth
Post by: Tim Lookingbill on February 18, 2013, 10:39:57 am
What does a stop look like?
It's a hole.

What image detail is contained in a stop?
All of the image detail at which the hole is pointed (within the limits of the lens/pinhole).

How do you define dynamic range?
The dynamic range of what ?  Different ‘whats’ different definitions.  Also different assumptions produce different definitions.  Now we’re getting complicated !  See Ansel Adams books (The Negative, The Print) for a discussion on this.

 ;D ;D ;D

You just come up with that on your own or did you need help? ;D
Title: Re: Dynamic Range vs bit depth
Post by: Simon J.A. Simpson on February 18, 2013, 02:58:11 pm
Not so. The Adobe spec has a contrast ratio in it, but that is for the reference viewing environment, not the color space itself.

Sandy

Sandy, with respect, the contrast ratio refers to the "Reference Display", not the viewing environment (see 4.1 and 4.2.3).   In a leap of faith I assumed the contrast ratio of the "Reference Display" was defined in order to encompass the 'dynamic range' of the colour space.  Perhaps I am mistaken?

Using 'stops' (or better EVs) to define dynamic range is, I know, scientifically incorrect; but for photographers like me (and perhaps others too) it is a useful way of approximating the dynamic range of one thing to another – a kind of rule of thumb if you will.  Otherwise one is forced to compare maximum densities, contrast ratios, sensitivities – all scientifically defined in entirely different ways.  I know this is heresy and I humbly apologise.
Title: Re: Dynamic Range vs bit depth
Post by: Simon J.A. Simpson on February 18, 2013, 02:59:26 pm
You just come up with that on your own or did you need help? ;D

I need help; lots of help, all the time.  And, yes, I am taking the tablets.
 ;D
Title: How many bits of information can be recorded through an n-bit linear ADC'
Post by: Jack Hogan on February 19, 2013, 09:33:39 am
Sure, but that is a statistical measure of no use to the photographer.

If you are saying that the 0dB lower signal range is of no direct use to a photographer you are probably right.  On the other hand there is some information worth recording there, as you yourself have shown in the past.

There is no way to know what the noise or the signal is if all we have is a single pixel.  Noise, signal and many other physical properties of light, are statistical in nature and therefore require a larger sample to determine them.  How large?  Every human (photographer or not) physically 'views' things by averaging light within a circle of confusion.   Photographic output for instance is typically viewed in samples of a few tens of pixels, if we take the typical definition of the CoC for APS-C or FF sized sensors - more or less what Bill Claff uses for his calculations. 

So the question 'How many bits of information can I record through an n-bit linear ADC' seems to me to depend as much on sample size as on the relative size of random noise to an ADU.  I would venture that with appropriate noise and sample sizes, one could encode a very large Dynamic Range even with n=1, let alone n=14 (witness your average newspaper image or 1-bit ADCs in audio).  So DxO's readings are very useful as they are, and spot on.  The only question is how large a sample they use to calculate their data.  Anyone?
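A toy numerical sketch of the n=1 claim (all parameters here are invented for illustration; this is not anything DxO or Bill Claff actually does): quantize a weak signal with a 1-bit ADC, with and without dither noise, and recover the level by averaging over a sample.

```python
import random

random.seed(7)

def one_bit_adc(signal, noise_sigma, samples, threshold=0.5):
    """Average `samples` 1-bit readings of `signal` (scaled 0..1)."""
    hits = sum(1 for _ in range(samples)
               if signal + random.gauss(0.0, noise_sigma) >= threshold)
    return hits / samples

# Without noise a 0.3 signal always reads 0: the level is unrecoverable.
print(one_bit_adc(0.3, noise_sigma=0.0, samples=10_000))   # 0.0

# With dither noise, the sample mean tracks the signal level, so a large
# enough sample squeezes many tonal levels through a 1-bit channel.
print(one_bit_adc(0.3, noise_sigma=0.2, samples=10_000))   # ~0.16
```

The trade-off is visible in the parameters: shrink the sample or the noise and the recoverable number of levels collapses, which is exactly the sample-size dependence asked about above.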

My Information Science is somewhat lacking.  Does anyone here know how to derive a formula that can answer the question in terms of sample size and noise present in the channel?

Jack
Title: Re: Dynamic Range vs bit depth
Post by: Jack Hogan on February 19, 2013, 09:36:14 am
Isn't the noise floor measured in the absence of light ??.
No, in imaging typically Dynamic Range is calculated as the maximum signal divided by the signal at a given SNR.  DxO uses SNR=1 (=0dB).  Bill Claff uses SNR=20 within the Circle of Confusion.
Title: Re: Dynamic Range vs bit depth
Post by: Jack Hogan on February 19, 2013, 09:55:20 am
In non-technical language, as I understand it, dynamic range is expressed in terms of Exposure Values (EV).

Dynamic Range is a unitless ratio, typically the maximum signal divided by the minimum useful signal that can be recorded/reproduced - this last bit depends on typical use in the specific discipline.  It can be expressed as the number of doublings (log2).  In photographic circles a doubling of signal is normally referred to as a 'Stop'.

For instance, the maximum signal that can be recorded by an A99 at ISO50 is about 59,000 electrons.  SNR is equal to 1 when the signal is about 6.4 electrons.  So engineering DR as used by DxO is about 59000/6.4=9200 which is equivalent to about 13.2 stops.  Not bad for a 12 bit sensor  ;)
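The arithmetic above can be checked in a couple of lines (the electron counts are the ones quoted in the post):

```python
import math

# Engineering DR = full-well signal / signal at SNR=1, in stops via log2.
full_well = 59_000     # electrons, max recordable signal (A99 @ ISO50, per the post)
snr1_signal = 6.4      # electrons, signal at which SNR reaches 1

dr_ratio = full_well / snr1_signal
dr_stops = math.log2(dr_ratio)

print(round(dr_ratio))       # 9219, i.e. roughly 9200:1
print(round(dr_stops, 1))    # 13.2 stops
```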
Title: Re: Dynamic Range vs bit depth
Post by: Bart_van_der_Wolf on February 19, 2013, 10:05:14 am
No, in imaging typically Dynamic Range is calculated as the maximum signal divided by the signal at a given SNR.  DxO uses SNR=1 (=0dB).  Bill Claff uses SNR=20 within the Circle of Confusion.

Hi Jack,

Well, the absence of light leaves only the read noise (+dark current, which exists above 0 Kelvin and accumulates with time, to be even more accurate), which is used for the engineering definition of DR (http://www.ccd.com/ccd111.html). Other levels of SNR >1 are arbitrary and, while they do correspond more to usable/practical shadow noise levels, AFAIK there is no universally accepted minimum SNR (sometimes we need more, sometimes less). An SNR of 20 may be way too noisy for some, while others find it still acceptable (e.g. after using a good noise reduction algorithm that spares detail).

Another issue with SNR>1 is that some cameras can use noise reduction on the image data before writing it to the Raw data file. While that may help to reduce the noise and artificially boost the DR, it no longer says anything about the sensor quality (and whether noise reduction was used, because the reference is missing).

So from a technical point of view, I think that the engineering definition gives the best impression of the quality of the electronics involved, while DR numbers based on arbitrary (low exposure level) noise limits can give an impression of how noisy the shadows in an image may look only at that particular signal level (it will be noisier still at lower levels, though, so how low do you really need to go?).

Cheers,
Bart
Title: Re: Dynamic Range vs bit depth
Post by: thierrylegros396 on February 19, 2013, 11:54:06 am
Hi Jack,

Another issue with SNR>1 is that some cameras can use noise reduction on the image data before writing it to the Raw data file. While that may help to reduce the noise and artificially boost the DR, it no longer says anything about the sensor quality (and whether noise reduction was used, because the reference is missing).

Cheers,
Bart

I think a lot of cameras now use that trick to artificially "improve" their sensors.

But the drawback is that the deep shadows are no longer linear.

And when you want to "push" the shadows in your Raw converter, you may have some surprises  ;) ;)

Have a Nice Day.

Thierry
Title: Re: Dynamic Range vs bit depth
Post by: Tim Lookingbill on February 19, 2013, 11:57:15 am
Thanks for that "CCD University" link, Bart. Very informative.

The Anti-blooming section explained a lot about my own camera's sensor behavior. I know I don't have an anti-blooming gate, which affects how far I can go with ETTR when shooting Raw of sunlit pastel stone texture or tree bark; blooming is a PITA to deal with in post, requiring a long session of cloning.

It'll appear on the camera's LCD histogram that I exposed just right, with no clipped-highlight flashing indicators in the in-camera preview, but zooming in to 100% in ACR shows tiny white or fully saturated yellow spots peppered all over these kinds of brightly lit textures, indicating I should've reduced exposure by maybe 1/3 of a stop.
Title: Re: Dynamic Range vs bit depth
Post by: Jack Hogan on February 19, 2013, 01:52:08 pm
Well, the absence of light leaves only the read noise (+dark current which exists above 0 Kelvin and accumulates with time, to be even more accurate), which is used for the engineering definition of DR (http://www.ccd.com/ccd111.html).

Hey Bart,

Yes, different folks choose different lower limits depending on application, that's why I said 'typically'.  Other than for circuit designers, where I can see the benefit of eDR, I think that a given total SNR is a more relevant indicator of IQ for Photographers, hence it's often referred to as such (http://www.luminous-landscape.com/forum/index.php?topic=42158.0).


So from a technical point of view, I think that the engineering definition gives the best impression of the quality of the electronics involved, while DR numbers based on arbitrary (low exposure level) noise limits can give an impression of how noisy the shadows in an image may look only at that particular signal level (it will be noisier still at lower levels, though, so how low do you really need to go?).

Indeed.  On the other hand the latter may give a better idea of the working range of one's instrument.

Jack
Title: Re: Dynamic Range vs bit depth
Post by: IliasG on February 20, 2013, 03:51:05 pm
No, in imaging typically Dynamic Range is calculated as the maximum signal divided by the signal at a given SNR.  DxO uses SNR=1 (=0dB).  Bill Claff uses SNR=20 within the Circle of Confusion.

At that stage the discussion was about engineering DR and where the noise floor is measured in the absence of signal.
Title: Re: Dynamic Range vs bit depth
Post by: IliasG on February 20, 2013, 04:04:25 pm
...

For instance, the maximum signal that can be recorded by an A99 at ISO50 is about 59,000 electrons.  SNR is equal to 1 when the signal is about 6.4 electrons.  So engineering DR as used by DxO is about 59000/6.4=9200 which is equivalent to about 13.2 stops.  Not bad for a 12 bit sensor  ;)

Inspecting the A99's raw histogram with RawDigger shows that at the dark side (512-2000 ADU) it's 13-bit data in a 14-bit container. Check "Sony ARW2 hack" under properties to see the correct raw histogram.
Title: Re: Dynamic Range vs bit depth
Post by: Jack Hogan on February 21, 2013, 06:26:39 am
Inspecting the A99's raw histogram with RawDigger shows that at the dark side (512-2000 ADU) it's 13-bit data in a 14-bit container. Check "Sony ARW2 hack" under properties to see the correct raw histogram.

I see, so the A99 is not a good example for the discussion at hand :-)

The FWC and signal at SNR=1 in the post quoted earlier were calculated extrapolating DxO's full SNR curves graphically, so bit depth or non-linear coding of the Raw files should not be a limiting factor on DR as long as the curves represent the substance of the SNR information.
Title: Re: Dynamic Range vs bit depth
Post by: Guillermo Luijk on February 21, 2013, 04:23:19 pm
Is it typical, without user involvement, that gamma encoding would contain 16 stops in 8bits....or is that just a perfect world?

The discussion 'bits vs DR' only makes sense at the capture stage, i.e. when considering RAW files, which are linear. 8-bit JPEG files and 16-bit TIFF files are processed image files, and they could hypothetically contain up to 256 and 65536 stops of DR respectively, as long as you devote a single tonal level to representing each stop.

The role of gamma is far more interesting than the DR discussion. In an integer encoding (e.g. 8-bit JPEG or 16-bit TIFF files), the gamma expansion is what allows the available levels (256 and 65536) to be redistributed so that both shadows and highlights are represented by a sufficiently high number of levels.

See what happens when we encode a real-world scene with about 16 stops of DR (made noise-free thanks to multi-exposure blending):

(http://www.guillermoluijk.com/article/superhdr/progev.jpg)


into a 16-bit TIFF file with 2.2 gamma:

(http://www.guillermoluijk.com/article/superhdr/gamma2.2.jpg)


and with 1.0 gamma (linear encoding):

(http://www.guillermoluijk.com/article/superhdr/gamma1.0.jpg)

Deep shadows get posterized.

Regards
Title: Re: Dynamic Range vs bit depth
Post by: jrsforums on February 21, 2013, 05:10:10 pm
Thanks, Guillermo...

I know that a lot of magic can be done to fit 10 lbs into a 5 lb bag.....dodging/burning, negative development, tonal compression, etc.

My term, "without user involvement", was poorly chosen.....and a lot of the responses took off in interesting, but not necessarily practical, directions...so I sort of gave up.

Let me ask it a little different.


In camera, the raw image gets converted to jpeg.  If you started with a Raw image of, say, 12 stops of DR....what would one expect the DR of the jpeg to be?

Take the same Raw image to ACR/LR....on opening, what would the DR be...approximately...not excruciatingly scientifically correct?

Again, I may not have asked this correctly, but I think you may understand where I am basically heading.

John
Title: Re: Dynamic Range vs bit depth
Post by: Guillermo Luijk on February 21, 2013, 05:50:31 pm
Let me ask it a little different.


In camera, the raw image gets converted to jpeg.  If you started with a Raw image of, say, 12 stops of DR....what would one expect the DR of the jpeg to be?

Take the same Raw image to ACR/LR....on opening, what would the DR be...approximately...not excruciatingly scientifically correct?

The DR contained in the JPEG file depends on the camera processing, not on the capabilities of the 8-bit JPEG format. A captured RAW file, properly processed into a JPEG file, will contain and display all the captured DR. But camera software usually clips some highlights that were intact in the RAW file, and clips deep shadow information to black, so the information contained and displayed in the JPEG is less than the data in the RAW file. But I insist: this is not a problem or limitation of the 8-bit JPEG format, but the result of camera processing (white balance, contrast curve, saturation,...).

Regards.
Title: Re: Dynamic Range vs bit depth
Post by: Jack Hogan on February 21, 2013, 06:49:40 pm
Deep shadows get posterized.

I am not sure I understand.  Gamma only works its magic by encoding more linear shadow bits into fewer non-linear ones - for instance when displaying 16 bit data through an 8 bit, color managed video system.  It does nothing but create rounding errors (http://www.flickr.com/groups/capturenx/discuss/72157625305829069/) when storing linear 16 bit data encoded non-linearly in the same 16 bits and then displaying them through an 8 bit, color managed video system.

It'd be interesting to see your last two images as displayed by Photoshop CS's well behaved ACE color engine to see whether they show posterization (they shouldn't, but who knows ;-).

Jack
Title: Re: Dynamic Range vs bit depth
Post by: Guillermo Luijk on February 21, 2013, 07:42:05 pm
I am not sure I understand.  Gamma only works its magic by encoding more linear shadow bits into fewer non-linear ones - for instance when displaying 16 bit data through an 8 bit, color managed video system.  It does nothing but create rounding errors (http://www.flickr.com/groups/capturenx/discuss/72157625305829069/) when storing linear 16 bit data encoded non-linearly in the same 16 bits and then displaying them through an 8 bit, color managed video system.

It'd be interesting to see your last two images as displayed by Photoshop CS's well behaved ACE color engine to see whether they show posterization (they shouldn't, but who knows ;-).

Jack

They posterize when the 16-bit TIFF is linear. Those images come from Photoshop.
These are the levels devoted to each stop using linear and 2.2 gamma:

(http://www.guillermoluijk.com/article/superhdr/tabla.gif)

If you have valid information in the very deep shadows (stops numbered above as -12, -13,...), when you lift them with a strong exposure correction up, the linear image displays posterization because of the lack of levels to encode an entire stop.
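A back-of-envelope sketch of the bookkeeping behind that kind of table (illustrative Python of my own, not Guillermo's actual tool): count how many 16-bit codes decode into each stop below clipping, for linear and gamma-2.2 encoding.

```python
MAX_CODE = 65535  # 16-bit integer white point

def levels_in_stop(stop, gamma):
    """16-bit codes whose decoded linear value lies in stop `stop` below
    white, i.e. in the interval [2^-(stop+1), 2^-stop)."""
    hi = MAX_CODE * (2.0 ** -stop) ** (1.0 / gamma)
    lo = MAX_CODE * (2.0 ** -(stop + 1)) ** (1.0 / gamma)
    return int(hi) - int(lo)

# Linear encoding spends half of all codes on the top stop and leaves the
# deep shadows only a handful; gamma 2.2 spreads codes far more evenly.
for stop in range(16):
    print(stop, levels_in_stop(stop, 1.0), levels_in_stop(stop, 2.2))
```

Around stop -14, linear has only a couple of levels left while gamma 2.2 still has a couple of hundred, which is why the lifted linear shadows posterize.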

Regards
Title: Re: Dynamic Range vs bit depth
Post by: Jack Hogan on February 22, 2013, 01:31:49 pm
They posterize when the 16-bit TIFF is linear. Those images come from Photoshop.
These are the levels devoted to each stop using linear and 2.2 gamma:

If you have valid information in the very deep shadows (stops numbered above as -12, -13,...), when you lift them with a strong exposure correction up, the linear image displays posterization because of the lack of levels to encode an entire stop.

I see.  However, I still do not understand: you may have 19 bits of information, but if that information is originally encoded linearly as 16 bit data, as long as you stay at 16 bits gamma 1 or gamma 2.2 is going to behave similarly, other than for rounding errors.  For instance, where is the input data to fill-in the levels below 424 in the gamma encoded file below going to come from?

(http://i.imgur.com/nyEtofJ.jpg)

You need linear data of more than 16 bit depth as the input file to take advantage of gamma encoding at 16 bits, which I didn't think was the case here, right?

Imho the posterization in your image is introduced somewhere in the 8-bit video display chain, probably by poorly behaved color management which is unprepared to deal with gamma 1.0

Jack
Title: Re: Dynamic Range vs bit depth
Post by: IliasG on February 22, 2013, 03:34:43 pm
I see.  However, I still do not understand: you may have 19 bits of information, but if that information is originally encoded linearly as 16 bit data, as long as you stay at 16 bits gamma 1 or gamma 2.2 is going to behave similarly, other than for rounding errors.  For instance, where is the input data to fill-in the levels below 424 in the gamma encoded file below going to come from?

(http://i.imgur.com/nyEtofJ.jpg)

You need linear data of more than 16 bit depth as the input file to take advantage of gamma encoding at 16 bits, which I didn't think was the case here, right?

Imho the posterization in your image is introduced somewhere in the 8-bit video display chain, probably by poorly behaved color management which is unprepared to deal with gamma 1.0

Jack

Hi Jack,

Guillermo's point is that with gamma encoding we need fewer bits to keep the dark end unposterized than with linear encoding. You could see the missing input data (424 down to 0) in a comparison of 19-bit linear vs 16-bit g2.2.

In the case of sRGB, whose darkest tones use a linear segment with a slope of 12.92, the data density equals that of a linear encoding with +3.69 bits of depth, so it is enough to keep 19-bit linear data unposterized.
In the case of Rec.709 the slope is 4.5, equal in data density to +2.17 bits.

It's a pity that we are stuck on only 2-3 bit depths (8 or 16 int plus 32 float) for RGB data ..
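The +3.69 and +2.17 figures above are just log2 of the toe slopes, which is easy to verify:

```python
import math

# A linear toe of slope k packs dark codes k times more densely than a
# plain linear encoding, i.e. the equivalent of log2(k) extra bits there.
srgb_slope = 12.92    # sRGB linear segment
rec709_slope = 4.5    # Rec.709 linear segment

print(round(math.log2(srgb_slope), 2))    # 3.69
print(round(math.log2(rec709_slope), 2))  # 2.17
```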
Title: Re: Dynamic Range vs bit depth
Post by: Jack Hogan on February 22, 2013, 06:05:31 pm
Hi Jack,

Guillermo's point is that with gamma encoding we need fewer bits to keep the dark end unposterized than with linear encoding. You could see the missing input data (424 down to 0) in a comparison of 19-bit linear vs 16-bit g2.2.

In the case of sRGB, whose darkest tones use a linear segment with a slope of 12.92, the data density equals that of a linear encoding with +3.69 bits of depth, so it is enough to keep 19-bit linear data unposterized.
In the case of Rec.709 the slope is 4.5, equal in data density to +2.17 bits.

It's a pity that we are stuck on only 2-3 bit depths (8 or 16 int plus 32 float) for RGB data ..

Glad to see that Mr. Luijk has lawyers ;-)  I still do not understand how 16 bit linear data encoded with gamma 1.0 results in posterization when the exact same 16 bit linear data encoded with gamma 2.2 does not.  Unless of course the fault lies with a poorly behaved color management system instead of with gamma encoding ...  :)
Title: Re: Dynamic Range vs bit depth
Post by: Guillermo Luijk on February 22, 2013, 06:55:43 pm
I still do not understand how 16 bit linear data encoded with gamma 1.0 results in posterization when the exact same 16 bit linear data encoded with gamma 2.2 does not.

I didn't say the source data was 16 bit linear. It was 64-bit floating point built from a multiexposure blend (5 shots 3 stops apart). In that situation, after proper conversion to 16-bit integer the linear gamma didn't manage to prevent posterization while 2.2 gamma did.
Title: Re: Dynamic Range vs bit depth
Post by: Jack Hogan on February 23, 2013, 04:29:13 am
I didn't say the source data was 16 bit linear. It was 64-bit floating point built from a multiexposure blend (5 shots 3 stops apart). In that situation, after proper conversion to 16-bit integer the linear gamma didn't manage to prevent posterization while 2.2 gamma did.

Ah, I see, but  OT and misleading as far as this thread is concerned.  The question was:

Is bit depth, by definition, a ceiling on the dynamic range an image can contain?
For example, a 14 bit raw image cannot contain more than 14 stops of DR.
An 8 bit jpeg, no more than 8 stops?

The answer is pretty straightforward: no, bit depth is not a ceiling on the amount of information that an image can contain; and yes, a 14-bit raw image can contain more than 14 stops of DR if by DR we mean one of its typical engineering definitions - with linear encoding, without having to resort to non-linear gamma.

And for anyone starting from a typical raw image today rendered through a modern raw converter, it makes virtually no difference as far as visible posterization is concerned whether the final 16-bit TIFF contains linear or gamma 2.2 encoded data ;-)

Cheers,
Jack
Title: Re: Dynamic Range vs bit depth
Post by: Guillermo Luijk on February 23, 2013, 08:06:26 am
The answer is pretty straightforward: no, bit depth is not a ceiling on the amount of information that an image can contain; and yes, a 14-bit raw image can contain more than 14 stops of DR if by DR we mean one of its typical engineering definitions - with linear encoding, without having to resort to non-linear gamma.

I disagree with that. It doesn't matter if a statistical (or curve-extrapolated) definition of DR yields DR figures greater than the number of ADC bits. A practical user (photographer) will not be able to allocate and properly render the information of a real-world scene of N stops of DR in a RAW file produced by a linear ADC with fewer than N bits.

Unless you use, let's say, a 24Mpx sensor to build small 50x50px icons, where resizing will improve SNR and the number of tonal values enough to reach acceptable DR beyond the number of original bits - yes, bit depth is a limiting factor for DR in real-world photographic applications.
Title: Re: Dynamic Range vs bit depth
Post by: sandymc on February 23, 2013, 09:29:20 am
I think that perhaps there is some confusion about two different situations here:

1. The number of bits in an image representation (representation being NEF or CR2 or TIFF or whatever) does not limit DR, because you can use whatever encoding you want to achieve an arbitrary level of DR.

2. The number of bits in the ADC of a camera does (assuming a linear ADC and linear sensor) fundamentally limit DR.

Point being, the number of bits in the camera's ADC, and the number of bits in a representation of an image are not the same thing.

Sandy
Title: Re: Dynamic Range vs bit depth
Post by: Jack Hogan on February 23, 2013, 09:34:11 am
The answer is pretty straightforward: no, bit depth is not a ceiling on the amount of information that an image can contain; and yes, a 14-bit raw image can contain more than 14 stops of DR if by DR we mean one of its typical engineering definitions - with linear encoding, without having to resort to non-linear gamma.
I disagree with that. It doesn't matter if a statistical (or curve-extrapolated) definition of DR yields DR figures greater than the number of ADC bits. A practical user (photographer) will not be able to allocate and properly render the information of a real-world scene of N stops of DR in a RAW file produced by a linear ADC with fewer than N bits.

I am glad we are finally getting to the heart of the matter, after a little cajoling on my part ;-)  So, if I understand correctly, you agree with me in theory.  But in practice.....

In the real world an arbitrary number of stops of DR can be stored in or produced from a linear raw file with a bit-depth of one.  A practical example is a B&W image from your typical run of the mill newspaper viewed at arm's length: 1=ink, 0=no ink.  The natural scene was a foggy day in Berlin with a DR of 5 stops.  It was captured and converted for newspaper use to a file with a bit-depth of 1 bit. The viewed image has a DR of 6 stops, determined by the physical properties of ink and paper independently of the bit depth of the file from which it was produced.  Despite its 1-bit depth the viewed image also shows several stops of (smoothish) tonal gradations in between, as determined by the physical characteristics of the human visual system.

5 to 1 to 6.  So it appears that file bit depth, Dynamic Range and Tonal Range are not really directly related - with appropriate noise level and viewing distance we can record many more than N stops of DR in an N-bit linear file.  That's the deterministic answer.  But I know that in fact they are tied together in the statistical dimension, so who's up on Information Science and can tie these quantities together?  We need to hear words like quanta, standard deviation, sample size etc. :)

Jack
Title: Re: Dynamic Range vs bit depth
Post by: jrsforums on February 23, 2013, 10:00:51 am
Guys......

1-bit depth...interesting. 64-bit floating point...also interesting.  To some.

Guillermo said what I, the OP, was looking for....."A practical user (photographer)..."  What can they expect....straight out of the camera?
Title: Re: Dynamic Range vs bit depth
Post by: Jack Hogan on February 23, 2013, 10:14:23 am
"A practical user (photographer)..."  What can they expect....straight out of the camera?

If you are talking about DR, as mentioned it depends on the physical characteristics of your eyes, your output device and your viewing setup.  Most output devices (high-end photo paper, monitors) struggle to produce 9 bits of DR (sometimes expressed in the form of a linear contrast ratio, e.g. 500:1), so that's the most you would typically get by looking at the image OOC - often less.

What do you need the additional range that your camera is able to capture for, then?  Well, perhaps you made a mistake in choosing exposure or perhaps there are some deep shadows that you'd like to bring up in PP to squeeze into the visible DR.   Modern cameras do this for you automatically nowadays if you turn on the relevant in-camera feature (Nikon calls it ADL).

Jack
Title: Re: Dynamic Range vs bit depth
Post by: jrsforums on February 23, 2013, 10:47:06 am
If you are talking about DR, as mentioned it depends on the physical characteristics of your eyes, your output device and your viewing setup.  Most output devices (high-end photo paper, monitors) struggle to produce 9 bits of DR (sometimes expressed in the form of a linear contrast ratio, e.g. 500:1), so that's the most you would typically get by looking at the image OOC - often less.

What do you need the additional range that your camera is able to capture for, then?  Well, perhaps you made a mistake in choosing exposure or perhaps there are some deep shadows that you'd like to bring up in PP to squeeze into the visible DR.   Modern cameras do this for you automatically nowadays if you turn on the relevant in-camera feature (Nikon calls it ADL).

Jack

I understand.  Which is why I said before any user manipulation.

When you open an in-camera-produced jpeg or a raw image in a postprocessing program, say ACR/LR, each of these images has a general range of DR....whether the monitor can show it or not.  Each has a range of information that the user can manipulate to produce the output they want.

What is the difference between these two sources?
Title: Re: Dynamic Range vs bit depth
Post by: Jack Hogan on February 23, 2013, 12:14:09 pm
When you open an in-camera-produced jpeg or a raw image in a postprocessing program, say ACR/LR, each of these images has a general range of DR....whether the monitor can show it or not.  Each has a range of information that the user can manipulate to produce the output they want.

The easy qualitative answer to your question is that there is a lot more information in the Raw file than in the OOC Jpeg - a lot  ;)  For instance, if you underexposed and needed to increase brightness in PP, the 8-bit Jpeg would start showing visible posterization and other artifacts very quickly, while the Raw file would probably continue to look pleasing while you increased brightness a few more stops. That's the easy answer that depends on qualitative words like 'visible' and 'pleasing'.

But there is no easy quantitative answer, that's why I asked it myself in a different form a couple of pages back, and again a couple of posts up - it's so hard that nobody here seems to be up to the task  :(  It depends on the nature of the information and the observer, on noise wrt the size of an ADU, sample size etc. There are too many variables involved and it needs to be addressed by someone who is better versed in Information Science than I am.   Even if someone were able to calculate the capacity of these two specific channels, we probably would not know what to do with that bit of information in practice  :)

I can answer only a portion of your question, to help you understand why there is no easy answer:

When you open an in-camera-produced jpeg or a raw image in a postprocessing program, say ACR/LR, each of these images has a general range of DR....

The fact is, there is no range of DR inherent in a file of a specific bit-depth, whether the data is encoded linearly or not.  With a large enough sample and appropriately sized noise (not too big, not too small, just the size of Montreal) the sky is the limit - remember the 1-bit newspaper image?

Jack
Title: Re: Dynamic Range vs bit depth
Post by: thierrylegros396 on February 23, 2013, 02:38:16 pm
Think also of HDR imaging software's use of "Local contrast enhancement" to obtain fake high DR on paper or screen.

Yes, DR is not linearly related to bit depth ;)

Thierry
Title: Re: Dynamic Range vs bit depth
Post by: bjanes on February 23, 2013, 03:14:55 pm
I disagree with that. It doesn't matter if a statistical (or curve-extrapolated) definition of DR yields DR figures greater than the number of ADC bits. A practical user (photographer) will not be able to allocate and properly render the information of a real-world scene of N stops of DR in a RAW file produced by a linear ADC with fewer than N bits.


I am glad we are finally getting to the heart of the matter, after a little cajoling on my part ;-)  So, if I understand correctly, you agree with me in theory.  But in practice.....

In the real world an arbitrary number of stops of DR can be stored in or produced from a linear raw file with a bit-depth of one.  A practical example is a B&W image from your typical run of the mill newspaper viewed at arm's length: 1=ink, 0=no ink.  The natural scene was a foggy day in Berlin with a DR of 5 stops.  It was captured and converted for newspaper use to a file with a bit-depth of 1 bit. The viewed image has a DR of 6 stops, determined by the physical properties of ink and paper independently of the bit depth of the file from which it was produced.  Despite its 1-bit depth the viewed image also shows several stops of (smoothish) tonal gradations in between, as determined by the physical characteristics of the human visual system.

5 to 1 to 6.  So it appears that file bit depth, Dynamic Range and Tonal Range are not really directly related - with appropriate noise level and viewing distance we can record many more than N stops of DR in an N-bit linear file.  That's the deterministic answer.  But I know that in fact they are tied together in the statistical dimension, so who's up on Information Science and can tie these quantities together?  We need to hear words like quanta, standard deviation, sample size etc. :)

Jack

Jack,

I'm not up to the task you suggested, but I post these comments for discussion:


Emil Martinec states that raw data are never posterized (presumably because the noise is greater than the quantization step). Posterization occurs during processing when too few bits are used for the file after adjustments are applied. Greg Ward (http://www.anyhere.com/gward/hdrenc/hdr_encodings.html) has an interesting post on HDR formats. He uses 1% as the maximal difference in levels, consistent with the Weber-Fechner law, but more stringent than Norman's criterion.

The table below is from his post and describes the DR of various formats in orders of magnitude (log base 10). To convert to f/stops divide by log10(2) ≈ 0.301, i.e. multiply by log2(10) ≈ 3.32. The math is done for you in the table below. scRGB comes in two versions. One uses 12 bits per channel (36 bits total) and a gamma curve with a linear segment for low levels, and the other uses a linear ramp with 16 bits per channel (48 bits total). Raw files are linear, so the 48-bit scRGB version would apply to 16-bit raw files.
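The unit conversion is worth making explicit; it is nothing more than the change-of-base identity:

```python
import math

# Ward's table states DR in orders of magnitude (log10); photographers
# think in stops (log2). One order of magnitude is log2(10) ~ 3.32 stops.
def orders_to_stops(orders):
    return orders * math.log2(10)   # same as orders / math.log10(2)

print(round(orders_to_stops(1.0), 2))  # 3.32
print(round(orders_to_stops(3.0), 1))  # 10.0 -> a 1000:1 ratio is ~10 stops
```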

Bill
Title: Re: Dynamic Range vs bit depth
Post by: hjulenissen on February 24, 2013, 03:58:06 am
If you have a hypothetical 1-bit camera with sufficiently dense sensels so as to capture practically all of the information presented by the lens.... What is the dynamic range of your raw file? 6dB? Bearing in mind that this camera would capture more information about the scene than even the highest-tech current Sony 14-bit sensors.

I am not so sure that it makes much sense to distinguish so abruptly between spatial precision and level precision.

-h
Title: Re: Dynamic Range vs bit depth
Post by: Jack Hogan on February 24, 2013, 06:19:10 am
Greg Ward (http://www.anyhere.com/gward/hdrenc/hdr_encodings.html) has an interesting post on HDR formats. He uses 1% as the maximal difference in levels, consistent with the Weber-Fechner law, but more stringent than Norman's criterion.

The table below is from his post and describes the DR of various formats in orders of magnitude (log base 10). To convert to f/stops, divide by log10(2) ≈ 0.301, i.e. multiply by about 3.32. The math is done for you in the table below. scRGB comes in two versions. One uses 12 bits per channel (36 bits total) and a gamma curve with a linear segment for low levels, and the other uses a linear ramp with 16 bits per channel (48 bits total). Raw files are linear, so the 48-bit scRGB would apply to 16-bit raw files.

Bill,

Thank you very much for the Greg Ward link and your table, most interesting.  I get his definition of Scene Referred, Human Observed encoding and LogLuv32, although I wonder what negative luminance is  ;)  If I understand correctly, with this efficient encoding we could store in a TIFF with 32 bits per pixel (or the equivalent of about 11 bits/channel in RGB) whatever nature could throw at us (126 stops of luminance DR!) so that it would appear to us virtually indistinguishable from the original.  That's a very cool estimate of the information capacity of the human visual system.

Although perhaps that's a bit far from our target, which is limited by the linear RGB world of our Raw files.  How many Scene Referred, Human Observed bits of information can be stored in there?  Are you suggesting that scRGB16 is a good proxy for it, and therefore we could say 11.6?  Seems a bit low at first look.
Jack
Title: Re: Dynamic Range vs bit depth
Post by: Jack Hogan on February 24, 2013, 06:20:15 am
I am not so sure that it makes much sense to distinguish so abruptly between spatial precision and level precision.

Yes! Perhaps we could attempt to tie the two together?
Title: Re: Dynamic Range vs bit depth
Post by: hjulenissen on February 24, 2013, 06:54:21 am

  • The perceptual system differentiates between relative levels, not absolute levels; per the Weber-Fechner law, the just-noticeable perceptual difference is about 1% (see Norman Koren (http://www.normankoren.com/digital_tonality.html)).
This seems to be in agreement with the good old gamma FAQ of Charles Poynton:
http://www.poynton.com/PDFs/GammaFAQ.pdf
Quote
Through an amazing coincidence, vision’s response to intensity is effectively the inverse of a CRT’s nonlinearity.
...
Projected cinema film, or a photographic reflection print, has a contrast ratio of about 80:1. Television assumes a contrast ratio, in your living room, of about 30:1. Typical office viewing conditions restrict the contrast ratio of a CRT display to about 5:1.
...
At a particular level of adaptation, human vision responds to about a hundred-to-one contrast ratio of intensity from white to black. Call these intensities 100 and 1. Within this range, vision can detect that two intensities are different if the ratio between them exceeds about 1.01, corresponding to a contrast sensitivity of one percent.
To shade smoothly over this range, so as to produce no perceptible steps, at the black end of the scale it is necessary to have coding that represents different intensity levels 1.00, 1.01, 1.02, and so on. If linear light coding is used, the “delta” of 0.01 must be maintained all the way up the scale to white. This requires about 9,900 codes, or about fourteen bits per component.
If you use nonlinear coding, then the 1.01 “delta” required at the black end of the scale applies as a ratio, not an absolute increment, and progresses like compound interest up to white. This results in about 460 codes, or about nine bits per component. Eight bits, nonlinearly coded according to Rec. 709, is sufficient for broadcast-quality digital television at a contrast ratio of about 50:1.
If poor viewing conditions or poor display quality restrict the contrast ratio of the display, then fewer bits can be employed.
If a linear light system is quantized to a small number of bits, with black at code zero, then the ability of human vision to discern a 1.01 ratio between adjacent intensity levels takes effect below code 100. If a linear light system has only eight bits, then the top end of the scale is only 255, and contouring in dark areas will be perceptible even in very poor viewing conditions.
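Poynton's code-count arithmetic can be reproduced in a few lines of Python (a sketch; the 100:1 contrast ratio and 1% threshold are his figures):

```python
import math

contrast = 100.0   # white:black intensity ratio at one level of adaptation
jnd = 0.01         # just-noticeable relative difference: 1%

# Linear coding: the 0.01 step needed at black must be kept all the way to white.
linear_codes = (contrast - 1.0) / jnd
linear_bits = math.ceil(math.log2(linear_codes))

# Nonlinear coding: each code is 1% above the previous, compounding up to white.
nonlinear_codes = math.log(contrast) / math.log(1.0 + jnd)
nonlinear_bits = math.ceil(math.log2(nonlinear_codes))

print(round(linear_codes), linear_bits)        # -> 9900 14
print(round(nonlinear_codes), nonlinear_bits)  # -> 463 9
```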
Title: Re: Dynamic Range vs bit depth
Post by: Jack Hogan on February 24, 2013, 11:29:10 am
Greg Ward (http://www.anyhere.com/gward/hdrenc/hdr_encodings.html) has an interesting post on HDR formats.

Interesting read.  For those still in doubt, he says

"A 48-bit RGB pixel using a standard 2.2 gamma as found in conventional TIFF images holds at least 5.4 orders of magnitude" of DR.

He gets that from dividing the maximum 16-bit/channel integer value (65535 DN) by the minimum value, which I assume he supposes to be at most a quantization error of 0.29 DN.   So a 16 bit/channel integer file holds at least log2(65535/0.29) = 17.8 stops of DR (5.4 orders of magnitude), independently of whether the data is gamma encoded or linear, since maxima and minima are the same.  A 14 bit/channel Raw file would by the same token hold at least log2(16383/0.29) = 15.8 stops.
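That arithmetic can be sketched in Python (the 0.29 DN noise floor is the assumed quantization-error figure from above):

```python
import math

def stops_of_dr(max_dn, min_dn=0.29):
    """DR in stops: maximum code value over the minimum detectable level,
    here assumed to be a ~0.29 DN quantization-error floor."""
    return math.log2(max_dn / min_dn)

print(round(stops_of_dr(65535), 1))  # 16-bit/channel -> 17.8 stops (~5.4 OoM)
print(round(stops_of_dr(16383), 1))  # 14-bit/channel -> 15.8 stops
```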

Why at least?  Because depending on sampling and noise dithering, the quantization error can become much smaller than that - all the way to immaterial, so that the dominant determinant of minimum detectable signal would be factors outside of quantization and data type.

Jack

Title: Re: Dynamic Range vs bit depth
Post by: bjanes on February 25, 2013, 08:57:27 am
Although perhaps a bit far from our target, limited by the linear RGB world of our Raw files.  How many Scene Referred, Human Observed bits of information can be stored in there?  Are you suggesting that scRGB16 is a good proxy for it and therefore we could say: 11.6?  Seems a bit low at first look.
Jack

Jack,

That figure is low because Greg's requirement of 1% steps in the shadows is very stringent. We can usually get by with fewer, and Norman Koren suggests 8 steps per stop, which translates to 9% steps (1.09^8 ≈ 2). The take-home point is that a bit depth of 16 with a linear ramp is not true high dynamic range, and a bit depth of 14 falls even further short of HDR.

Bill
Title: Re: Dynamic Range vs bit depth
Post by: tarlijade on April 24, 2013, 01:40:08 am
I have a question. If possible, can anyone answer this for me? I've been researching for hours and am still struggling to understand and word this question:
"How does bit depth influence image reproduction? And what implications does it have in determining exposure?"
I am a beginner and am failing to answer this.
Help would be rather appreciated, thanks x ???
Title: Re: Dynamic Range vs bit depth
Post by: ErikKaffehr on April 24, 2013, 02:21:24 am
Hi,

A bit is the smallest value a computer uses; it is either one or zero. Bit depth is how many bits are used to represent a channel (Red, Green or Blue).

Each bit of bit depth corresponds to an EV. Were it not for noise, 12 bits would cover 12 EV and 16 bits would cover 16 EV. But there is always noise, and today's best devices have about 13 EV of dynamic range (maximum signal divided by noise), so there is some merit to 14 bits but no merit in having more than 14 bits.

The number of bits would not affect exposure at all. If a sensor has a wider dynamic range you can extract more shadow detail.
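That "maximum signal divided by noise" definition can be sketched in Python; the full-well and read-noise numbers here are hypothetical illustrations, not measurements of any particular camera:

```python
import math

def engineering_dr_stops(full_well_e, read_noise_e):
    """Engineering dynamic range in EV: maximum signal over the noise floor."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical sensor: 50,000 e- full well, 6 e- read noise
print(round(engineering_dr_stops(50000, 6), 1))  # -> 13.0 EV
```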

Best regards
Erik


I have a question. If possible, can anyone answer this for me? I've been researching for hours and am still struggling to understand and word this question:
"How does bit depth influence image reproduction? And what implications does it have in determining exposure?"
I am a beginner and am failing to answer this.
Help would be rather appreciated, thanks x ???
Title: Re: Dynamic Range vs bit depth
Post by: Bart_van_der_Wolf on April 24, 2013, 02:33:56 am
"How does bit depth influence image reproduction? and What implications does it have in determining exposure?"

Hi,

When more (smaller) increments/steps in brightness can be encoded, the brightness gradients will be smoother. It also allows boosting local contrast more accurately, without creating huge jumps and the related noise amplification.

As an analogy, you could think of it as riding down a stone staircase on a bike. The smaller the step-height increments, the smoother the ride.

Also consider that the bit depth encodes the Raw data, which then has to undergo at least a gamma conversion, which amplifies the shadow differences and compresses the highlight differences.
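A small Python sketch of that gamma effect, assuming a 14-bit raw file and a simple power-law gamma of 2.2 (real raw converters use more elaborate tone curves): a 1-DN step near black spans about a full 8-bit output level, while a 1-DN step near mid-grey moves the output by only ~0.01 of a level.

```python
def gamma_encode(x, gamma=2.2):
    """Simple power-law gamma on a 0..1 normalized value."""
    return x ** (1.0 / gamma)

raw_max = 16383.0  # 14-bit raw
for dn in (1, 2, 8192, 8193):  # a 1-DN step near black, and one near mid-grey
    print(dn, round(gamma_encode(dn / raw_max) * 255, 2))
```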

Cheers,
Bart
Title: Re: Dynamic Range vs bit depth
Post by: hjulenissen on April 24, 2013, 07:52:51 am
As an analogy, you could think of it as riding down a stone staircase on a bike. The smaller the step-height increments, the smoother the ride.
Except if the staircase is covered with a layer of sand. If that is the case, you may not notice the steps.

-h
Title: Re: Dynamic Range vs bit depth
Post by: hjulenissen on April 24, 2013, 08:06:13 am
Why at least?  Because depending on sampling and noise dithering, the quantization error can become much smaller than that - all the way to immaterial, so that the dominant determinant of minimum detectable signal would be factors outside of quantization and data type.
Ignoring band-limiting, filtering and such:
A perfect square wave has only two levels. If those two levels happen to fall on quantizer levels (1 bit would do), the quantization noise would be exactly zero and the SNR infinite.

Standard engineering approaches to discrete sampling assume that the signal and the quantization error are uncorrelated and that one or both are uniformly distributed, so the quantization "noise" can be calculated independently of the signal. This leads to the SNR ≈ 6.02·(#bits) dB formula (plus a small constant that depends on the signal, e.g. +1.76 dB for a full-scale sine).
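A quick numerical check of that rule of thumb, as a sketch assuming a full-scale sine input (for which the exact formula is SNR ≈ 6.02·N + 1.76 dB):

```python
import math

def quantization_snr_db(bits, samples=100_000):
    """Quantize a full-scale sine to `bits` bits and measure signal-to-error power."""
    amp = (2 ** bits - 1) / 2.0
    sig_power = err_power = 0.0
    for n in range(samples):
        x = amp + amp * math.sin(2 * math.pi * 0.1234567 * n)
        e = x - round(x)             # quantization error, within +/-0.5 LSB
        sig_power += (x - amp) ** 2  # AC (signal) power
        err_power += e * e
    return 10 * math.log10(sig_power / err_power)

print(round(quantization_snr_db(8), 1))  # close to 6.02*8 + 1.76 = 49.9 dB
```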

Dither is the willful introduction of more noise, usually with the intent of encoding more (low-frequency) levels. Noiseshaping can move this noise into frequencies where it is less annoying.

-h
Title: Re: Dynamic Range vs bit depth
Post by: Tim Lookingbill on April 25, 2013, 01:39:47 pm
I found a demonstration online that might help you understand why the 12-bit vs 14-bit question about a camera's analog-to-digital conversion for Raw capture is pretty much meaningless, and fraught with so many other variables that its efficacy is impossible to prove.

The link below is a comparison study of dynamic range (the more DR, the more bits you need to render it) between two cameras with 14-bit A/D "PRECISION" processing, the Canon 7D and the Pentax K5. Look at the Raw shots of a typical backlit, shaded outdoor scene underexposed by -5EV (compared to a normal-"looking" exposure). Then scroll down past these dark-looking shots to the normalized, edited versions (of the -5EV shots only), download them, examine them at 100% view, and note the level of noise and the amount of detail pulled out of the darkest areas in each of the two images.

http://www.pentaxforums.com/reviews/canon-7d-vs-pentax-k-5-review/image-quality.html#drtest

Here is a link to a discussion on the importance of 14 bit that you may find of further interest...

http://www.dpreview.com/forums/post/24452969

If you are referring to bit depth in the sense of the 16-bit option in ACR/LR or Photoshop's "Mode" menu, that is a totally different concept. It has more to do with adding/interpolating bit levels that aren't in the original 14-bit Raw file, so that edits to broad gradations such as blue skies look smoother and posterization is reduced in the 8-bit video preview on the display.

16-bit in your editor has more to do with previews, whereas 12/14-bit has to do with actual data processing at the source, in the camera's electronics.
Title: Re: Dynamic Range vs bit depth
Post by: Jack Hogan on April 27, 2013, 08:36:02 am
Ignoring band-limiting, filtering and such:
A perfect square wave has only two levels. If those two levels happen to fall on quantizer levels (1 bit would do), the quantization noise would be exactly zero and the SNR infinite.

Yes, if the noise is suitably sized compared to the quantizer levels.  SNR would depend on the (shot) noise in the signal and on the (read) noise introduced by the electronics, even at 1 bit; see for instance 1-bit ADCs in audio.

Quote
Standard engineering approaches to discrete sampling assumes that signal and quantization error is uncorrelated and one or both uniformly distributed, thus the quantization "noise" can be calculated independent of signal. This leads to the SNR=6.02*#bits formula.

Even ignoring what was said above, and given a suitably sized sample and noise, I wonder whether this applies in imaging, where the signal is not a slowly changing, repeating sine wave.

Quote
Dither is the willful introduction of more noise, usually with the intent of encoding more (low-frequency) levels. Noiseshaping can move this noise into frequencies where it is less annoying.

-h

Yes, although 'proper' dithering can also be provided by the (read and shot) noise already present in the system, without having to add any.  That seems to be the case in modern DSLRs, where input-referred read noise tends to be around 1 ADU.  Witness for instance the very similar engineering DR measured at 12 and 14 bits for the same camera (http://home.comcast.net/~NikonD70/Investigations/Sensor_Characteristics.htm): 12.9 vs 13.2 stops respectively for the D7000, for example.
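A toy Python illustration of noise acting as dither (the 2.3 DN signal and the 1 DN Gaussian read noise are made-up numbers): a constant sub-LSB level that a noiseless quantizer collapses to 2 is recovered in the mean once ~1 DN of noise is present.

```python
import random
import statistics

def mean_quantized(signal_dn, noise_dn, samples=50_000, seed=7):
    """Mean of many quantized (rounded) samples of signal + Gaussian noise."""
    rng = random.Random(seed)
    return statistics.fmean(
        round(signal_dn + rng.gauss(0, noise_dn)) for _ in range(samples)
    )

print(mean_quantized(2.3, 0.0))            # -> 2.0 : fractional level is lost
print(round(mean_quantized(2.3, 1.0), 2))  # ~2.3 : dithered mean recovers it
```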

Jack