
Author Topic: NEF "lossy" compression is clever  (Read 7198 times)

ejmartin

  • Sr. Member
  • ****
  • Offline
  • Posts: 575
NEF "lossy" compression is clever
« on: April 22, 2008, 01:42:54 pm »

Noise in digital signal recording places an upper bound on how finely one may usefully digitize a noisy analog signal.  One example of this is 12-bit vs 14-bit tonal depth -- current DSLRs with 14-bit capability have noise of more than four raw levels, so the last two bits of the digital encoding are random noise; the image could have been recorded at 12-bit tonal depth without loss of image quality.
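To see roughly how little is lost by dropping those two bits, here is a quick simulation sketch (the six-level noise figure is just an assumed, representative value, not a measurement from any particular camera):

Code:
import numpy as np

rng = np.random.default_rng(0)

signal = np.linspace(1000.0, 15000.0, 1_000_000)     # smooth ramp in 14-bit units
noisy  = signal + rng.normal(0.0, 6.0, signal.size)  # ~6 levels of noise (assumed)
raw14  = np.clip(np.round(noisy), 0, 16383)          # quantize to 14 bits
raw12  = np.round(raw14 / 4) * 4                     # keep only the top 12 bits

# noise measured against the true signal, before and after dropping two bits
print(np.std(raw14 - signal))   # ~6.0 levels
print(np.std(raw12 - signal))   # ~6.1 levels -- essentially unchanged

The extra quantization error from dropping two bits is about 4/sqrt[12] ~ 1.2 levels, which adds almost nothing in quadrature to 6 levels of noise.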

The presence of noise masks tonal transitions -- one can't detect subtle tonal changes of plus or minus one raw level when the recorded signal plus noise is randomly jumping around by plus or minus four levels.  A smooth gradation can't be smoother than the random background.

NEF "lossy" compression appears to use this fact to our advantage.  A uniformly illuminated patch of sensor will have a photon count which is roughly the same for each pixel.  There are inherent fluctuations in the photon counts of the pixels, however, which are characteristically of order the square root of the number of photons.  That is, if the average photon count is 10000, there will be pixel-to-pixel fluctuations of typically sqrt[10000]=100 photons.  Suppose each increase of one raw level corresponds to counting ten more photons; then the noise for this signal is 100/10=10 raw levels, and a step of one raw level is ten times finer than the noise -- the linear encoding of the raw signal wastes most of the raw levels.

In shadows, it's a different story.  Suppose our average signal is 100 photons; then the photon fluctuations are sqrt[100]=10 photons, which translates to +/- one raw level.  At low signal level, none of the raw levels are "wasted" in digitizing the noise.  
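Both of those back-of-the-envelope numbers are easy to reproduce with a small Poisson simulation (a sketch, using the same illustrative gain of ten photons per raw level as above):

Code:
import numpy as np

rng = np.random.default_rng(0)
gain = 10   # photons per raw level (the illustrative figure used above)

for mean_photons in (10_000, 100):
    photons = rng.poisson(mean_photons, size=1_000_000)
    raw = photons / gain                  # convert photon counts to raw levels
    print(mean_photons, round(np.std(photons)), round(np.std(raw), 1))
    # 10000 photons: fluctuations of ~100 photons -> ~10 raw levels
    #   100 photons: fluctuations of ~10 photons  -> ~1 raw level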

Ideally, what one would want is an algorithm that thins the level spacing at high signal while keeping it intact at low signal, all the while keeping the level spacing below the noise level for any given signal (to avoid posterization).  NEF "lossy" compression uses a lookup table to do just that, mapping raw levels 0-4095 (for 12-bit raw) into compressed values in such a way that there is no compression in shadows but increasing thinning of levels in highlights, according to the square root relation between photon noise and signal.  Here is a plot of the lookup table values (this one has 683 compressed levels; the compression varies from camera to camera depending on the relation between raw levels and photon counts):

[plot of the lookup table values omitted]
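For concreteness, here is a rough sketch of how such a square-root table could be built.  This is not Nikon's actual table -- the knee value below is made up to reproduce the 683-level count mentioned above -- but it shows the principle: identity mapping in the shadows, progressively coarser spacing in the highlights.

Code:
import numpy as np

raw  = np.arange(4096)   # 12-bit raw levels
knee = 31                # below this, one compressed level per raw level (assumed)

# Identity below the knee; above it, the curve grows like sqrt(raw) with the
# slope matched at the knee, so the spacing of the kept levels grows the same
# way the photon noise does.
curve = np.where(raw < knee, raw, np.round(2 * np.sqrt(knee * raw) - knee))
lut   = curve.astype(int)

print(lut[:5])             # [0 1 2 3 4] -- shadows are untouched
print(lut[-1] + 1)         # 683 distinct compressed levels
print(np.sum(lut == 600))  # ~11 raw levels merge into one compressed level
                           # up in the highlights

At raw level ~3200 (roughly 32000 photons at ten photons per level) the photon noise is about 18 raw levels, so merging ~11 of them stays below the noise; decoding simply runs the table in reverse, expanding each compressed value back to a representative raw level.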
Logged
emil

papa v2.0

  • Full Member
  • ***
  • Offline
  • Posts: 206
NEF "lossy" compression is clever
« Reply #1 on: April 23, 2008, 01:58:21 pm »

Very interesting, thanks for that.
One question: what causes the 'inherent fluctuations in the photon counts of the pixels'?
Logged

ejmartin

  • Sr. Member
  • ****
  • Offline
  • Posts: 575
NEF "lossy" compression is clever
« Reply #2 on: April 23, 2008, 03:20:19 pm »

Quote
Very interesting, thanks for that.
One question: what causes the 'inherent fluctuations in the photon counts of the pixels'?

Anything that comes in a stream of discrete entities (photons, radioactive decays, raindrops, incoming calls at a call center, passing cars on the interstate, etc.) will have an average arrival rate, and there will be fluctuations about that rate as well.  For instance, if you've ever listened to a Geiger counter registering radioactivity, you might have noticed that the counts do not come in a steady stream but fluctuate up and down around some average.  The fluctuations are inherent in the statistical laws governing many such processes (Poisson statistics; see http://en.wikipedia.org/wiki/Poisson_noise if you are interested).  A main characteristic of these statistics is that the size of the fluctuations in the count goes as the square root of the average number of entities counted.  This is why I said the fluctuations were inherent in the photon counting.
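If you'd rather see the square-root law than take it on faith, a few lines of simulation will do it (a sketch; the three mean counts are arbitrary):

Code:
import numpy as np

rng = np.random.default_rng(0)

for mean in (100, 10_000, 1_000_000):
    counts = rng.poisson(mean, size=200_000)
    spread = np.std(counts)
    print(mean, round(spread, 1), f"{spread / mean:.2%}")
    # absolute fluctuations of ~10, ~100, ~1000: they grow as sqrt[mean],
    # while the relative fluctuation shrinks as 1/sqrt[mean]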
« Last Edit: April 23, 2008, 03:21:13 pm by ejmartin »
Logged
emil

Tony Beach

  • Sr. Member
  • ****
  • Offline
  • Posts: 452
    • http://imageevent.com/tonybeach/twelveimages
NEF "lossy" compression is clever
« Reply #3 on: April 23, 2008, 04:22:14 pm »

Quote
...if 14-bit lossy compressed NEF files were as small as 12-bit ones, people might catch on to the fact that 14-bit is a gimmick

Are you saying that any detected differences are self-delusion?
Logged

bernie west

  • Sr. Member
  • ****
  • Offline
  • Posts: 323
    • Wild Photo Australia
NEF "lossy" compression is clever
« Reply #4 on: April 23, 2008, 06:52:23 pm »

Quote
Very interesting, thanks for that.
One question: what causes the 'inherent fluctuations in the photon counts of the pixels'?


Quote
Anything that comes in a stream of discrete entities (photons, radioactive decays, raindrops, incoming calls at a call center, passing cars on the interstate, etc.) will have an average arrival rate, and there will be fluctuations about that rate as well.  For instance, if you've ever listened to a Geiger counter registering radioactivity, you might have noticed that the counts do not come in a steady stream but fluctuate up and down around some average.  The fluctuations are inherent in the statistical laws governing many such processes (Poisson statistics; see http://en.wikipedia.org/wiki/Poisson_noise if you are interested).  A main characteristic of these statistics is that the size of the fluctuations in the count goes as the square root of the average number of entities counted.  This is why I said the fluctuations were inherent in the photon counting.

I believe it's called 'Shot noise'.  Is that right?
Logged

ejmartin

  • Sr. Member
  • ****
  • Offline
  • Posts: 575
NEF "lossy" compression is clever
« Reply #5 on: April 23, 2008, 07:00:36 pm »

Quote
Are you saying that any detected differences are self-delusion?

No current 14-bit file need be recorded at 14-bit tonal depth, for the same reason that makes 'lossy' NEF compression work -- noise dithers tonal transitions, making too fine a level quantization wasteful and superfluous.  All current 14-bit cameras have noise levels well over four raw levels, so they could record the data in 12-bit encoding without any loss of image information.  The last two bits of all current 14-bit files are essentially random noise.

That said, there have been some reports that D300 14-bit files have less banding/pattern noise.  I wouldn't necessarily discount these reports; the D300 14-bit readout is done differently (it is 3-4 times slower, for instance), and those differences could creep into the 12 bits of image info that are above the noise level.  Those 14-bit files still have noise of over four raw levels, and could be recorded at 12-bit tonal depth without any loss of tonal information.  From a marketing standpoint, I'm not sure that's a winner, however -- a feature that reduces banding noise most people never see, but slows down the camera by a factor of 3-4?  But call it 14-bit tonality, and people are all over it.

There is also another phenomenon at work.  The newer generation of cameras is a substantial improvement over the previous generation in many respects -- better, more efficient sensors, less pattern noise, etc.  Again, most people are not technically proficient enough to understand the issues, and so latch on to marketing-speak that touts 14-bit tonal depth as the bee's knees.  The images look better for other reasons, yet the improvement gets attributed to 14-bit recording.
Logged
emil

Tony Beach

  • Sr. Member
  • ****
  • Offline
  • Posts: 452
    • http://imageevent.com/tonybeach/twelveimages
NEF "lossy" compression is clever
« Reply #6 on: April 23, 2008, 08:51:20 pm »

Okay, so what you are saying is that there may be advantages to slowing the D300 down a la the "14 bit mode", but that those advantages come from the more deliberate reading of the sensor data (which results in increased shutter lag and slower fps) and not from saving that data as 14-bit files (which you assert is a "marketing gimmick").  Correct me if I'm misinterpreting your assertion here, but you are also basically saying that lossless compression is capable of capturing all of this marginally improved data at 12-bit depth.

The implication would be that file sizes are artificially inflated to 14-bit depth because, after all, the primary competition to the D300 is the 40D, which also touts 14-bit "capability".  Nikon could be honest and simply save 12 bits and call it "High Quality Mode", but that would of course mean their faster-fps, shorter-shutter-lag mode would become "Lower Quality" even if it were called "Higher Speed Mode".

Since there are marginal advantages (very marginal), and since I personally want to squeeze all I can from the camera I have, I will continue to use 14-bit mode when I can.  On the other hand, I have not hesitated to switch to 12-bit mode when I want shutter lag faster than the blink of an eye and more fps.
Logged

ejmartin

  • Sr. Member
  • ****
  • Offline
  • Posts: 575
NEF "lossy" compression is clever
« Reply #7 on: April 24, 2008, 11:07:58 am »

Quote
I believe it's called 'Shot noise'.  Is that right?

Yes.
Logged
emil