
Author Topic: 16 Bit Myth  (Read 58919 times)

theguywitha645d

Re: 16 Bit Myth
« Reply #20 on: December 29, 2011, 01:07:28 am »

Snapped, what camera are you using that produces a 16-bit file?

bjanes

Re: 16 Bit Myth
« Reply #21 on: December 29, 2011, 01:09:36 am »

A 16-bit file is infinitely more robust under heavy PS work (color grading, exposure adjustment). Many 35mm CMOS-chip cameras are only 14-bit; even when captured raw and processed to "16-bit" they are simply interpolated up to 16-bit depth from their native 14-bit capability. As we all know, image interpolation is basically crap. The difference in fidelity and integrity between 16-bit, 14-bit and 8-bit is huge; if you understand the maths of bit depth you will appreciate that the difference in descriptive capability of 16 bits of data over 14 bits is simply HUGE...

Snapped,

I fear that you have snapped from reality into fantasy, and it is you who does not understand the math of bit depth. It makes no sense to quantize the data into finer steps than the noise (see Emil Martinec). Those extra two bits serve largely to quantize noise. If that is your intent, fine. The per-pixel dynamic range of the best Phase One sensors is no better than that of the Nikon D3x. When rendering a 14-bit raw file into a 16-bit space, interpolation is not performed; the least significant bits are merely padded with zeros. Look at the tonal range of the Phase One IQ180 as measured by DxO: it is 8.52 bits (screen). That camera simply cannot make use of the full 16-bit range of the ADC.
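To make the "padding, not interpolation" point concrete, here is a minimal sketch in plain Python (the sample value is made up):

    # A 14-bit sample can only take codes 0..16383.
    raw14 = 9001                  # hypothetical 14-bit ADC code

    # Option 1: store it unchanged in a 16-bit container (the two MSBs stay zero).
    stored = raw14

    # Option 2: shift it left by two places so it spans the 16-bit range
    # (the two LSBs stay zero).
    scaled = raw14 << 2

    # Either way there are still only 2**14 = 16384 distinct levels;
    # no new tonal information has been created.
    assert stored == raw14 and scaled >> 2 == raw14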

For those disbelievers, I seriously suggest you do some homework and a simple test:

Take any raw file, process it to, say, 3 stops underexposed and with the color balance off by, say, 3,000 K, in both 8-bit and 16-bit; then correct the two files to the proper exposure and color and look at the two histograms... Well, if you still feel good giving your client the 8-bit file, I'm really happy, because ultimately it means there is one more lazy shooter out there selling weak files, which means my files will look comparatively better.

Why would anyone want to manipulate the output of a 14-bit sensor in an 8-bit space? That is total nonsense. Why don't you perform tests comparing a 14-bit D3x to one of the older 16-bit MFDBs? The D3x would likely come out better: with a severely underexposed image, the MFDB CCD would not fare well because of its high read noise. Also, the histogram is not the best way to judge image quality. If the gaps in the histogram are not perceptible in the image and the levels are dithered by noise, the image will be fine.
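For anyone who wants to run that push-and-pull test in a controlled way, here is a rough sketch (Python with numpy, using a clean synthetic ramp as a stand-in for a raw file; real files also contain noise, which dithers away much of the 8-bit penalty):

    import numpy as np

    # Smooth synthetic ramp standing in for an image, linear values 0..1.
    img = np.linspace(0.0, 1.0, 1_000_000)

    def round_trip(img, bits, stops=3):
        """Underexpose by `stops`, quantize to `bits`, then push back up."""
        levels = 2**bits - 1
        dark = img / 2**stops
        stored = np.round(dark * levels) / levels
        return np.clip(stored * 2**stops, 0.0, 1.0)

    err8 = np.abs(round_trip(img, 8) - img).max()
    err16 = np.abs(round_trip(img, 16) - img).max()
    print(f"worst-case error, 8-bit round trip:  {err8:.5f}")   # roughly 1/64 of full scale
    print(f"worst-case error, 16-bit round trip: {err16:.7f}")  # negligible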

Regards,

Bill
« Last Edit: December 29, 2011, 01:15:19 am by bjanes »

Snapped

Re: 16 Bit Myth
« Reply #22 on: December 29, 2011, 01:13:39 am »

Snapped, what camera are you using that produces a 16-bit file?

I didn't say I was using a 16-bit camera; I'm merely supporting the opinion that a 16-bit file is superior to an 8-bit file, or to a 14-bit file interpolated to 16-bit...

The discussion isn't about brands or cameras, simply file integrity and robustness...

bradleygibson

Re: 16 Bit Myth
« Reply #23 on: December 29, 2011, 01:28:49 am »

I'm not sure what "myth" the OP refers to exactly, but you have cameras which package 14, 12 or some other number of bits lower than 16 into a 16-bit data structure within the file. Using anything other than 8- or 16-bit data structures in the file complicates encoding (writing) and later decoding (reading) of the file; packing into 16 bits is not a marketing game played by manufacturers, it lowers engineering cost and improves encode/decode performance.

Some manufacturers use a limited range (0 to 2^n - 1, with n < 16) while others scale their values to the full range of a 16-bit data structure. In terms of information, both procedures are legitimate, and no data is lost in either method of storing a chunk of less-than-16-bit data in a 16-bit data structure.
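To make the two conventions concrete, a small sketch (Python/numpy, synthetic 14-bit data): in one case the top two bit positions never vary, in the other the bottom two never do, but both carry exactly 14 bits of information.

    import numpy as np

    def varying_bits(samples):
        """Mark which of the 16 bit positions ever differ across the samples."""
        s = np.asarray(samples, dtype=np.uint16)
        used = int(np.bitwise_or.reduce(s ^ s[0]))
        return format(used, '016b')

    codes14 = np.random.randint(0, 2**14, 100_000).astype(np.uint16)

    limited = codes14           # limited-range convention: codes stored as-is
    scaled = codes14 << 2       # full-range convention: codes shifted left two bits

    print(varying_bits(limited))   # 0011111111111111 -> the two MSBs are never used
    print(varying_bits(scaled))    # 1111111111111100 -> the two LSBs are never used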

But herein lies the confusion: how do you differentiate between a 16-bit data structure containing fewer than 16 bits of information from the analog-to-digital converter (ADC), and a 16-bit data structure containing a full 16 bits of data from the ADC?

I'm with Doug--a simple phrase like "true 16-bit" is a reasonable approach for getting the point across quickly. If the photographer is curious, s/he can ask for more information ("what do you mean by 'true', Doug?") and get a complete answer. I see nothing wrong or dishonest with this whatsoever. On the contrary, it informs me that the person I'm talking to just might understand how this stuff works better than the average salesperson, and that is a rare thing.

As for whether the 16-bit ADC output actually contains 16 bits of true information, that is a separate question; clearly it depends on the hardware implementation. There are many sources of noise (I won't re-open that discussion), there are many different designs for the analog stage, there are different technologies (CMOS and CCD being the primary ones), and there are many applications (some more demanding than others). But even if "true 16-bit" data contains noise which effectively lowers the fidelity to fewer than 16 bits, the same can be said of 14- or 12-bit data; in general, they will contain less than 14 or 12 bits of signal as well, for exactly the same reasons.

So for all intents and purposes, "true 16-bits" should contain more information than 14- or 12-bits, given comparable hardware implementations.  Whether these differences make any visible difference to your work will depend on your hardware and your application.

-Brad
« Last Edit: December 29, 2011, 01:37:39 am by bradleygibson »
-Brad
http://GibsonPhotographic.com

ErikKaffehr

Re: 16 Bit Myth
« Reply #24 on: December 29, 2011, 02:07:38 am »

Hi,

Sorry, this is nonsense. We are discussing 16-bit vs. 14-bit. Photoshop can handle 8-, 16- and 32-bit data; 16-bit data is actually handled internally as 15-bit.

Interpolation is not used, nor necessary. You can leave the data as it is (the MSBs are padded with zeros) or shift the data 1 or 3 bits to the left and zero-pad the LSBs; no interpolation whatsoever. "Sixteen-bit" data produced this way just contains zeros or noise in the extra low-order bits.

The DxO DR figure in screen mode is a good measure of the utilization of the signal path. Enclosed are the figures for the Pentax 645D and Hasselblad H3DII-50; both cameras have a DR around 11.3, meaning that they only utilize about 12 bits of the signal path.

It's like speedometer markings that go to 120 mph when the car only does 90 mph.

Adding the Pentax K5 to the mix, we can see that it actually uses 13.6 bits, so it is essentially utilizing its 14-bit signal path fully.

Interestingly enough, it seems that Sony's Exmor sensors may be the first to utilize more than 14 bits. A 24 MP full-frame sensor using the current Exmor design would probably have a DR of about 14.1 bits.
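To see where such "bits actually used" figures come from, here is a tiny sketch (Python, with made-up full-well and read-noise numbers, not measurements of any real camera):

    import math

    def dr_in_bits(full_well_electrons, read_noise_electrons):
        """Per-pixel engineering dynamic range expressed in bits (i.e. stops)."""
        return math.log2(full_well_electrons / read_noise_electrons)

    # Hypothetical CCD-like figures: big pixels but noisy readout.
    print(round(dr_in_bits(60000, 12.0), 1))   # ~12.3 bits -> a 16-bit ADC is mostly headroom
    # Hypothetical CMOS-like figures: smaller full well but very quiet readout.
    print(round(dr_in_bits(40000, 3.3), 1))    # ~13.6 bits -> a 14-bit path is nearly filled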

Best regards
Erik


A 16-bit file is infinitely more robust under heavy PS work (color grading, exposure adjustment). Many 35mm CMOS-chip cameras are only 14-bit; even when captured raw and processed to "16-bit" they are simply interpolated up to 16-bit depth from their native 14-bit capability. As we all know, image interpolation is basically crap. The difference in fidelity and integrity between 16-bit, 14-bit and 8-bit is huge; if you understand the maths of bit depth you will appreciate that the difference in descriptive capability of 16 bits of data over 14 bits is simply HUGE...

For those disbelievers, I seriously suggest you do some homework and a simple test:

Take any raw file, process it to, say, 3 stops underexposed and with the color balance off by, say, 3,000 K, in both 8-bit and 16-bit; then correct the two files to the proper exposure and color and look at the two histograms... Well, if you still feel good giving your client the 8-bit file, I'm really happy, because ultimately it means there is one more lazy shooter out there selling weak files, which means my files will look comparatively better.
Erik Kaffehr
 

Stefan.Steib

Re: 16 Bit Myth
« Reply #25 on: December 29, 2011, 05:53:36 am »

As Bill has already stated, the 16-bit story is just that - a myth! There is solid information available from scientific image processing, and even the input struggles to deliver that amount of data, so it makes no sense at all to think it will improve overall quality. A good explanation of this can be found here:

http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/ and the following pages. From page 3 comes the following passage - and we are not even speaking about 16 bits!

".....Curiously, most 14-bit cameras on the market (as of this writing) do not merit 14-bit recording. The noise is more than four levels in 14-bit units on the Nikon D3/D300, Canon 1D3/1Ds3 and 40D. The additional two bits are randomly fluctuating, since the levels are randomly fluctuating by +/- four levels or more. Twelve bits are perfectly adequate to record the image data without any loss of image quality, for any of these cameras (though the D3 comes quite close to warranting a 13th bit). A somewhat different technology is employed in Fuji cameras, whereby there are two sets of pixels of differing sensitivity. Each type of pixel has less than 12 bits of dynamic range, but the total range spanned from the top end of the less sensitive pixel to the bottom end of the more sensitive pixel is more than 13 stops, and so 14-bit recording is warranted.

A qualification is in order here -- the Nikon D3 and D300 are both capable of recording in both 12-bit and 14-bit modes. The method of recording 14-bit files on the D300 is substantively different from that for recording 12-bit files; in particular, the frame rate slows by a factor 3-4. Reading out the sensor more slowly allows it to be read more accurately, and so there may indeed be a perceptible improvement in D300 14-bit files over D300 12-bit files (specifically, less read noise, including pattern noise). That does not, however, mean that the data need be recorded at 14-bit tonal depth -- the improvement in image quality comes from the slower readout, and because the noise is still more than four 14-bit levels, the image could still be recorded in 12-bit tonal depth and be indistinguishable from the 14-bit data it was derived from...... "

Noise, Dynamic Range and Bit Depth in Digital SLRs

by Emil Martinec ©2008
last update: February 11, 2008
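Martinec's point is easy to verify numerically. A rough sketch (Python/numpy, with hypothetical numbers chosen in the same spirit: noise of about six levels in 14-bit units):

    import numpy as np

    rng = np.random.default_rng(0)

    true_signal = 5000.0                                            # in 14-bit DN
    read14 = np.round(true_signal + rng.normal(0, 6, 1_000_000))    # 14-bit recording

    # "12-bit recording" keeps only every fourth level of the 14-bit scale.
    read12 = np.round(read14 / 4) * 4

    print(read14.mean(), read14.std())   # ~5000, ~6.0
    print(read12.mean(), read12.std())   # ~5000, ~6.1 -> statistically indistinguishable

The coarser quantization simply disappears into the noise, which is exactly why the extra bits end up recording noise.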

This whole 16-bit story is a bit like more megahertz, more horsepower, and (excuse the analogy) more centimeters... ;-)

Greetings from Munich
Stefan

Because Photography is more than Technology and "as we have done it before".

LKaven

Re: 16 Bit Myth
« Reply #26 on: December 29, 2011, 07:48:13 am »

I didn't say I was using a 16-bit camera; I'm merely supporting the opinion that a 16-bit file is superior to an 8-bit file, or to a 14-bit file interpolated to 16-bit...

The discussion isn't about brands or cameras, simply file integrity and robustness...
You didn't fully read the beginning of the thread before you wrote in.  The discussion is about how many bits of image data are being recorded by the sensor.  MF camera manufacturers (and dealers) are advertising falsely that their cameras produce 16 bits, when by and large, they produce between 12 and 14 and absolutely no more.  It's actually the newer Sony Exmor sensors used in smaller sensor cameras that do a bit better at just a hair over 14.

theguywitha645d

Re: 16 Bit Myth
« Reply #27 on: December 29, 2011, 10:45:58 am »

I didn't say I was using a 16-bit camera; I'm merely supporting the opinion that a 16-bit file is superior to an 8-bit file, or to a 14-bit file interpolated to 16-bit...

The discussion isn't about brands or cameras, simply file integrity and robustness...

Perhaps you should go back and read the OP. This is about using the term "true 16-bit quality" in advertising when the back is not 16-bit.

Chris_Brown

Re: 16 Bit Myth
« Reply #28 on: December 29, 2011, 11:48:49 am »

It's actually the newer Sony Exmor sensors used in smaller sensor cameras that do a bit better at just a hair over 14.

There are no fractions of bits (or "hairs" of bits) in any A/D converter. An analog input signal is either truncated (rounded down) or extended (rounded up) to an integer value.

If given the choice of a sensor & A/D converter system which converts its signal into an 8-bit data set or a 14-bit data set extrapolated to 16-bits, I'll take the 14-bit system. No myth there.
~ CB

digitaldog

Re: 16 Bit Myth
« Reply #29 on: December 29, 2011, 11:52:14 am »

I am sure they are equipped with parts producing 16-bit data. So writing "true 16 bits" is no more false advertising than Epson writing "true 4800 dpi" for their scanners, or than YBA writing about the power that their high-end amps can handle.

Well, in terms of marketing hype (or flat-out lies), and using the scanner analogy, we've seen for years and years specs such as 4800x9600 and so forth. Knowledgeable people understand that one value is the optical resolution; the other, higher, better-sounding, marketing-driven value is interpolated resolution. But we still see the two values shown, which is a bit of marketing hype IMHO. Now if indeed the optical resolution isn't really 4800 ppi in this example, someone is flat-out lying!

Quote
The question is whether these 16 bits include more useful data than a 14-bit pipe. There is little evidence pointing to a yes. Am I saying that backs do not have smoother transitions? Nope. I am saying that even if they do, the true reason is not the bit depth the imaging pipe can handle, but more likely a combination of the CCD sensor and the quality of the ADC parts used.

Agreed! For me, the differentiation is between a product that can capture only 8 bits per color and one that can produce more bits (12, 14 or 16, it doesn't really matter to me). And I'd agree: suggesting that a 14-bit capture alone is going to produce superior-quality data to a 12-bit one, without looking at lots of other factors in the capture, is silly. Let's see, does anyone really think a "true" 16-bit single-capture device is going to give a "true" 12-bits-per-capture scanning back a run for its money?
http://www.digitaldog.net/
Author "Color Management for Photographers".

theguywitha645d

Re: 16 Bit Myth
« Reply #30 on: December 29, 2011, 12:18:43 pm »

The best defense against marketing is an educated consumer. Personally, I look on a company with suspicion if they are using fuzzy facts. I am less likely to buy from them--if they are willing to stretch the truth before they have my money, how are they going to act after they have it?

madmanchan

Re: 16 Bit Myth
« Reply #31 on: December 29, 2011, 12:43:28 pm »

The problem is the claim "true 16 bit quality" is very broad/vague.  What aspect(s) of the imaging pipeline does that refer to?

For example, many modern cameras' internal raw-to-JPEG conversion engines use 16-bit intermediate math.  This results in smoother gradations and less likelihood of artifacts during the rendering steps.  This is very different from, say, getting 16 bits of raw data from the sensor, or having the noise floor of the sensor be roughly 2^-16.
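A quick sketch of why intermediate precision matters even when the input has fewer bits (Python/numpy, using a strong gamma and its inverse as a stand-in for a rendering pipeline):

    import numpy as np

    x = np.linspace(0.0, 1.0, 4096)

    def gamma_round_trip(x, intermediate_bits, g=3.0):
        """Apply a strong gamma, round to the intermediate depth, then undo it."""
        levels = 2**intermediate_bits - 1
        encoded = np.round(x**g * levels) / levels
        return encoded**(1.0 / g)

    shadows = x < 0.2
    print(len(np.unique(gamma_round_trip(x, 8)[shadows])))    # a handful of levels -> banding
    print(len(np.unique(gamma_round_trip(x, 16)[shadows])))   # hundreds of levels -> smooth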

Eric Chan

LKaven

Re: 16 Bit Myth
« Reply #32 on: December 29, 2011, 12:43:52 pm »

There are no fractions of bits (or "hairs" of bits) in any A/D converter. An analog input signal is either truncated (rounded down) or extended (rounded up) to an integer value.

If given the choice of a sensor & A/D converter system which converts its signal into an 8-bit data set or a 14-bit data set extrapolated to 16-bits, I'll take the 14-bit system. No myth there.
The fractions of bits I was referring to were measurements of dynamic range, which give us the quantity of information and tell us how many physical bits will be needed to encode it. If a sensor delivers 13.7 bits of information (as dynamic range), it can be accommodated in 14 physical bits.

Extrapolating to 16 bits would be pointless, however.

Bryan Conner

Re: 16 Bit Myth
« Reply #33 on: December 29, 2011, 01:06:46 pm »

The best defense against marketing is an educated consumer. Personally, I look on a company with suspicion if they are using fuzzy facts. I am less likely to buy from them--if they are willing to stretch the truth before they have my money, how are they going to act after they have it?

I agree 100%. It is a shame that today's business world is full of people and companies that spin partial truths, or stretch the truth, all in the name of making an extra sale. They are choosing that extra sale over a happily satisfied customer. They are betting that the customer will not be educated enough to see the missing truths in their marketing speak.

Instead of relying only on the quality of their product and the testimony of existing customers, they think that they must act like a sleazy used-car salesman in order to succeed.

If the output file of the camera is not truly 16 bit, then do not call it true 16 bit...unless you fully explain what you mean by "true 16 bit".  Oh, and print your explanation in a font size that is readable...without a microscope.

I have no idea who the dealer or company the OP is referring to, so I am not directing my opinion at any particular party.

Guillermo Luijk

Re: 16 Bit Myth
« Reply #34 on: December 29, 2011, 03:11:05 pm »

The discussion isn't about brands or cameras, simple file integrity and robustness....

Robustness vs. bit depth strongly depends on the presence of noise. As Bill pointed out, if your bit depth is finer than the level of noise, you are wasting resources because you are encoding more bits than strictly necessary, making your files larger with no advantage. Of course, from a marketing point of view it can still be a good idea to fool uninformed users like you.

Left image is 8-bit, right image is 5-bit. Thanks to noise both have the same robustness against postprocessing:
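The same point in numbers - a rough sketch (Python/numpy) of a noisy ramp quantized to 8 bits and to 5 bits:

    import numpy as np

    rng = np.random.default_rng(1)
    ramp = np.linspace(0.2, 0.8, 1_000_000)
    noisy = ramp + rng.normal(0, 0.04, ramp.size)   # noise a bit larger than one 5-bit step

    def quantize(img, bits):
        levels = 2**bits - 1
        return np.round(np.clip(img, 0, 1) * levels) / levels

    for bits in (8, 5):
        q = quantize(noisy, bits)
        print(f"{bits}-bit: deviation from the clean ramp = {np.std(q - ramp):.4f}")
    # Both come out close to 0.04: once the noise exceeds a quantization step,
    # the coarser encoding adds essentially nothing.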



BTW, did you know Photoshop is a 15-bit tool? Bad news: it is. It works with half the levels a genuine 16-bit file can encode, and this is what any Photoshop histogram looks like when observed at 1:1 zoom. Is this a problem? No way; 15 bits is still more than enough to produce fantastic images.

Regards


« Last Edit: December 29, 2011, 03:18:08 pm by Guillermo Luijk »

LKaven

Re: 16 Bit Myth
« Reply #35 on: December 29, 2011, 03:14:08 pm »

I have no idea who the dealer or company the OP is referring to, so I am not directing my opinion at any particular party.
It seems to be pervasive. Even the Pentax literature touts the 645D as being 16-bit. I've been surprised at times by the number of people who are accomplished photographers and write intelligently here who also believe it. It's evidence that there is an interest in keeping people believing it.

fredjeang

Re: 16 Bit Myth
« Reply #36 on: December 29, 2011, 03:43:32 pm »

When you bring material into Nuke, it's automatically converted to 32-bit float. Like it or not, no choice.
The gamma is removed (the data is handled linearly), although of course you see it in the viewer through a LUT for proper viewing.

BJL

Re: 16 Bit Myth
« Reply #37 on: December 29, 2011, 03:54:55 pm »

When you bring material into Nuke, it's automatically converted to 32-bit float. Like it or not, no choice.
The gamma is removed (the data is handled linearly), although of course you see it in the viewer through a LUT for proper viewing.
The IEEE 754 32-bit float mantissa still gives at least 23 bits (24 significant bits with the implicit leading 1), so no need to worry about it hurting the resolution of your raw data ... even if some future camera does deliver 16 significant bits of signal information.
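A one-line check of that (Python/numpy): every 16-bit integer code survives a round trip through 32-bit float exactly.

    import numpy as np

    codes = np.arange(65536, dtype=np.uint16)
    back = codes.astype(np.float32).astype(np.uint16)
    print(np.array_equal(codes, back))   # True: float32 represents integers exactly up to 2**24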

Schewe

Re: 16 Bit Myth
« Reply #38 on: December 29, 2011, 04:06:45 pm »

BTW did you know Photoshop is a 15-bit tool? bad news, it is.

Actually, it's 15 bits plus one level (0 to 32768), done for algorithmic processing reasons. And since there really isn't a real-life source of full 16-bit images, that's all the precision Photoshop needs.
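For the curious, a tiny sketch (plain Python) of what mapping genuine 16-bit data into that 0..32768 range does to the number of distinct codes:

    # Map every possible 16-bit code into Photoshop's internal 0..32768 range.
    ps_codes = {round(v * 32768 / 65535) for v in range(65536)}
    print(len(ps_codes))    # 32769 distinct values, i.e. 2**15 + 1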

fotometria gr

Re: 16 Bit Myth
« Reply #39 on: December 29, 2011, 04:31:02 pm »

I don't know Mr Peterson but I can agree with the above.


+1 Theodoros. www.fotometria.gr