Luminous Landscape Forum

Equipment & Techniques => Digital Cameras & Shooting Techniques => Topic started by: david distefano on January 17, 2014, 10:59:48 pm

Title: 16 bit dslr
Post by: david distefano on January 17, 2014, 10:59:48 pm
as i was reading old posts i came across a nov. 2007 post about the possibility of a 16 bit dslr. now, 7 years later, does anyone see it as a possibility on cameras in the near future, and with the technology of today would it be more advantageous vs. the increase in mp's?
Title: Re: 16 bit dslr
Post by: Telecaster on January 18, 2014, 01:33:34 am
I guess it's all down to the sensors. If one comes on the market with sufficient tonal gradation to merit 16-bit A/D conversion then we'll likely get 16-bit ADCs for it. It would make for good PR if nothing else.

-Dave-
Title: Re: 16 bit dslr
Post by: ErikKaffehr on January 18, 2014, 02:00:57 am
Hi,

I very much doubt the usefulness of 16 bits. Right now 16 bits means pretty much 13 bits of signal plus three bits of noise. Lens flare is also a factor limiting density range: there is always some amount of light bouncing around in the lens.

What I think we may see is something similar to the extended-range feature on some Fujifilm sensors, which combines high-sensitivity and low-sensitivity pixels. Perhaps this could be implemented using electronic shutters. The technology would be useful for handling specular highlights, and it could still be implemented with 14-bit technology.

One thing to consider is that increasing dynamic range essentially means we get HDR on a chip. To utilize it, there is a need for HDR tone mapping or selective processing. Ugliness is around the corner. Consider this: with linear coding each bit represents 1 EV of DR, so a high-DR capture will be able to reproduce more specular highlights and deeper shadows.

12 bits is a 1:4096 contrast range
14 bits is a 1:16384 contrast range
16 bits is a 1:65536 contrast range

A high-quality screen in a reasonably dark room probably has a contrast range of 1:400, and a good photographic print is probably 1:140 (best technology on glossy paper; matte paper? Cut that in half).
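Erik's figures are easy to verify: with linear coding each extra bit doubles the representable contrast, and taking log2 of a display or print contrast ratio shows how many stops of that range survive output. A quick sketch (the 1:400 and 1:140 figures are the estimates above):

```python
import math

def linear_contrast_range(bits: int) -> int:
    """Contrast range of linear coding: brightest code vs. the smallest step."""
    return 2 ** bits

def stops(contrast: float) -> float:
    """Contrast ratio expressed in EV (stops)."""
    return math.log2(contrast)

for bits in (12, 14, 16):
    print(f"{bits} bits -> 1:{linear_contrast_range(bits)}")

print(f"screen ~1:400 -> {stops(400):.1f} stops")
print(f"print  ~1:140 -> {stops(140):.1f} stops")
```

So a 16-bit linear capture spans roughly 16 stops, while the print shows barely more than 7: the rest has to be tone-mapped away.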

So taming a wide contrast range takes some tricks; that is the reason many hate HDR.

These two articles may offer some insight:

http://echophoto.dnsalias.net/ekr/index.php/photoarticles/63-lot-of-info-in-a-digital-image
http://echophoto.dnsalias.net/ekr/index.php/photoarticles/61-hdr-tone-mapping-on-ordinary-image

Best regards
Erik



Title: Re: 16 bit dslr
Post by: digitaldog on January 18, 2014, 11:56:09 am
It would make for good PR if nothing else.
Agreed, at least for the kinds of work done in these parts. For other kinds of capture (scientific?) it may be useful.
Title: Re: 16 bit dslr
Post by: Telecaster on January 18, 2014, 05:16:24 pm
After getting the Sony A7r and setting it up with my usual (in-camera and RAW processor) flat defaults, I was taken aback by just how flat the files were. IMO ending up with a more pleasing and less HDR-ish look means throwing away tonal info. Sometimes lots of it.

I agree with Erik's comments. I'm all for having 16 bits of genuine image data if the tech can support it...but it would mean, among other things, another level of post work to contend with and more data to ultimately discard. Hmmm...

-Dave-
Title: Re:
Post by: Torbjörn Tapani on January 18, 2014, 05:36:49 pm
Lightroom started as an HDR tone mapper. It handles 32 bit files. I think we will be all right with 16 bit RAW.
Title: Re:
Post by: LKaven on January 20, 2014, 12:30:30 pm
Lightroom started as a HDR tone mapper. It handles 32 bit files. I think we will be alright with 16 bit RAW.

The use of 32 bits is for many things outside of traditional photography.  It allows the allocation of additional bits to low level signals (when those low-level signals are captured with sufficient fidelity to merit it).  It allows decisions about lighting to be made after the fact.  However these ideas did arise in the CGI world, where some practical considerations of physics did not intrude.

The first place we actually see sensors with 16-bits or more of dynamic range is in expanded /highlight/ headroom, and not in a lowered noise-floor.  Cinematographers need to be able to shoot in very high DR conditions with a graceful shoulder in highlight transition, something that film used to provide pretty nicely.
Title: Re:
Post by: bjanes on January 20, 2014, 03:21:06 pm
The use of 32 bits is for many things outside of traditional photography.  It allows the allocation of additional bits to low level signals (when those low-level signals are captured with sufficient fidelity to merit it).  It allows decisions about lighting to be made after the fact.  However these ideas did arise in the CGI world, where some practical considerations of physics did not intrude.

The first place we actually see sensors with 16-bits or more of dynamic range is in expanded /highlight/ headroom, and not in a lowered noise-floor.  Cinematographers need to be able to shoot in very high DR conditions with a graceful shoulder in highlight transition, something that film used to provide pretty nicely.

Digital capture is linear and one exposes so that the highlights are just short of clipping. The shoulder can be created in post processing, but I don't think it makes sense to separate out the highlights as a separate entity. With linear, all tones are equal (except for signal to noise, which becomes a problem in the shadows), and highlight headroom is determined by exposure.

Bill
Title: Re:
Post by: LKaven on January 20, 2014, 04:01:26 pm
Digital capture is linear and one exposes so that the highlights are just short of clipping. The shoulder can be created in post processing, but I don't think it makes sense to separate out the highlights as a separate entity. With linear, all tones are equal (except for signal to noise, which becomes a problem in the shadows), and highlight headroom is determined by exposure.

Bill, I'm thinking of sensors that have been designed to capture exposures that exceed a certain reference level for maximum exposure. If the clipping point on an ISO 100 surface were 0 dB, the sensors in question would be able to capture +3-6 dB without a corresponding change in reference levels. In other words, dynamic range is extended only at the high end, not at the low end.
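For scale: sensor headroom quoted in dB converts to photographic stops at roughly 6.02 dB per stop, assuming the usual amplitude (20·log10) convention, so the +3-6 dB mentioned here is about half a stop to one stop. A quick sketch:

```python
import math

# Amplitude convention: dB = 20 * log10(ratio), so one stop (a 2x ratio) is ~6.02 dB.
DB_PER_STOP = 20 * math.log10(2)

def db_to_stops(db: float) -> float:
    """Convert headroom quoted in dB into photographic stops (EV)."""
    return db / DB_PER_STOP

for db in (3.0, 6.0):
    print(f"+{db} dB ~ {db_to_stops(db):.2f} stops of extra highlight headroom")
```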
Title: Re: 16 bit dslr
Post by: Fine_Art on January 20, 2014, 04:09:19 pm
The biggest limitation is screens, which are almost all 8-bit electronic devices; higher color depths are simulated by dithering. Some pro-grade screens for medical imaging or graphics professionals are 10 bit. Does anyone know of 14 bit screens that can be bought?

So for archival of historical works 16 bit would be useful; for most anything else you will never see a difference. Prints have much less DR than screens.
Title: Re:
Post by: Telecaster on January 20, 2014, 04:42:13 pm
Bill, I'm thinking of sensors that have been designed to be able to capture exposures that exceed a certain reference level for maximum exposure.  If the clipping point on an ISO 100 surface would be 0dB, the sensors in question would be able to capture +3-6dB without a corresponding change in reference levels.  In other words, dynamic range is extended only at the high end without being extended at the low end. 

People often forget that sensors themselves are not digital devices. It makes sense to me to design in a sensor-level highlight shoulder. I also like the idea of photosites that can fill to a certain level, then read out while continuing to capture (maybe with multiple readouts) during the course of a single exposure.

-Dave-
Title: Re: 16 bit dslr
Post by: Vladimirovich on January 20, 2014, 04:42:43 pm
The biggest limitation is screens which are almost all 8bit electronic devices. High colors are dithering. Some pro grade screens for medical imaging or graphic professionals are 10 bit.
high end greyscale medical displays are > 10 bits, are they not ... like 12 bits (4096 shades) ?
Title: Re: 16 bit dslr
Post by: bjanes on January 20, 2014, 07:17:12 pm
The biggest limitation is screens which are almost all 8bit electronic devices. High colors are dithering. Some pro grade screens for medical imaging or graphic professionals are 10 bit. Does anyone know of 14 bit screens that can be bought?

So for archival of historical works 16 bit would be useful, for most anything else you will never see a difference. Prints have much less DR than screens.

If 16 bit capture were available, it could be used in printing, as the art of printing is in tone mapping a higher dynamic range down to what can be printed, as Karl Lang explains here (http://wwwimages.adobe.com/www.adobe.com/products/photoshop/family/prophotographer/pdfs/pscs3_renderprint.pdf). Why else would one resort to HDR imaging in difficult scenes?

Bill
Title: Re:
Post by: allegretto on January 20, 2014, 07:37:05 pm
People often forget that sensors themselves are not digital devices...

really? Do you have some reference for this? Light is digital, a photon is or is not. Sensors are buckets that fill with light and the recruitment (or lack of recruitment) of photons determines the output at that site.  So I'm not sure why you say this

However I would like to learn something here...
Title: Re:
Post by: Telecaster on January 20, 2014, 10:28:09 pm
really? Do you have some reference for this? Light is digital, a photon is or is not. Sensors are buckets that fill with light and the recruitment (or lack of recruitment) of photons determines the output at that site.  So I'm not sure why you say this

However I would like to learn something here...

By your definition film is also digital. It too captures photons.

Photosites on a sensor capture light and then read out voltages corresponding to the amount of light captured. It's important to note that they do not read out photon counts. The voltages are then quantized from continuous values into discrete ones by an analog-to-digital converter. It's at this point that the captured data becomes digital, not before.
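That pipeline (continuous voltage in, discrete code out) can be sketched as a toy model; this is illustrative only, not any particular sensor's design:

```python
def adc(voltage: float, full_scale: float = 1.0, bits: int = 14) -> int:
    """Quantize a continuous voltage into one of 2**bits discrete codes."""
    levels = 2 ** bits
    v = min(max(voltage, 0.0), full_scale)   # clip below zero and at full scale
    return min(int(v / full_scale * levels), levels - 1)

# The voltage is continuous; only the ADC output is digital.
print(adc(0.5))        # mid-scale
print(adc(0.50001))    # a slightly different voltage can land on the same code
print(adc(2.0))        # anything over full scale clips to the top code
```

The second line is the point of the quantization argument: infinitely many distinct analog voltages collapse onto the same 14-bit code.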

Now in the end everything is digital. Quantum mechanics implies that space itself is like a grid, with an absolute minimum distance between any two points, rather than the smooth fabric of general relativity. But when it comes to "digital" cameras the digital part refers specifically to post-capture data quantization.

-Dave-
Title: Re:
Post by: jrsforums on January 21, 2014, 12:05:24 am
really? Do you have some reference for this? Light is digital, a photon is or is not. Sensors are buckets that fill with light and the recruitment (or lack of recruitment) of photons determines the output at that site.  So I'm not sure why you say this

However I would like to learn something here...

Why do you think they need ADCs... analog-to-digital converters?
Title: Re:
Post by: allegretto on January 21, 2014, 12:28:46 am
By your definition film is also digital. It too captures photons.

Photosites on a sensor capture light and then read out voltages corresponding to the amount of light captured. It's important to note they do not read out photon counts. The voltages are then quantized from continuous values into discrete ones by an analog-to-digital converter. It's at this point that the captured data becomes digital, not before.

Now in the end everything is digital. Quantum mechanics implies that space itself is like a grid, with an absolute minimum distance between any two points, rather than the smooth fabric of general relativity. But when it comes to "digital" cameras the digital part refers specifically to post-capture data quantization.

-Dave-

yes, the quantization of light is always digital, and that's my point. Even in your retina.

we may read voltages due to the way they are designed, but to me this is simply a first-pass conversion of light from its native digital form to an analog expression that requires an A>D conversion downstream. But that's simply a transformation that is transformed back. The collection of photons, which is what film or a sensor does, is quite digital. That is, the analog output is a product of how the circuit is designed, not how sensors or Nature work.

If one wishes to argue that film is more analog since it is a graded absorption in the emulsion, I guess that might fly, but the sensor has a specific digital "count" for each and every photon it collects. Voltages are simply a secondary conversion, not the process of absorption within the pixel. It's just the way engineers have (to this point) designed the process, not the actual event. It is quite possible that in the future the process will become digital all the way through and A>D converters will go the way of... emulsions.

More properly, the sensor is certainly a digital device that reads out in an analog fashion by design (so it does a D>A conversion which later needs an A>D conversion). Let's take your point: that the capture is analog until the A>D conversion. This does not follow for me, since a photon hits the sensor, a digital event if ever there was one, and the information is purely digital at that point. Not sure how to see it any other way.

BTW, quantum mechanics does "respect" the Planck length. This should not be confused with thinking that no other interval or system is at play. It's just the theory most consistent with what we think we know… for now… But just as surely as Newtonian thinking had to be "refined" by Einsteinian physics, QM may just be a rest stop too.

Yes, in the end, all information appears to be digital (who knew?). Thus it survives transformation. At least that's a current theory too. For a beautiful analysis of this you might have already read Wolfram's "A New Kind of Science". If you have not seen this amazing work, it's something well worth the time of anyone who considers themselves a scientist.
Title: Re: 16 bit dslr
Post by: hjulenissen on January 21, 2014, 05:04:23 am
The biggest limitation is screens which are almost all 8bit electronic devices. High colors are dithering. Some pro grade screens for medical imaging or graphic professionals are 10 bit. Does anyone know of 14 bit screens that can be bought?

So for archival of historical works 16 bit would be useful, for most anything else you will never see a difference. Prints have much less DR than screens.
My several-years-old Dell screen is (AFAIK) capable of reproducing 10 bits through the use of so-called "FRC". Whether this is achieved via temporal or spatial dithering is less important than whether it gives a real and relevant benefit.
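The FRC idea can be illustrated with a toy temporal dither: an 8-bit panel alternates between two adjacent codes so that the time-average approximates a 10-bit value. A sketch, not any vendor's actual algorithm:

```python
def frc_frames(value_10bit: int, n_frames: int = 4):
    """Approximate a 10-bit value on an 8-bit panel by temporal dithering."""
    base, frac = divmod(value_10bit, 4)   # 10-bit value = 8-bit code * 4 + remainder
    # Show the next-higher code on `frac` of every 4 frames, the base code otherwise.
    return [min(base + (1 if i < frac else 0), 255) for i in range(n_frames)]

frames = frc_frames(513)
print(frames)                          # [129, 128, 128, 128]
print(sum(frames) / len(frames) * 4)   # 513.0: the time-average recovers the value
```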

I think that the real limit is software, operating systems and content. Same as color-management, I guess ("8-bit sRGB works, so why fix it?").

For capture, there is no limit to what a Photoshop operator might want to do to her image file. Thus, any increase in the information about the original scene might be of relevance to some.


For 16 bits (per channel) to have any relevance to normal photography beyond marketing, there has to be a real, measurable, visible benefit over other (presumably cheaper) alternatives like 14 bits, at least in some scenarios. I don't think that still image sensors are quite there yet?

-h
Title: Re: 16 bit DSLR
Post by: 01af on January 21, 2014, 08:58:50 am
For many years, 12 bit/channel used to be the state of the art. For a few years now, sensors with 14 bit/channel have become increasingly prevalent; especially among high-end cameras they are fairly common by now. I am not aware of any current DSLR cameras with 15 or 16 bit/channel. However, technical progress is inevitable, so I guess they will appear eventually ... albeit not anytime soon.
Title: Re: 16 bit dslr
Post by: Fine_Art on January 21, 2014, 01:02:31 pm
high end greyscale medical displays are > 10 bits, are they not ... like 12 bits (4096 shades) ?

No idea, I never looked into greyscale screens. BTW I looked into sourcing screens several years ago so maybe things have improved recently.
Title: Re: 16 bit dslr
Post by: Fine_Art on January 21, 2014, 01:16:56 pm
If 16 bit capture were available, it could be used in printing as the art of printing is in tone mapping higher dynamic range to what can be printed as Karl Lang explains here (http://wwwimages.adobe.com/www.adobe.com/products/photoshop/family/prophotographer/pdfs/pscs3_renderprint.pdf). Why else would one resort to HDR imaging in difficult scenes.

Bill

Thanks for the link.

I don't follow: if you already need to tone-map 12 or 14 bit raw to fit into the ~400:1 contrast of the print, why do you need more bits? In theory, I agree more bits give you smoother data for tone mapping, but isn't the current level more than enough for prints?
 
My HDTV has a contrast of about 4000:1, with a gamut (mapped with my Spyder colorimeter) between sRGB and Adobe RGB. So vivid colors are truly vivid, as you would expect. I think the big screen for viewing images is where we need more bits. Get us to 12 or 14 bits with dithering, on 12000:1 contrast, at 8K; then let's look at frame-rate-destroying 16 bit capture.
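One way to see why capture bits can still matter despite a ~400:1 print is that tone mapping spends output codes unevenly. A toy sketch (a hypothetical gamma-style shadow lift, not the method from the Karl Lang paper linked above) counts how many distinct 8-bit output tones the deepest shadows retain after a strong lift:

```python
def shadow_codes(capture_bits: int, lift_gamma: float = 0.25,
                 out_levels: int = 256) -> int:
    """Distinct 8-bit output tones produced by the darkest 1/16 of the linear
    range after a strong gamma-style shadow lift."""
    max_code = 2 ** capture_bits
    outputs = {int((code / max_code) ** lift_gamma * (out_levels - 1))
               for code in range(max_code // 16)}
    return len(outputs)

# More capture bits -> more distinct tones survive the same shadow stretch.
print(shadow_codes(12))
print(shadow_codes(16))
```

With a gentle curve the difference is invisible; it is only under aggressive shadow lifting that the extra capture bits pay off.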
Title: Re:
Post by: Telecaster on January 21, 2014, 02:46:29 pm
If one wishes to argue that film is more analog since it is a graded absorption in the emulsion I guess that might fly, but the sensor has a specific digital "count" for each and every photon it collects. Voltages are simply a secondary conversion, not the process of absorption within the pixel. It's just the way engineers have (to this point) designed the process, not the actual event. It is quite possible that in the future the process will become digital all the way through and A>D convertors will go the way of... emulsions.

No, the freeing up of electrons (and thus the creation of voltage) is integral to how sensors work. It's not an added-on layer. Future technology may work differently, but this is the tech we've got now. To reiterate: "digital" in the context of current digital cameras refers to the quantization of voltages. The ultimate nature of reality (if there even is an ultimate nature) is way beyond this scope.   ;)

Quote
BTW, quantum mechanics does "respect" the plank length. This should not be confused with thinking that no other interval or system is at play. It's just the theory most consistent with what we think we know… for now… But just as surely as newtonian thinking had to be "refined" by Einsteinian Physics, QM may just be a rest stop too.

Yes, I suspect this is true (QM being the current frontier rather than a final one.)

-Dave-
Title: Re: 16 bit dslr
Post by: ErikKaffehr on January 21, 2014, 02:52:11 pm
Hi,

You can code a wide dynamic range in a few bits; that is what Arriflex are doing with S-log. Contrast figures given for screens and projectors are often exaggerated.

Best regards
Erik

Thanks for the link.

I don't follow, if you already need to tonemap 12 or 14 bit raw to fit into the ~400:1 contrast of the print, why do you need more bits? In theory, I agree more bits give you smoother data for tone mapping, but isn't the current level more than enough for prints?
 
My HDTV has contrast about 4000:1 with gamut (mapped in my colorspider) between SRGB and ARGB. So vivid colors are truly vivid as you would expect. I think the big screen for viewing images is where we need more bits. Get us to 12 or 14 bits with dithering on 12000:1 contrast on 8K then lets look at the framerate destroying 16 bit capture.
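Erik's S-log point in code: a log transfer curve spends codes evenly per stop instead of per linear step, so a 10-bit signal can carry something like 14 stops. The curve below is a generic illustrative log encode, not Arri's or Sony's actual coefficients:

```python
import math

STOPS = 14.0  # scene dynamic range the curve is designed to carry

def log_encode(x: float) -> float:
    """Map a linear scene value in (2**-STOPS, 1] onto [0, 1], evenly per stop."""
    x = max(x, 2.0 ** -STOPS)
    return 1.0 + math.log2(x) / STOPS

def to_code(x: float, bits: int = 10) -> int:
    """Quantize the log-encoded value to an integer code."""
    return round(log_encode(x) * (2 ** bits - 1))

# 14 stops fit into 10 bits with ~73 codes per stop, instead of spending half
# of all codes on the single brightest stop as linear coding does.
for stops_down in (0, 7, 14):
    print(f"{stops_down:2d} stops below clip -> code {to_code(2.0 ** -stops_down)}")
```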
Title: Re: 16 bit dslr
Post by: Fine_Art on January 21, 2014, 03:31:38 pm
Hi,

You can code a wide dynamic range in few bits, that is with Arriflex are doing with S-log. Contrast figures given for screens and projectors are often exaggerated.

Best regards
Erik


Projectors, definitely; there is one bulb. Mostly true for screens as well. I think the difference is on the big HDTVs, where they can use multiple light sources for the "dynamic lighting" item in the menu. You are probably still right that the figures are best-case, measured in a black test chamber, not real world. They can however get much brighter than typical computer screens.
Title: Re: 16 bit dslr
Post by: hjulenissen on January 21, 2014, 03:52:14 pm
I don't follow, if you already need to tonemap 12 or 14 bit raw to fit into the ~400:1 contrast of the print, why do you need more bits? In theory, I agree more bits give you smoother data for tone mapping, but isn't the current level more than enough for prints?
My HDTV has contrast about 4000:1 with gamut (mapped in my colorspider) between SRGB and ARGB. So vivid colors are truly vivid as you would expect. I think the big screen for viewing images is where we need more bits. Get us to 12 or 14 bits with dithering on 12000:1 contrast on 8K then lets look at the framerate destroying 16 bit capture.
If tone mapping allows one to render a large-DR scene using low-DR tech (while attempting to minimize artifacts), would not increasing the capture DR (which necessitates a higher number of bits at some point) be a worthwhile goal for some situations?

-h
Title: Re:
Post by: allegretto on January 21, 2014, 04:41:42 pm
any guy who can imagine a Fat Finger to deal with shutter slap can't be a bad guy so I'll stop the war.

voltage drops are proportional to the detection of photons, and either a photon is there or it is not. The initiation is a digital event: 0 or 1.

But i'll give you the last word, any word you want

cheers...  ;D


No, the freeing up of electrons (and thus the creation of voltage) is integral to how sensors work. It's not an added-on layer. Future technology may work differently, but this is the tech we've got now. To reiterate: "digital" in the context of current digital cameras refers to the quantization of voltages. The ultimate nature of reality (if there even is an ultimate nature) is way beyond this scope.   ;)

Yes, I suspect this is true (QM being the current frontier rather than a final one.)

-Dave-
Title: Re:
Post by: LKaven on January 21, 2014, 04:47:27 pm
yes, the quantization of light is always digital, and that's my point. Even in your retina.

Though you would be right to suggest that there are reasons to treat many natural processes as embodying computation over discrete domains, there is a practical matter.  Using a von Neumann computer to access natural information requires a step that converts natural information into a form that can be loaded into electronic storage registers.  
Title: Re:
Post by: allegretto on January 21, 2014, 06:40:09 pm
well, I know nothing about von Neumann or Harvard designs... :)

but I do know retinas. They are well and true digital, in every sense of the term. Nothing analog about it. That's the work of upper neurons grasping at understanding.


Though you would be right to suggest that there are reasons to treat many natural processes as embodying computation over discrete domains, there is a practical matter.  Using a von Neumann computer to access natural information requires a step that converts natural information into a form that can be loaded into electronic storage registers.  
Title: Re:
Post by: Bart_van_der_Wolf on January 21, 2014, 07:00:40 pm
well, I know nothing about von Neumann or Harvard designs... :)

but I do know retinas. They are well and true digital, in every sense of the term. Nothing analog about it.

Hi,

First time I've heard that (I have been told it's an analog photochemical process). Do you have any credible references for that opinion?

Cheers,
Bart
Title: Re:
Post by: allegretto on January 21, 2014, 11:00:20 pm
you've been told incorrectly. It is very much a digital process, wherein a photon strikes a photoreceptor, which yields a cascade of events that reverses membrane potentials. Again, recruitment determines the perceived intensity of the event.

Credible references abound;

http://www.vetmed.vt.edu/education/curriculum/vm8054/eye/rhodopsn.htm - a straightforward review of the chemistry, initiated by a photon. Not an analog process: a conformational change (cis-trans) that is again a 0 or 1. It is cis or trans, not cistrans.

http://www.d.umn.edu/~jfitzake/Lectures/DMED/Vision/Retina/VisualCycle.html - an expansion demonstrating the fact that different colors are sensed by the energy of the given photon

http://www.d.umn.edu/~jfitzake/Lectures/DMED/Vision/Retina/Photoreceptors.html - which further expands upon sensitivity in terms of how many photons it takes to cause a transduced signal to be produced.

All digital processes.

Not you personally, but where is this insistence that light is somehow analog coming from? Light is discrete packets of energy whose speed determines wavelength. When a sensitive receptor is struck by its complementary photon, the animal's brain sees light. These facts have been pretty much known and agreed to for quite some time. No controversy I'm aware of. Does the characterization of this as an analog process make folks feel better?


Hi,

First time I've heard that (I have been told it's an analog photochemical process). Do you have any credible references for that opinion?

Cheers,
Bart
Title: Re: 16 bit dslr
Post by: Petrus on January 22, 2014, 01:18:13 am
For photography and projection the light has to pass through a lens. That is actually the bottleneck in trying to achieve better DR, not sensors (which are getting better) or digital calculations (which could be of almost any accuracy already). There is always some dust, and there are internal reflections*, which spread the highlights over the full image area, turning the blackest black dark grey. This point has been reached already with 14 bit sensors.

For projection, better DR could possibly be had with laser projection, just an idea… That would need a lot of computing power and an expensive "projector", though.

*) look at the projector lens of just about any projector while it is running...
Title: Re:
Post by: hjulenissen on January 22, 2014, 01:45:18 am
Though you would be right to suggest that there are reasons to treat many natural processes as embodying computation over discrete domains, there is a practical matter.  Using a von Neumann computer to access natural information requires a step that converts natural information into a form that can be loaded into electronic storage registers.  
Some might claim that digital really is analog (since digital information must usually be transmitted or stored using traditional "analog" means). Others might claim that analog is really digital (since the world tends to be granular, at least at the levels of physics where I have some comprehension).

I think such discussions are besides the point. An analog transmission/storage encodes a signal as an "analog", i.e. there is a direct and obvious correspondence between the source variation (i.e. scene brightness) and some modulation of a property (i.e. "bright scene leads to large voltage"). In a digital transmission/storage, the correspondence between source variation and the encoding property is far less obvious (i.e. "scene brightness of X leads to a voltage pulstrain of [+1 -1 -1 +1]"). Some practical consequence of this is that (for digital):
1. Storage/transmission capacity (bandwidth/SNR) can usually be better exploited
2. "errors" can be detected and corrected, provided that they are within bounds
3. Complicated logic is needed, and delay is added

-h
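Point 2 above (errors can be detected and corrected, within bounds) can be shown with the simplest possible scheme, a 3x repetition code over noisy ±1 pulses; a toy sketch:

```python
def encode(bits):
    """Send each bit three times as +1/-1 pulses (a 3x repetition code)."""
    return [(+1.0 if b else -1.0) for b in bits for _ in range(3)]

def decode(pulses):
    """Majority-vote each group of three possibly noisy pulses back into a bit."""
    out = []
    for i in range(0, len(pulses), 3):
        votes = sum(1 if p > 0 else -1 for p in pulses[i:i + 3])
        out.append(votes > 0)
    return out

signal = encode([True, False, True])
signal[4] += 1.7               # analog noise flips one pulse outright...
print(decode(signal))          # [True, False, True]: the error is corrected
```

An analog encoding of the same signal would simply pass the corrupted voltage through; the redundancy and the decision logic are what make the digital version correctable (at the cost of bandwidth and delay, per points 1 and 3).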
Title: Re:
Post by: Vladimirovich on January 22, 2014, 01:53:34 am
http://www.vetmed.vt.edu/education/curriculum/vm8054/eye/rhodopsn.htm - a straightforward review of the chemistry
thank you, I totally forgot to eat my daily carrot !
Title: Re:
Post by: allegretto on January 22, 2014, 06:15:50 am
suppose I had a circuit and if one drops a molecule of water on a detector, the gate closes and a potential gets conducted lighting a bulb. Is that digital or analog? Say I had two such circuits... say I had x^y such circuits... A or D?

See, I think your answer is provided in your explanation, but it may not be the one you're thinking of. You note that "analog" is an affect of interplay (intermodulation) of signals to produce the effect. In digital, there is a direct connection between he action and the affect (need not be 1:1 however)

However I think you're right that at some level, everything is digital and it's just a matter of summation and dependent pathways.

Final test... men are digital, women are analog... ;)


Some might claim that digital really is analog (since digital information must usually be transmitted or stored using traditional "analog" means). Others might claim that analog is really digital (since the world tends to be granular, at least at the levels of physics where I have some comprehension).

I think such discussions are besides the point. An analog transmission/storage encodes a signal as an "analog", i.e. there is a direct and obvious correspondence between the source variation (i.e. scene brightness) and some modulation of a property (i.e. "bright scene leads to large voltage"). In a digital transmission/storage, the correspondence between source variation and the encoding property is far less obvious (i.e. "scene brightness of X leads to a voltage pulstrain of [+1 -1 -1 +1]"). Some practical consequence of this is that (for digital):
1. Storage/transmission capacity (bandwidth/SNR) can usually be better exploited
2. "errors" can be detected and corrected, provided that they are within bounds
3. Complicated logic is needed, and delay is added

-h
Title: Re:
Post by: Bart_van_der_Wolf on January 22, 2014, 06:40:48 am
you've been told incorrectly. It is a very much digital process wherein a photon strikes a photoreceptor which yields a cascade of events that reverses membrane potentials.

Hi,

You seem to be suggesting that because a photon either strikes or doesn't strike, it is a digital process. If that (probability/statistics) is your train of reasoning, then there would be no analog reality.

Yet, the arrival time of photons is random (Poisson distributed probability), the effect of a photon striking may be inhibited (e.g. lateral inhibition) by other chemicals, and neurotransmitters use electrotonic conduction which produces a constant flow of electric current (http://www.vetmed.vt.edu/education/curriculum/vm8054/eye/CNSPROC.HTM) (see "Graded Response and Release of Neurotransmitters") along the membrane. This turns a variable stream of discrete photons into a very analog process.
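Bart's Poisson point is easy to simulate: even a stream of perfectly discrete photons produces a fluctuating count, with a standard deviation of about sqrt(mean), which is exactly the shot noise that makes the signal behave in an analog way. A sketch:

```python
import random
import statistics

def photon_count(mean_rate: float) -> int:
    """Count photon arrivals in one unit-time exposure. Exponential
    inter-arrival times make the count Poisson-distributed (shot noise)."""
    t, n = 0.0, 0
    while True:
        t += random.expovariate(mean_rate)
        if t > 1.0:
            return n
        n += 1

random.seed(1)
counts = [photon_count(1000.0) for _ in range(1000)]
mean = statistics.mean(counts)
sd = statistics.stdev(counts)
# Shot noise: the standard deviation tracks sqrt(mean), so SNR grows as sqrt(N).
print(f"mean {mean:.0f}, sd {sd:.1f}, sqrt(mean) {mean ** 0.5:.1f}")
```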

Quote
Credible references abound;

Maybe it is the interpretation of that information that is a bit 'debatable'?

Quote
http://www.vetmed.vt.edu/education/curriculum/vm8054/eye/rhodopsn.htm - a straightforward review of the chemistry, initiated by a photon. Not an analog process, a conformational change (cis-trans) that is again a 0 or 1. It is cis or trans, not cistrans.

That's your interpretation. What I read is "The disintegration of rhodopsin into retinal and scotopsin is progressive", and "The eventual result is release", and "metarhodopsin II is the agent that ultimately effects the change in the rod membrane's charge". Also, "Under conditions of impinging light, [...], the flow of sodium ions into the rod outer segment is slowed or stopped", and "One photon, the minimum quantity of light possible, will cause the movement of millions of sodium ions, because of the catalytic nature of the enzymes and the large surface area provided for them to work".

Not a very binary process at all. Sure, one photon makes a difference (in a rod), but not always exactly the same difference, because the result is an analog process flow.

Quote
http://www.d.umn.edu/~jfitzake/Lectures/DMED/Vision/Retina/VisualCycle.html - an expansion demonstrating the fact that different colors are sensed by the energy of the given photon

Another gradual transition process "the activation of rhodopsin during phototransduction isomerizes 11-cis-retinal to the all-trans form, which dissociates from the opsin in a series of steps called "bleaching" ". Dark adaptation also causes a continuously variable sensitivity, not digital at all.

Quote
http://www.d.umn.edu/~jfitzake/Lectures/DMED/Vision/Retina/Photoreceptors.html - which further expands upon sensitivity in terms of how many photons it takes to cause a transduced signal to be produced.

Indeed, Rods have more photopigment and a high (single-photon) sensitivity but lower temporal resolution and more signal integration. Cones have lower sensitivity, higher temporal resolution, and less signal integration. Neither is digital at all.

Quote
All digital processes.


Really? It seems to be quite the opposite.

Quote
Not you personally, but where is this insistence that light is somehow analog coming from?
 Light is discrete packets of energy whose speed determines wavelength.

Actually, the speed of light is constant (in vacuum). Photons can be considered to exhibit both wavelength and energy characteristics. That's known as the wave-particle duality (http://en.wikipedia.org/wiki/Wave%E2%80%93particle_duality) of light. Waves are not digital, and even energy particles fluctuate as (valence) electrons are knocked in and out of the (outer) shell of atoms.

Cheers,
Bart
Title: Re:
Post by: hjulenissen on January 22, 2014, 07:19:00 am
Suppose I had a circuit where, if one drops a molecule of water on a detector, the gate closes and a potential is conducted, lighting a bulb. Is that digital or analog? Say I had two such circuits... say I had x^y such circuits... A or D?
My point: who cares? I believe that everything "digital" in communication or circuits can be explained by "analog" fundamentals. A CPU can be analysed using R and C and L and transistors and wires. It just gets very messy, and you quickly reach the stage where it is impossible to calculate the result numerically (or comprehend anything intuitively).

"Digital" is (in my view) best seen as some sort of man-made abstraction on top of "analog" that lets us do things that would otherwise be very impractical. Whether that "analog" thing at the bottom is really continuous or granular is for the majority of people and cases utterly irrelevant. Even if it were truly continuous, everything outside of a mathematician's whiteboard is troubled by measurement noise, meaning that observations of a signal or system contain some uncertainty (be it "noise" or "quantization" or "photons" or whatever).

-h
Title: Re:
Post by: Bart_van_der_Wolf on January 22, 2014, 07:30:40 am
"Digital" is (in my view) best seen as some sort of man-made abstraction on top of "analog" that lets us do things that would otherwise be very impractical. Whether that "analog" thing at the bottom is really continuous or granular is for the majority of people and cases utterly irrelevant.

Indeed. It's the inevitable integration over time (unless we travel at the speed of light) that makes random discrete phenomena analog, continuously variable. Discrete quantization then becomes a tool, a useful abstraction for statistical calculation.

Cheers,
Bart
Title: Re: 16 bit dslr
Post by: Fine_Art on January 22, 2014, 12:31:58 pm
If tonemapping allows one to render a large-DR scene using low-DR tech (while attempting to minimize artifacts), would not increasing the capture DR (which necessitates a higher number of bits at some point) be a worthwhile goal for some situations?

-h

Yes, if it is necessary to make the whole capture in one shot. That is a completely different argument (position). It has nothing to do with fitting the data properly in a print. You could use three 8-bit shots to gather the data. So the question is, do you shoot when DR is high, as at midday? Most people try to shoot when the light is directional, which shows off surface features. If you need to shoot the sun in a sunset shot, for example, you expect the sun to clip. Many classic prints, for example Ansel Adams', used a lot of dodge/burn to increase the contrasty look, further reducing the DR detail in the print below what film could capture.

We can generalize to the idea that it is always best to capture as much detail/data as the scene allows. My question is: can someone see in a print the difference between an image tonemapped from 12 bits vs. from 16 bits?
Title: Re: 16 bit dslr
Post by: hjulenissen on January 22, 2014, 01:21:32 pm
Yes, if it is necessary to make the whole capture in one shot. That is a completely different argument(position).
When talking about sensor DR I am naturally talking about making the capture in one shot. Multi-capture DR enhancement (as used in "HDR" photography) is an ergonomic and artistic kludge that we only use because of limitations in the single-capture DR.
Quote
So the question is, do you shoot when DR is high, as at midday? Most people try to shoot when the light is directional, which shows off surface features. If you need to shoot the sun in a sunset shot, for example, you expect the sun to clip. Many classic prints, for example Ansel Adams', used a lot of dodge/burn to increase the contrasty look, further reducing the DR detail in the print below what film could capture.

We can generalize to the idea that it is always best to capture as much detail/ data as the scene allows. My question is can someone see in a print the difference between tonemapped from 12 bit vs from 16 bit? 
If they can see the difference between a multi-exposure synthesized image that has been quantized to 12 vs 16 bits prior to tonemapping, then we can assume that it is possible to see the difference between a "true" 12-bit and 16-bit sensor.

-h
Title: Re:
Post by: allegretto on January 22, 2014, 01:59:14 pm
oh my, much here. I can see you'd like to debate this, but there is no need

First, either a membrane potential exists or it has collapsed. That there are different ranges of sensitivity in different cell populations due to suppressor or excitatory activity is a fact. But the truth is that at some point the potential collapses and conduction occurs or it does not. Again, about as binary as possible. This is true of a great many biologic processes. Recruitment of responses is what gives the event its "analog" appearance. Let's take an analog (pun intended) in an electronic circuit. Suppose we have a circuit with an instruction not to conduct until the gate potential (call it what you like) reaches 3 mv. At 3 mv it closes and sends a blip. Would you argue that this is a digital or analog event? Certainly differing blips being conducted at different potentials could summate and transmit an analog signal, but would you argue that the single circuit is analog?

If you look macro enough, the process can be analog-appearing. But the potential is the key to the information being conducted or not. And if not conducted, it is not detected. Doesn't mean it does not exist, it is simply below threshold. In your rhodopsin example you introduce the variable of "recovery" which certainly affects discharge. But it is the discharge that determines detection of information, not the amount of rhodopsin.

As far as the concept of wave/particle duality and "c" goes, perhaps this will help: http://www.living-universe.com/home/7-Photon-Energy.html. Photons do have mass and are not waves. Duality is an explanation, but not a reality.

Likewise, wavelength/speed is a by-product of SR and not really what I thought we were talking about. However, "c" is constant in its own frame of reference, but not to an outside observer. In laboratories workers have "slowed" photons to extremely low apparent velocities, but no one told the photon, so it happily goes on not realizing what some lab rats think; such is SR. Further, your observation about electron states and "color" is addressed as well. Note the photon retains its characteristics even when interacting with an atom. We have learned a great deal about electrons since 1905. They are not what we used to think. But again, this was not my point.

So one more time, I will note that a retinal cell either fires or it doesn't. If it doesn't fire, the visual cortex receives no signal from it and it remains dark. This is information however (being dark), so it reveals the true binary nature of a retinal cell. I consider that digital. You may not, I suppose, but that isn't how many of us view it.

Hi,

You seem to be suggesting that because a photon either strikes, or it doesn't strike, makes it a digital process. If that (probability/statistics) is your train of reasoning, then there would be no analog reality.

Yet, the arrival time of photons is random (Poisson distributed probability), the effect of a photon striking may be inhibited (e.g. lateral inhibition) by other chemicals, and neurotransmitters use electrotonic conduction which produces a constant flow of electric current (http://www.vetmed.vt.edu/education/curriculum/vm8054/eye/CNSPROC.HTM) (see "Graded Response and Release of Neurotransmitters") along the membrane. This turns a variable stream of discrete photons into a very analog process.

Maybe it is the interpretation of that information that is a bit 'debatable'?

That's your interpretation. What I read is "The disintegration of rhodopsin into retinal and scotopsin is progressive", and "The eventual result is release", and "metarhodopsin II, is the agent that ultimately effects the change in the rod membrane's charge". Also, "Under conditions of impinging light, [...], the flow of sodium ions into the rod outer segment is slowed or stopped", and "One photon, the minimum quantity of light possible, will cause the movement of millions of sodium ions, because of the catalytic nature of the enzymes and the large surface area provided for them to work".

Not a very binary process at all. Sure, one photon makes a difference (in a Rod), but not always exactly the same, because the result is an analog process flow.

Another gradual transition process "the activation of rhodopsin during phototransduction isomerizes 11-cis-retinal to the all-trans form, which dissociates from the opsin in a series of steps called "bleaching" ". Dark adaptation also causes a continuously variable sensitivity, not digital at all.

Indeed, Rods have more photopigment and have a high (single photon) sensitivity but lower temporal resolution, more signal integration. Cones have lower sensitivity, higher temporal resolution and less signal integration. Both are not digital at all.
 

Really? It seems to be quite the opposite.

Actually, the speed of light is constant (in vacuum). Photons can be considered to exhibit both wavelength and energy characteristics. That's known as the wave-particle duality (http://en.wikipedia.org/wiki/Wave%E2%80%93particle_duality) of light. Waves are not digital, and even energy particles fluctuate as (valence) electrons are knocked in and out of the (outer) shell of atoms.

Cheers,
Bart
Title: Re: 16 bit dslr
Post by: Fine_Art on January 22, 2014, 02:29:31 pm
When talking about sensor DR I am naturally talking about making the capture in one shot. Multi-capture DR enhancement (as used in "HDR" photography) is an ergonomic and artistic kludge that we only use because of limitations in the single-capture DR.If they can see the difference between a multi-exposure synthesized image that has been quantized to 12 vs 16 bits prior to tonemapping, then we can assume that it is possible to see the difference between a "true" 12-bit and 16-bit sensor.

-h

It is only a kludge if it works poorly.

I don't think anyone can tell the difference in print. Eric and other sources have given us side-by-sides of 16-bit MFDBs with 14-bit (DPR for example) or 12-bit (Sony Alpha) cameras. You can't reliably tell which is which, even on screens with much higher DR than the print.

My Sony A55 does a damn good job of 3-shot HDR to 8-bit jpg in camera. I have compared the 8-bit jpg to 3 raws HDRed to 16 bits in other software. The 8-bit jpg tone mapping is very good. The main difference is the color style compared to the output from lots of work with other software, or pixel-peeping detail which is bled out in print.

Provide a blind source test in prints to people at your local photo club. How many can tell the print from the 8-bit jpg from the print from the 14-bit raw if you match the colors correctly?
Title: Re:
Post by: Telecaster on January 22, 2014, 05:29:05 pm
Any guy who can imagine a Fat Finger to deal with shutter slap can't be a bad guy, so I'll stop the war.

There's no war. The same words, digital and analog in this case, can refer to somewhat different things in different contexts...as has been noted here by other folks. Jumping from A/D conversion to an inquiry into the ultimate nature of things is just way outside the scope of my initial post!

Fat Fingers come in very handy not only to reduce unwanted resonances but also to help balance slightly body-heavy instruments.   :D

-Dave-
Title: Re:
Post by: Bart_van_der_Wolf on January 22, 2014, 07:07:45 pm
oh my, much here. I can see you'd like to debate this, but there is no need

First, either a membrane potential exists or it has collapsed. That there are different ranges of sensitivity in different cell populations due to supressor or excitatory activity is a fact. But the truth is that at some point the potential collapses and conduction occurs or it does not. Again, about as binary as possible.

Hi,

Sorry to disappoint you. I'm not sure why, but it seems like you are trying to push a pet peeve, or maybe (to put it kindly) you just need to brush up on your knowledge of the subject.

Since you are unlikely to take my word for it, let me point you to a document that explains the somewhat more complex phototransduction process. It was the second Google link I found; there may be better sources, but it seems to offer a reasonable explanation. Here it is: http://arxiv.org/ftp/quant-ph/papers/0208/0208053.pdf (http://arxiv.org/ftp/quant-ph/papers/0208/0208053.pdf) (see section 2.1.1, Photon transduction and signal amplification).

In short, for those who are not that interested in reading all of it, the relevant part states:
Quote
If this molecule absorbs a photon, it undergoes photoisomerization forming straight chain version, known as all-trans-retinal. All-trans-retinal unleashes a series of conformational changes in the protein opsin fragment producing metarhodopsin II, which is the activated form of rhodopsin.
Most of the conformational changes occur in less than a millisecond, but the last transformation, from metarhodopsin II to metarhodopsin III, requires several minutes to accomplish.

"Requires several minutes to accomplish" doesn't sound like a digital process, does it? In addition, the section concludes with:
Quote
Thus the first essential feature of the retina is that it amplifies the photon signal and converts it into macroscopic electric currents.

Macroscopic electric currents are not exactly digital either, are they?

Quote
As far as the concept of wave/particle duality and "c", well perhaps this will help; http://www.living-universe.com/home/7-Photon-Energy.html. Photons do have mass and are not waves. Duality is an explanation, but not a reality.

If you are suggesting that diffraction, due to wavefronts that are disturbed by the edges of the aperture, does not exist, is not a reality, then we won't have to take diffraction seriously anymore. Or do we?

Besides, it might be wiser to use a reference with a bit more credibility than:
"James Carter began thinking about and developing alternative theories of physics as a teenager. Around 1968, he developed the principle of Gravitational Expansion that explained gravity to be the opposite of Einstein’s General Relativity."

Sure, Einstein got it all wrong ...

Quote
So one more time, ...

Thanks but no thanks. I've got better things to do, but I'm always open to good quality information.

Cheers,
Bart
Title: Re:
Post by: ErikKaffehr on January 23, 2014, 01:27:17 am
Hi,

1000000 photons/s analog but one photon/s digital?

Best regards
Erik




Hi,

Sorry to disappoint you. I'm not sure why, but it seems like you are trying to push a pet peeve, or maybe (to put it friendly) you just need to brush up on your knowledge on the subject.

Since you are unlikely to take my word for it, let me point you to a document that explains the somewhat more complex phototransduction process. It was the second Google link I found, there may be better sources, but it seems to offer a reasonable explanation. Here it is: http://arxiv.org/ftp/quant-ph/papers/0208/0208053.pdf (http://arxiv.org/ftp/quant-ph/papers/0208/0208053.pdf) (see section 2.1.1  Photon transduction and signal amplification).

In short, for those who are not that interested in reading all of it, the relevant part states:
"Requires several minutes to accomplish" doesn't sound like a digital process, does it. In addition, the section concludes with:
Macroscopic electric currents are not exactly digital either, are they?

If you are suggesting that diffraction, due to wavefronts that are disturbed by the edges of the aperture, does not exist, is not a reality, then we won't have to take diffraction seriously anymore. Or do we?

Besides, it might be wiser to use a reference with a bit more credibility than :
"James Carter began thinking about and developing alternative theories of physics as a teenager. Around 1968, he developed the principle of Gravitational Expansion that explained gravity to be the opposite of Einstein’s General Relativity."

Sure, Einstein got it all wrong ...

Thanks but no thanks. I've got better things to do, but I'm always open to good quality information.

Cheers,
Bart
Title: Re:
Post by: Bart_van_der_Wolf on January 23, 2014, 03:23:55 am
1000000 photons/s analog but one photon/s digital?

Hi Erik,

One photon is not digital. One airplane is not digital, and a thousand airplanes do not all of a sudden become analog either.

The possible signal amplification (a single photon can lead to hydrolysis of approximately 10^5 cGMP molecules/s) of the Rods in particular is impressive, but it is a constant stream of molecules:
Quote
Thus unlike ordinary neurons, which release transmitter from the synaptic button as a discrete event in response to an action potential, in photoreceptors there is a continuous release of neurotransmitter from the synapses, even in the dark.

Cheers,
Bart

Title: Re: 16 bit dslr
Post by: hjulenissen on January 23, 2014, 04:10:59 am
It is only a kludge if it works poorly.
Having to take 3 exposures, ensuring that there is no movement in between, and running special software to synthesize them into one _is_ a kludge compared to recording the same info in a single shot.

-h
Title: Re: 16 bit dslr
Post by: hjulenissen on January 23, 2014, 04:13:59 am
Provide a blind source test to people at your local photo club in prints. How many can tell the print from the 8 bit jpg to the print from the 14bit raw if you match the colors correctly?
I suspect that if I design the test in order to prove a point (and not for photographic value), 100% will be able to distinguish them.

You are of course right that for 99% of the people and 99% of the cases, 8 bits (with gamma) seems to be sufficient.

-h
Title: Re: 16 bit dslr
Post by: allegretto on January 23, 2014, 04:28:18 am
Ah your reference;

Let's take this from the summary: “...Moreover, we explicitly stress on the fact that due to existent amplification of the signal produced by each photon

“signal produced by each photon”...does that sound analog?

This: "If this molecule absorbs a photon, it undergoes photoisomerization forming straight chain version, known as all-trans-retinal. All-trans-retinal unleashes a series of conformational changes in the protein opsin fragment producing metarhodopsin II, which is the activated form of rhodopsin."

“if a molecule absorbs a photon... (something happens)” sound analog?

This: "Elaborate experiments have shown that the human is capable of consciously detecting the absorption of a single photon by a rod"

sound analog?

This: "Therefore when opened the CNG channels tend to depolarize the cell. If the photoreceptor cell is illuminated, cytoplasmic cGMP concentration decreases and disrupts the ionic current through the CNG channels, thereby hyperpolarizing the cell." (The paper's footnote 9 adds: "When the CGN channel is open, the resting membrane potential of -40 mV is dragged towards the reversal potential of the CGN channel, which is 0 mV. Thus the photoreceptor is depolarized.")

sound analog?

This: "Thus the first essential feature of the retina is that it amplifies the photon signal and converts it into macroscopic electric currents."

Amplification is not detection, no? It's not even the signal. But even here it says "photon signal", singular. So an event causes an effect; does that sound analog?

I’m afraid that your reference is talking about something completely different, but it is useful in that it describes the process well and we are back to a photon exciting a cell.
The best way to look at this, I suppose (signal modulation in the retina), is that:
ISO
Shutter Speed
Aperture
photosite array
all impact the picture in some way. But the picture itself is the result of photons striking the receptor. Not the ISO setting.

Diffraction and all? No, you referred to the "duality" of photons. I am merely pointing out that "duality" is an outmoded concept based on what we now know to be the structure and properties of photons.

And as far as airplanes go (now a long reach), you're talking about something entirely different. Airplanes are not photons hitting photoreceptors. An airplane may land or crash; that's binary, and information, however. You seem to not understand the distinction if that's your example.

Now about your expressions. You are the one who initially went point by point, hammer and tongs, about what I was saying. And beginning with your last post you've even brought in references which you apparently think buttress your case (actually they say exactly what I said: one photon hitting one cell is sufficient information). And now you claim it's a peeve for me...??? Please...

You obviously know a great deal more about the nuts and bolts of photography than I do. That’s cool. But either you haven’t really read what I said, or you haven’t read the reference or you don’t understand what is being said.

You've taken on the role of a bit of a typical Bulletin Board Bully (BBB) here, a role you have also taken on before. In my limited experience, every board has one or two. That's OK; you know a lot and I typically appreciate your input when you are not jumping to conclusions about people personally, or misguidedly assessing motives.
Right now you're trying for reversal ("push a pet peeve..."); that is BS. It doesn't matter, but I have learned more about you. You make it appear that it matters a great deal to me that you accept what I say, yet you're the one who initially went off point by point. I followed the first time since it was harmless, but now you're getting personal, which BBBs often do. It really doesn't matter to me whether you accept it or not, nor does it matter if you go on insisting photons are sometimes waves. I'm not here to convince you, especially when your notions are frozen into place. And oh, the further we go the more we realize that Einstein was right about a great many things, but not everything. That you reject the authors' overarching theory is fine, but it doesn't invalidate over a century of getting to know photons within new and more accurate concepts of the Universe.

Going to break off this discussion now since you’ve taken it to the absurd and are being far too personal. But you’re right, one of us needs to better understand the subject.

Title: Re: 16 bit dslr
Post by: LKaven on January 23, 2014, 12:04:43 pm
This started with a casual remark made about analog/digital distinctions as commonly used in digital camera discussions.  Right or wrong, they were casual remarks, intending to draw familiar distinctions.  In point of fact, the semantics of the terms "analog" and "digital" bring in extensive argumentation in naturalistic semantics.  The distinction is notoriously vague, and one wonders whether /either term/ has the kind of clarity needed for science.  But for purposes of discussion here, we recognize that the commonplace use of the "analog-to-digital converter" creates a practical distinction between the way signals are handled in two clearly different domains of engineering.
Title: Re: 16 bit dslr
Post by: hjulenissen on January 23, 2014, 12:54:29 pm
This started with a casual remark made about analog/digital distinctions as commonly used in digital camera discussions.  Right or wrong, they were casual remarks, intending to draw familiar distinctions.  In point of fact, the semantics of the terms "analog" and "digital" bring in extensive argumentation in naturalistic semantics.  The distinction is notoriously vague, and one wonders whether /either term/ has the kind of clarity needed for science.  But for purposes of discussion here, we recognize that the commonplace use of the "analog-to-digital converter" creates a practical distinction between the way signals are handled in two clearly different domains of engineering.
So the thread was originally about how many bits are needed in camera ADCs/raw files? Assuming a linear conversion, would this not be the number of bits necessary to encode the saturating end of the sensel, while still having quantization noise that is essentially hidden (dithered) by the photon/read noise?
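A back-of-the-envelope version of that bit count can be sketched in Python. The full-well and read-noise figures below are hypothetical example values, not measurements of any particular sensor:

```python
import math

def adc_bits_needed(full_well_e, read_noise_e):
    """Bits so that one ADC step (DN) is no larger than the read noise,
    i.e. quantization noise stays hidden (dithered) by sensor noise.
    full_well / 2**bits <= read_noise  =>  bits >= log2(full_well / read_noise)
    """
    return math.ceil(math.log2(full_well_e / read_noise_e))

# Hypothetical sensel: ~90,000 e- full well, ~3 e- read noise
print(adc_bits_needed(90000, 3))  # -> 15
```

Since deep shadows are also dominated by photon noise, this is arguably a conservative (upper-end) estimate.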

-h
Title: Re: 16 bit dslr
Post by: Bart_van_der_Wolf on January 23, 2014, 12:59:13 pm
This started with a casual remark made about analog/digital distinctions as commonly used in digital camera discussions.  Right or wrong, they were casual remarks, intending to draw familiar distinctions.  In point of fact, the semantics of the terms "analog" and "digital" bring in extensive argumentation in naturalistic semantics.  The distinction is notoriously vague, and one wonders whether /either term/ has the kind of clarity needed for science.

Hi Luke,

The term is actually well defined (http://en.wikipedia.org/wiki/Discrete-time_signal#Digital_signals) in signal processing circles:
"A digital signal is a discrete-time signal for which not only the time but also the amplitude has been made discrete."

Discrete-time can be substituted for discrete-space or position, e.g. when a sampling frequency in fractional seconds in time is replaced by fractional distance in space, depending on the particular use (e.g. sound signals with amplitude over time such as frequency, versus image signals such as luminance over pixel positions).

Discussions become 'difficult' when people deviate from the accepted definition. For those interested in more background information, I can recommend this website (http://101science.com/dsp.htm) as a starting point.

Cheers,
Bart
Title: Re: 16 bit dslr
Post by: LKaven on January 23, 2014, 01:22:00 pm
Hi Luke,

The term is actually well defined (http://en.wikipedia.org/wiki/Discrete-time_signal#Digital_signals) in signal processing circles:
"A digital signal is a discrete-time signal for which not only the time but also the amplitude has been made discrete."

Discrete-time can be substituted for discrete-space or position, e.g. when a sampling frequency in fractional seconds in time is replaced by fractional distance in space, depending on the particular use (e.g. sound signals with amplitude over time such as frequency, versus image signals such as luminance over pixel positions).

Discussions become 'difficult' when people deviate from the accepted definition. For those interested in more background information, I can recommend this website (http://101science.com/dsp.htm) as a starting point.

Of course Bart.  As I acknowledged, the term gains a clear distinction in engineering domains.  If the term is defined by stipulation, it still serves its practical purpose.  In naturalistic semantics however, the terms are subjected to more dialectical stress, and there is a deeper scientific distinction to be made (and defended, if possible).  I think this pretty much sums up the entire digression. 

Now about those cameras...
Title: Re:
Post by: ErikKaffehr on January 23, 2014, 02:19:19 pm
Hi,

It is digital in the sense that it is there or not there. If you illuminate a photomultiplier with very weak light and plot the output on an oscilloscope you will see discrete pulses for each photon. So I don't think individual photons are analogue.

Best regards
Erik

Hi Erik,

One photon is not digital. One airplane is not digital, and a thousand airplanes do not all of the sudden become analog either.

The possible signal amplification (a single photon can lead to hydrolysis of approximately 10^5 cGMP molecules/s) of the Rods in particular is impressive, but it is a constant stream of molecules:
Cheers,
Bart


Title: Re:
Post by: Bart_van_der_Wolf on January 23, 2014, 02:51:32 pm
It is digital in the sense that it is there or not there. If you illuminate a photomultiplier with very weak light and plot the output on an oscilloscope you will see discrete pulses for each photon. So I don't think individual photons are analogue.

Hi Erik,

And that is an excellent example of where sticking to the definition makes things easier to discuss. The photon is only discrete in number; there are indeed no fractional photons, but it is continuous in time (arrival time is random). Hence it does not comply with the signal-theory definition of a digital signal, and thus it is analog.

See how much easier it becomes to discuss things when we look for both the discrete-quantity/amplitude and discrete-time dimensions?

We can digitize (quantize) the photons 'being there' or not, by defining a discrete-time interval, and that helps in defining the benefits and drawbacks of 16-bit DSLR components, and features like noise.
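That digitizing step can be illustrated with a small Python sketch (the rate, exposure, and bin width below are arbitrary assumptions): photon arrivals are discrete in count but continuous in time, and only counting them per discrete interval makes the signal digital in both dimensions:

```python
import random

random.seed(1)  # reproducible sketch

rate = 5.0       # assumed mean photon arrival rate (photons/second)
exposure = 10.0  # total observation time in seconds

# Poisson process: exponential inter-arrival times, continuous in time.
t, arrivals = 0.0, []
while True:
    t += random.expovariate(rate)
    if t >= exposure:
        break
    arrivals.append(t)

# Digitization: choose a discrete-time interval and count photons per bin.
bin_width = 1.0
counts = [0] * int(exposure / bin_width)
for a in arrivals:
    counts[int(a // bin_width)] += 1

print(counts)  # discrete in both time (bins) and amplitude (integer counts)
```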

Cheers,
Bart
Title: Re:
Post by: ErikKaffehr on January 23, 2014, 04:01:55 pm
Hi Bart,

Thanks for taking time to explain, your effort is much appreciated!

Best regards
Erik

Hi Erik,

And that is an excellent example where sticking to the definition makes things easier to discuss. The photon is only discrete in number, there are indeed no fractional photons, but it is continuous in time (arrival time is random). Hence it does not comply with the signal theory definition of digital signal and thus it is analog.

See how much easier it becomes to discuss things when we look for both discrete-quantity/amplitude and discrete-time dimension?

We can digitize (quantize) the photons 'being there' or not, by defining a discrete-time interval, and that helps in defining the benefits and drawbacks of 16-bit DSLR components, and features like noise.

Cheers,
Bart
Title: Re: 16 bit dslr
Post by: bjanes on January 23, 2014, 10:27:34 pm
I suspect that if I design the test in order to prove a point (and not for photographic value), 100% will be able to distinguish them.

You are of course right that for 99% of the people and 99% of the cases, 8 bits (with gamma) seems to be sufficient.

-h

8 bits with gamma may be sufficient for many, but it is not optimal for high quality output.

The Nikon D800e is one of the better-performing sensors evaluated by DXO, and they report a screen DR of 13.23 stops. This is for a SNR of 0 dB, or 1:1, and a SNR of 1 is not photographically useful. One may derive DRs for other SNRs from the DXO full SNR curve. The saturation at SNR 0 dB (1:1) is 0.01%. Dividing this value into 100% gives a DR of 10,000:1, or 13.29 stops (log2 10,000 = 13.29), very close to the reported value of 13.23.

One can derive the DRs for SNR 18 dB, 12 dB, and 6 dB (SNRs of 8:1, 4:1 and 2:1 respectively) by interpolation. The interpolated saturations at 18 dB and 12 dB are 0.144% and 0.055%, and the corresponding DRs are 9.4, 10.8, and 12.2 stops respectively. If one considers a noise floor of 8:1 as acceptable for photographic use, the DR of the D800e is 9.4 stops.
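The stop counts follow directly from the saturation percentages; a quick Python check, using only the figures read off the DXO curve above:

```python
import math

def stops_from_saturation(sat_percent):
    """DR in stops when the noise floor sits at the given saturation
    percentage of full scale, i.e. log2(100% / saturation)."""
    return math.log2(100.0 / sat_percent)

print(round(stops_from_saturation(0.01), 2))   # SNR 0 dB  -> 13.29 stops
print(round(stops_from_saturation(0.144), 1))  # SNR 18 dB -> 9.4 stops
print(round(stops_from_saturation(0.055), 1))  # SNR 12 dB -> 10.8 stops
```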

(http://bjanes.smugmug.com/Photography/8-bit-DR/i-MzwHcc7/0/O/D800_SNR.png)

How many bits are needed to encode this DR? An 8-bit sRGB file can encode a total DR of 11.7 stops. The maximum encoded value is 255 and the minimum value is 1, but one must convert to linear (scene-referred) values to obtain the scene luminance ratios. The sRGB value of 1 is converted to linear by dividing by 12.92 (see inverse gamma), which yields 0.077399. Thus, the DR is 255/0.07739, or 3295, or 11.7 stops.

However, effective DR can be limited by posterization. Human vision can detect a luminance difference of about 1%, and the difference between luminance levels should be kept below this amount for the highest quality output. The steps between levels in gamma-encoded data are variable and are largest at the low end. For example, the difference between the sRGB levels of 1 and 2 is 100%. In his article (http://www.anyhere.com/gward/hdrenc/hdr_encodings.html) on HDR encoding, Greg Ward uses a cutoff value of a 5% difference between levels in determining the DR of 8-bit sRGB, since this amount of error may not be noticeable in the darkest levels. This 5% cutoff occurs at a sRGB value of approximately 44, which converts to 6.4 linear. The effective DR for high quality output at this cutoff is therefore 255/6.4, or 5.3 stops. In log10 notation this is 1.6 orders of magnitude, as shown in Table 1 of the article.
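That arithmetic can be checked with a few lines of Python, using the standard sRGB inverse transfer function (kept on the 0-255 scale so the code-value ratios read directly):

```python
import math

def srgb_to_linear255(v):
    """Inverse sRGB gamma for an 8-bit code value, result on a 0-255 scale."""
    c = v / 255.0
    lin = c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    return lin * 255.0

# Total encodable range: code value 1 (-> 0.0774 linear) up to 255.
print(round(math.log2(255.0 / srgb_to_linear255(1)), 1))   # -> 11.7 stops

# Ward's 5% step-size cutoff near code value 44 (-> ~6.4 linear).
print(round(math.log2(255.0 / srgb_to_linear255(44)), 1))  # -> 5.3 stops
```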

The 5.3 stops of high quality output obtained with an sRGB JPEG is not optimal for the D800e.

Bill
Title: Re: 16 bit dslr
Post by: hjulenissen on January 24, 2014, 05:14:01 am
An 8 bit sRGB file can encode a total DR of 11.7 stops. The maximum encoded value is 255 and the minimum value is 1
Why not a minimum value of 0? An 8-bit number can represent [0...255] (inclusive) or [1...256] or some other range.
Quote
The 5.3 stops of high quality output obtained with an sRGB JPEG is not optimal for the D800e.

Bill
Hence I avoided claiming that 8 bits was optimal for anything, just that it appears to be "acceptable" for a large percentage of people and cases. Poynton claims (I believe) that 8 bits with ideal gamma allows for transparency (no banding) at contrast ratios of 50:1. In practice it seems that this is a worst-case estimate (or people accept slight banding).

-h
Title: Re: 16 bit dslr
Post by: bjanes on January 24, 2014, 07:49:23 am
Why not a minimum value of 0? An 8-bit number can represent [0...255] (inclusive) or [1...256] or some other range.
AFAIK 0..255 is universally used for 8 bit output in Photoshop and other applications. However, I doubt that anyone would notice any difference in output if one used 1..256, since the difference between 0 and 1 is not perceptible on screen or in print. On the other hand, using 0 would give an infinite proportional step between 0 and 1, and the DR would be infinite. Not very useful :-[.

Hence I avoided claiming that 8 bits was optimal for anything, just that it appears to be "acceptable" for a large percentage of people and cases. Poynton claims (I believe) that 8 bits with ideal gamma allows for transparency (no banding) at contrast ratios of 50:1. In practice it seems that this is a worst-case estimate (or people accept slight banding).
50:1 is a pretty low contrast ratio, a DR of only 5.64 stops. This is within the high quality encoding range described in my prior post. A typical print has a contrast ratio of ~250:1, and screen contrast is even higher. Another authority, Norman Koren (http://www.normankoren.com/digital_tonality.html), does state: "But image quality in an 8/24-bit file will be adequate, though just barely, if the exposure is correct and little editing is required. This is achievable in studio environments, but less often when using "natural" (i.e., uncontrolled) light.", which supports Poynton's view and yours.

I do use high quality JPEG for prints at my local Costco (profile sRGB or their custom profile) and have not noted any banding. However, sRGB is insufficient to record all colors captured by the camera and I use 16 bit ProphotoRGB when printing with my Epson 3880. AdobeRGB is a bit better, but still insufficient. My camera does not have a wide gamut output space for JPEG, so I must shoot raw if I want to render into ProphotoRGB. 8 bit encoding may work with ProphotoRGB, but most authorities recommend 16 bit.

Mileage varies!

Bill


Title: Re:
Post by: allegretto on January 24, 2014, 04:55:25 pm
Hello,

Yes, but beyond the photons, the resting potential of the photoreceptor, the physiologic issue, is likewise digital: either it is at -40mV and silent (0), or it is at ~+5mV and firing (1). There is no "in-between". Engineering or physiology, discrete values are the essence of "digital". Temporal and spatial issues are what give it the "analog" experience.

Hi,

It is digital in the sense that it is there or not there. If you expose a photomultiplier to very weak light and plot the output on an oscilloscope you will see discrete pulses for each photon. So I don't think individual photons are analogue.

Best regards
Erik

Title: Re:
Post by: xpatUSA on January 28, 2014, 01:59:36 pm

BTW, quantum mechanics does "respect" the plank length.

. . . as did the pirates of yore . . .

(sorry, just couldn't resist it, please don't get mad)

cheers,
Title: Re:
Post by: hjulenissen on January 28, 2014, 02:32:48 pm
Engineering or Physiology, discrete values are the essence of "digital".
As an engineer (I cannot really speak for physiology), I'd disagree. Discrete values are not (in my humble opinion) the _essence_ of "digital". As this discussion has shown, the physical signal can have discrete and continuous aspects, and the physical theory can lead us to this or that conclusion (depending on how many physics classes one attended). The essence of digital lies in the flexible interpretation/mangling of the signal that is transmitted or received. This is what enables packetized networks to "jump" across umpteen links and large distances with your JPEGs presented in all their glory (while an analog transmission would probably suffer all kinds of visible degradations).

-h
Title: Re: 16 bit dslr
Post by: allegretto on January 29, 2014, 12:32:31 am
In physiology we refer to "digital" as information composed of discrete, binary values. It turns out that your nervous system is exactly that. A neuron's responses can be modulated by inhibitory or excitatory factors which alter the thresholds for its gates to open, but it still exhibits the given values (depending upon the tissue type); it just takes more or less stimulus to trigger due to the modulators.

If that seems analog to you then I have no argument, call it as you wish. Interestingly, modulation is such a complex factor in the CNS that it is a whole "Science" by itself. The issue from before was quite different and there is no need to re-hash.

xpat - cool here!
Title: Re: 16 bit dslr
Post by: david distefano on January 30, 2014, 09:52:25 pm
as the op of this conversation i got lost about page 3. all i was really asking was the possibility of a 16 bit sensor on a dslr a better option than say the rumored nikon d4x at 54mp. i also read on this conversation that 16 bit really isn't 16. some of you have said that 2 to 3 bits is, if i read correctly, noise. do you also lose the same amount on a 14 bit sensor such as the d800/e?
Title: Re: 16 bit dslr
Post by: Telecaster on January 30, 2014, 10:22:58 pm
My take, minus the thread digressions: more photosites have benefits, and so does greater usable dynamic range. I'm not convinced that with current sensors 16-bit A/D conversion offers any real-world benefit. But near-future tech is IMO likely to redraw the battlefield, so to speak. So hang tight.   :)

-Dave-
Title: Re: 16 bit dslr
Post by: LKaven on January 30, 2014, 10:51:05 pm
as the op of this conversation i got lost about page 3. all i was really asking was the possibility of a 16 bit sensor on a dslr a better option than say the rumored nikon d4x at 54mp. i also read on this conversation that 16 bit really isn't 16. some of you have said that 2 to 3 bits is, if i read correctly, noise. do you also lose the same amount on a 14 bit sensor such as the d800/e?

The width of the A-D converter, 14 or 16 bit, has mostly to do with who is making the converter.  The Sony chips do conversion on the sensor, and 14 bits are exactly as many as are needed.  Sourcing A-D converters on the open market, you will likely find more 16-bit converters, since 16 bits is a convenient 2-byte quantity.  

MFDB makers have been sourcing these commodity 16-bit converters all along.  The words "16 bits" are then inserted in the specs and marketing literature as if to suggest that the camera produces 16 bits of dynamic range per pixel.  This meme is hard to kill.

The D800/e makes good use of its 14 bits in per-pixel response.  Most CCDs, due to read noise, yield only about 12 and a fraction bits per pixel.  It's when you downsample 80M of these pixels to about 12M to print that you gain some bit precision in the process.
Title: Re: 16 bit dslr
Post by: hjulenissen on January 31, 2014, 01:46:46 am
all i was really asking was the possibility of a 16 bit sensor on a dslr a better option than say the rumored nikon d4x at 54mp.
That would be an apples-to-oranges comparison.
Quote
i also read on this conversation that 16 bit really isn't 16. some of you have said that 2 to 3 bits is, if i read correctly, noise. do you also lose the same amount on a 14 bit sensor such as the d800/e?
This is a bit like judging the top speed of a car by seeing how far the speedometer will go. You can be pretty sure that a car won't go any faster than the highest rating on its speedometer, but cannot be certain that it will be able to reach the highest rating.

-h
Title: Re: 16 bit dslr
Post by: Bart_van_der_Wolf on January 31, 2014, 04:47:39 am
as the op of this conversation i got lost about page 3. all i was really asking was the possibility of a 16 bit sensor on a dslr a better option than say the rumored nikon d4x at 54mp. i also read on this conversation that 16 bit really isn't 16. some of you have said that 2 to 3 bits is, if i read correctly, noise.

Hi David,

That is correct, and the noise is coming from several sources.

Suppose that a sensel could collect the charge of 60000 converted photons; then, due to the random arrival rate of photons, photon shot noise will be sqrt(60000) = 244.95 photons (0.408%) at the high exposure end. That's not much, but it will become noticeable when we start reducing the level of exposure. Cutting the exposure in half (or doubling the ISO) will leave a maximum of 30000 photons with a shot noise of sqrt(30000) = 173.21 photons (0.577%).

By the time we have reached an exposure level of 14 stops below maximum (= deep shadows) there will be only 3.66 photons of exposure recorded on average, with a photon shot noise of 1.91 (52.26%). At 15 stops below maximum the signal will become 1.83 photons with a noise of 1.35 (73.9%). That is only caused by the random nature of light particles as they build up the exposure over continuous time. Note: discrete particles, continuous time, hence light is not a digital but an analog signal. It becomes digital after quantization.

You can imagine that any additional electronic noise from the circuits that process that already noisy signal (e.g. read-out noise and dark current), plus slight amplification differences between the per-sensel transistors (PRNU), will immediately wreak havoc on the already marginal signal/noise ratio. It will be very difficult to exceed 14 bits of real signal.

Only by reducing that additional (read, PRNU, etc.) noise (e.g. by super cooling and very high component quality), or increasing the storage capacity (well depth) and exposure time, can we approach between 15 and 16-bit signal accuracy per sensel. With the shrinking sensel sizes, I think that 14 to 14.5-bit signal accuracy will be the practical limit, for which 16-bit components would be required (14-bit components will be too close to the best signal level with no room to spare).

Quote
do you also lose the same amount on a 14 bit sensor such as the d800/e?

The D800/D800E can approach 14-bits of actual signal with a well depth of some 45000 electrons, but it clips part of the Read noise before writing the Raw data. So the maximum signal is something like 15.46-bit, minus 1.4-2 bits of noise.

Do note that this is all before demosaicing, gamma, and tone curve adjustment, which may amplify or reduce the visibility of noise in the final image.

Cheers,
Bart
Title: Re: 16 bit dslr
Post by: ErikKaffehr on January 31, 2014, 02:53:17 pm
Hi,

The number of bits needed is just a question of how many electron charges a pixel can hold (called FWC, the full well capacity) and the noise when reading those pixels. If you divide the FWC by the readout noise you get the dynamic range of the sensor as a large number. You need a number of bits to represent that number, which is log(N) / log(2).

So let's say FWC is 60000 electrons and readout noise is 15 electron charges; then the dynamic range would be 60000/15 = 4000. Log(4000) / log(2) = 11.97, so 12 bits would be adequate for that sensor.

Now, let's reduce readout noise to 3 electron charges; then we would have 60000 / 3 = 20000. Log(20000) / log(2) = 14.3, so that sensor would need 15 bits.

Increasing MP makes the pixels smaller: half the area -> half the FWC, so the DR per pixel will be less. The mythical 54 MP camera would perhaps have 30000 in FWC and 3 electron charges of readout noise. 30000 / 3 = 10000, log(10000) / log(2) = 13.3, so it would do just fine with 14 bits.

Going the other direction, making pixels larger could increase DR per pixel. Fat pixel could perhaps hold 150000 electron charges, but if those sensors have high readout noise they still don't need that many bits.

So 14 bits are quite safe now.

Best regards
Erik


as the op of this conversation i got lost about page 3. all i was really asking was the possibility of a 16 bit sensor on a dslr a better option than say the rumored nikon d4x at 54mp. i also read on this conversation that 16 bit really isn't 16. some of you have said that 2 to 3 bits is, if i read correctly, noise. do you also lose the same amount on a 14 bit sensor such as the d800/e?
Title: Re: 16 bit dslr
Post by: Bart_van_der_Wolf on January 31, 2014, 04:28:20 pm
So 14 bits are quite safe now.

Hi Erik,

I'd say that with almost 14-bit DR for the D800 we're just safe per sensel, with virtually no real room to spare.

However, the trend is towards smaller sensels, and that's when we'll have to consider DR per unit area, with multiple sensels per such unit area. We'll ultimately get to something like 1 micron sensel pitch ('well depth' of maybe 1000-1500 electrons each) without the need for an OLPF because diffraction and residual lens aberrations will limit the contrast beyond the Nyquist frequency enough to avoid aliasing.

Cheers,
Bart
Title: Re: 16 bit dslr
Post by: LKaven on January 31, 2014, 05:02:16 pm
Hi Erik,

I'd say that with almost 14-bit DR for the D800 we're just safe per sensel, with virtually no real room to spare.

However, the trend is towards smaller sensels, and that's when we'll have to consider DR per unit area, with multiple sensels per such unit area. We'll ultimately get to something like 1 micron sensel pitch ('well depth' of maybe 1000-1500 electrons each) without the need for an OLPF because diffraction and residual lens aberrations will limit the contrast beyond the Nyquist frequency enough to avoid aliasing.

Meanwhile, worries about Photoshop's 15-bit limit are encroaching.  I'm sure way back when they never thought we'd have real 16-bit image data, and that they could steal a bit for some reason.  But it looks like the day of reckoning may have come.
Title: Re: 16 bit dslr
Post by: bjanes on January 31, 2014, 06:57:44 pm
Hi Erik,

I'd say that with almost 14-bit DR for the D800 we're just safe per sensel, with virtually no real room to spare.

However, the trend is towards smaller sensels, and that's when we'll have to consider DR per unit area, with multiple sensels per such unit area. We'll ultimately get to something like 1 micron sensel pitch ('well depth' of maybe 1000-1500 electrons each) without the need for an OLPF because diffraction and residual lens aberrations will limit the contrast beyond the Nyquist frequency enough to avoid aliasing.

I'm glad to have 14 bit files with the D800e, which can encode the full DR of the sensor, about 13 EV. However, the useful photographic DR is less than the engineering DR of 13.3 EV, depending on what noise floor is used for practical photographic DR. One can derive DR at other noise floors from the DXO data as Emil explains here (http://www.luminous-landscape.com/forum/index.php?topic=42158.0).

Using this method, I derived the DR for noise floors of 0, 6, 12, and 18 db.

(http://bjanes.smugmug.com/Photography/Sensor-Analysis/D800e-Sensor/i-XDfN7Xw/0/O/D800_SNR.png)

The calculations are tabulated here. The interpolation method is that in the referenced Wikipedia article on log interpolation.

(http://bjanes.smugmug.com/Photography/Sensor-Analysis/D800e-Sensor/i-5xgMJPJ/0/O/D800SNRb.png)

The dynamic ranges derived from Imatest are shown here.

(http://bjanes.smugmug.com/Photography/Sensor-Analysis/D800e-Sensor/i-qBS5cFF/0/O/01_PV2010_ExpNe0_5_Step_2.png)

So 14 bits give a bit of safety for the D800e and likely would be sufficient for the new PhaseOne IQ250. The 16 bit files from the older PhaseOne sensors are merely marketing hype, or more charitably, a consequence of the off-the-shelf 16 bit ADCs used in those CCD cameras, which have only about 12 stops of DR.

Bill
Title: Re: 16 bit dslr
Post by: LKaven on January 31, 2014, 08:31:44 pm
The Emil Martinec files, which should be digested at this point, contain some of the best writing anywhere on the internet on the subject.  Haven't seen Emil around for quite a while.  I'm sure he's busy working out cutting edge string theory instead.
Title: Re: 16 bit dslr
Post by: Fine_Art on January 31, 2014, 08:36:52 pm
Good to know, Bill; there are probably a lot of people who got sucked in by believing they were missing something between 14 and 16 bits. Probably the only 16-bit (of real data) CCD cameras available to consumers are the tiny cooled chips for astrophotography: $1000 for a 640x480 Peltier-cooled chip, maybe greyscale. Kept 30°C below ambient temperature, they may have a noise floor low enough.
Title: Re: 16 bit dslr
Post by: david distefano on January 31, 2014, 11:34:30 pm
[Good to know Bill, there are probably a lot of people that got sucked in by believing they were missing something from 14 to 16 bits.]

this has been interesting. i for one believed i was losing out on shades of color since i sold my mfdb and have been using the d800. ok i have this question, (since i am still contemplating a mfdb to use with my arca swiss) if nikon or canon or sony come out with their 54mp cameras and paired with the zeiss otus lenses and printing to a maximum size of 24x30 what am i really going to gain with a mfdb. (excluding the 60 and 80mp backs, too much money.) a p45 used is going for about $7,000 which is about my limit unless lotto hits. i shoot images for the pure enjoyment not as a business that would be able to write off the equipment.
Title: Re: 16 bit dslr
Post by: Vladimirovich on January 31, 2014, 11:40:31 pm
I'm sure he's busy working out cutting edge string theory instead.
that thing rumored to have 26bits !!!
Title: Re: 16 bit dslr
Post by: ErikKaffehr on February 01, 2014, 01:06:29 am
Hi Bart,

I don't think so. My guess is that we are going down to pixel sizes of about 3 microns on "real cameras". It is possible to make smaller pixels, but I would guess that lenses set practical limits. Small pixels are possible, but the approach of a larger sensor with reasonable-size pixels seems more rational to me.

Best regards
Erik




Hi Erik,

I'd say that with almost 14-bit DR for the D800 we're just safe per sensel, with virtually no real room to spare.

However, the trend is towards smaller sensels, and that's when we'll have to consider DR per unit area, with multiple sensels per such unit area. We'll ultimately get to something like 1 micron sensel pitch ('well depth' of maybe 1000-1500 electrons each) without the need for an OLPF because diffraction and residual lens aberrations will limit the contrast beyond the Nyquist frequency enough to avoid aliasing.

Cheers,
Bart
Title: Re: 16 bit dslr
Post by: LKaven on February 01, 2014, 02:44:54 am
I don't think so. My guess is that we are going to pixel sizes down to 3 microns on "real cameras". It is possible to make smaller pixels, but I would guess that lenses set practical limits. Small pixels are possible but the approach of a larger sensor with reasonable size pixels seem more rational to me.

Fossum sees APS-C sensors topping out at about 100MP or so.  I haven't computed the pixel size for that, but he's on record as saying that pixels will continue down to the 900nm range.  At under 2um, APS-C sensors will benefit from BSI.

Of course then there's Fossum's idea for a jot sensor: one photon per jot, sampled 250-1000 times per second across the entire sensor surface. Bit-slice image planes then get integrated through a sigma (summation) unit, possibly after being shifted to account for subject movement.
Title: Re: 16 bit dslr
Post by: Bart_van_der_Wolf on February 01, 2014, 05:05:57 am
Fossum sees APS-C sensors topping out at about 100MP or so.  I haven't computed the pixel size for that, but he's on record as saying that pixels will continue down to the 900nm range.  At under 2um, APS-C sensors will benefit from BSI.

Hi Luke,

Indeed. For those not familiar with Dr. Eric Fossum (http://en.wikipedia.org/wiki/Eric_Fossum), he's the inventor of the CMOS image sensor.

Quote
Of course then there's Fossum's idea for a JOT sensor.  One photon per pixel, sampled at 250-1000 times per second across the entire sensor surface.  Bitslice image planes will get integrated through a sigma (summation) unit, possibly after getting shifted to account for subject movement.

Yes, there are several innovative ideas being tossed around. Here (http://www.imagesensors.org/Past%20Workshops/Past%20Workshops.htm) is a nice collection of PDF papers on the various issues and solutions. Some show examples of existing 1.1 micron pitch sensors with a well capacity of some 2700 electrons.

Cheers,
Bart
Title: Re: 16 bit dslr
Post by: thierrylegros396 on February 01, 2014, 05:41:28 am
The first audio Compact Disc players (Philips, 1984 if I remember) used 14 bit DA converters.
It was just the minimum required to ensure that the converter noise would be masked by ambient noise (about 84dB SNR).

Then 16 bit converters appeared (about 96dB SNR).
Then 18 bits (about 108dB), and after that 20 (120dB), and so on till today.
Same for the AD converters used for recording the source.

What to say about that?!

It seems that 16 bits is more than enough from a practical point of view, although golden ears prefer slightly more.

I don't know if it's the same for optical sources and media (screens, but certainly not paper), but it's possible.

Thierry
Title: Re: 16 bit dslr
Post by: ErikKaffehr on February 01, 2014, 05:54:56 am
Hi,

Audio CDs are recorded at 16 bits. But photography is based on light, and light has noise of its own. So 16 bits is really 12-14 bits of signal and 2-4 bits of noise.

Just to demonstrate, check the Phase One web site.

The IQ250 has 14 EV dynamic range and 14 bit processing. The IQ260 and IQ280 have "16-bit opticolor" but only 13 EV DR (that is, 13 bits), while the IQ250 has 14 EV DR (14 bits).

So the older backs have 16 bits (13 bits of signal + 3 bits of noise) while the IQ250 has 14 bits (14 bits of signal + 0 bits of noise). Just to make life easy, you can take the 14 bits from the IQ250 and convert them to 16 bits.

Here is the 'C' code:

sixteen_bit_signal = (fourteen_bit_signal << 2) + rand() % 4;

Best regards
Erik




The first audio Compact Disc players (Philips, 1984 if I remember) used 14 bit DA converters.
It was just the minimum required to ensure that the converter noise would be masked by ambient noise (about 84dB SNR).

Then 16 bit converters appeared (about 96dB SNR).
Then 18 bits (about 108dB), and after that 20 (120dB), and so on till today.
Same for the AD converters used for recording the source.

What to say about that?!

It seems that 16 bits is more than enough from a practical point of view, although golden ears prefer slightly more.

I don't know if it's the same for optical sources and media (screens, but certainly not paper), but it's possible.

Thierry
Title: Re: 16 bit dslr
Post by: hjulenissen on February 02, 2014, 01:19:53 am
Hi,

Audio CD-s recorded at 16 bits. But photography is based on light and light has a noise of it's own. So 16 bit is now 12-14 bit of signal and 2-4 bits of noise.
I am sure that there are granularity limits for sound waves as well. But they tend to be irrelevant because all known acoustic environments have so much noise (air conditioning, outside traffic, ...).

Audio offers 24 bit AD and DA today, but I believe that the Effective Number Of Bits is 20 or so. Peer-reviewed blind listening tests have so far been unable to distinguish 16-bit from higher resolution formats.

-k
Title: Re: 16 bit dslr
Post by: hjulenissen on February 02, 2014, 01:27:33 am
Meanwhile, worries about Photoshop's 15-bit limit are encroaching.  I'm sure way back when they never thought we'd have real 16-bit image data, and that they could steal a bit for some reason.  But it looks like the day of reckoning may have come.
I really don't get this. Today's computers have really fast floating-point units. x86 allows for SIMD operations that can take chunks of 4 (or 8) float values simultaneously and do stuff like multiply-add.

While the throughput of 32-bit float adds is likely half that of 16-bit fixed-point adds, fixed point tends to require more operations, and it certainly slows down development and testing. Just as importantly, whenever Intel introduces new SIMD goodies (such as MMX -> SSE -> AVX) they typically do floating point first, then introduce fixed point a generation later.

-h
Title: Re: 16 bit dslr
Post by: ErikKaffehr on February 02, 2014, 03:08:22 am
Hi,

Well, we see new raw converters using floating point, and HDR uses some kind of floating point, too. Some people may object to floating point lacking precision, the 1.99999 != 2.00000 syndrome.

Most image processing software is pretty old and probably has a lot of quite obscure code based on binary operations. The best way is probably to start from scratch.

Best regards
Erik



I really don't get this. Todays computers have really fast floating-point units. x86 allows for SIMD operations that can take chunks of 4 (or 8) float values simulatneously and do stuff like multiply-add.

While the throughput of 32-bit float add is likely 1/2 that of 16-bit fixed-point add, fixed-point operations tends to introduce more operations, and certainly slow down the developement/testing. Just as importantly, whenever Intel introduce new SIMD goodies (such as mmx->SSE->AVX) they will typically do floating-point first, then introduce fixed-point one generation later.

-h
Title: 16 bit dslr: higher res. will make 14 bits enough?
Post by: BJL on February 02, 2014, 01:04:59 pm
My guess is that the long-time trend towards more, smaller photo-sites on sensors of a given size will continue as long as the noise floor per photo-site continues to drop, so that per-pixel SNR never gets beyond about 16,000:1 (2^14), meaning that 14 bits will always be enough. In fact, with full well capacities apparently steady at about 1600 electrons per square micron (according to Roger Clark at http://www.clarkvision.com/articles/does.pixel.size.matter/#Unity_Gain), once pixel sizes get down to about 3.3 microns (bigger than the Sony RX100's 2.4 microns; just a bit smaller than the 16MP 4/3" sensors), the full well capacity will be about 16,000 e- or less, so an ideal 14 bit ADC could count the electrons exactly.

This seems an easier path than having fewer, bigger photo-sites that need 16-bit ADCs, because the electron counting is done with more parallelism: with the same exposure over the same total sensor area, more smaller photo-sites count the same total number of electrons in more, smaller bundles, using more column-parallel ADCs, so that 14 bits rather than 16 are enough, which allows faster ADC operation.
Title: Re: 16 bit dslr
Post by: Ajoy Roy on February 03, 2014, 09:47:48 am
We are assuming that as sensel sizes go down, the full well capacity will decrease. That need not be so. If we have 3D sensels, where the charge accumulation layer is relatively deep, they can accommodate a lot more electrons. If the full well capacity increases to a million electrons then we may have 16+ bits of data.
Title: Re: 16 bit dslr
Post by: ErikKaffehr on February 03, 2014, 12:29:29 pm
Hi,

We don't need a million, 25000 would be fine ;-)

Best regards
Erik

We are assuming that as the sensel sizes go down the full well capacity will decrease. That need not be so. If we have 3D sensels charge accumulation layer is relatively deep, they can accommodate a lot more electrons. If the full well capacity increases to a million electrons then we may have 16+ bit data.
Title: Re: 16 bit dslr
Post by: BJL on February 03, 2014, 12:35:23 pm
We are assuming that as the sensel sizes go down the full well capacity will decrease.
I am not simply assuming with no basis: I cited evidence of a clear pattern, over some years, of roughly constant electrons per unit area of photo-site, i.e. roughly constant well depth. Things might change, but it does seem quite likely that well depth will not increase much.
Title: Re: 16 bit dslr
Post by: Ajoy Roy on February 04, 2014, 08:16:45 am
Similar limitations in conventional CMOS IC manufacturing led to vertical (3D) design, so the same approach may soon percolate into photo sensor design.