Luminous Landscape Forum
Raw & Post Processing, Printing => Digital Image Processing => Topic started by: David Sutton on January 19, 2010, 09:26:05 pm
-
Hi. I've got a couple of really basic questions about digital images that I wonder if anybody can help me with?
Let me see if I've got this right:
A pixel (from picture element) is the smallest element in an image that can be controlled with photo editing software.
A photo's histogram shows its pixel brightness values, from darkest to lightest. So it's a pixel based histogram.
A raw file's histogram would show the number of photons counted, from some to heaps. It would not be pixel based, as there are no pixels yet because the file hasn't been demosaiced. In the same way as there is no “picture” until a film is developed.
If that's okay so far and I'm not confused, questions:
In a camera's sensor (Bayer array), what is the relationship between the number of photosensor elements and the number of pixels in the demosaiced image? Is it one to one?
If not, I'd prefer not to use the word “pixel” to describe a photon counter. What's an accurate word? Photosite? Photon receptor?
When I open a raw file in Lightroom, it's been demosaiced but not rendered. What am I seeing on screen? Is it a jpeg produced by the software in a similar way to the image on the camera lcd after shooting? So am I seeing a pixel based histogram or something else?
Thanks in advance, David
-
Hi,
See comments below.
A major issue you don't discuss is white balancing, which decides the balance of the different channels. This is the main issue with in-camera histograms. Changing the color balance can shift the RGB histograms significantly.
The demosaic process does not affect color, IMHO, but is important regarding sharpness and aliasing.
Best regards
Erik
Hi. I've got a couple of really basic questions about digital images that I wonder if anybody can help me with?
Let me see if I've got this right:
A pixel (from picture element) is the smallest element in an image that can be controlled with photo editing software.
Yes
A photo's histogram shows its pixel brightness values, from darkest to lightest. So it's a pixel based histogram.
Yes
A raw file's histogram would show the number of photons counted, from some to heaps. It would not be pixel based, as there are no pixels yet because the file hasn't been demosaiced. In the same way as there is no “picture” until a film is developed.
I don't quite agree, though you are sort of right. Demosaicing wouldn't affect the histogram significantly. There are three channels, R, G and B, and those are real. It's not photons we measure but numbers, although the numbers relate to photons.
If that's okay so far and I'm not confused, questions:
In a camera's sensor (Bayer array), what is the relationship between the number of photosensor elements and the number of pixels in the demosaiced image? Is it one to one?
Well, 50% of the photosites are green, 25% blue and 25% red. Demosaicing guesses the missing color information for each pixel, so 2/3 of the color information in each pixel is interpolated.
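To make those proportions concrete, here is a small Python/NumPy sketch (purely illustrative, not how any real converter works) that builds an RGGB mosaic and fills in the two missing channels of every pixel from neighbouring photosites:

import numpy as np

# Tiny RGGB Bayer colour filter array: each photosite records only one of
# R, G or B. Codes: 0 = red, 1 = green, 2 = blue.
h, w = 6, 6
cfa = np.empty((h, w), dtype=int)
cfa[0::2, 0::2] = 0   # red
cfa[0::2, 1::2] = 1   # green
cfa[1::2, 0::2] = 1   # green
cfa[1::2, 1::2] = 2   # blue
print([(cfa == c).mean() for c in range(3)])   # [0.25, 0.5, 0.25]

# Simulated raw values: one number per photosite.
rng = np.random.default_rng(0)
raw = rng.integers(0, 4096, size=(h, w)).astype(float)

# Naive demosaic: keep the measured channel, estimate the two missing
# channels of every pixel as the mean of nearby photosites of that colour.
rgb = np.zeros((h, w, 3))
for c in range(3):
    ys, xs = np.nonzero(cfa == c)
    for y in range(h):
        for x in range(w):
            if cfa[y, x] == c:
                rgb[y, x, c] = raw[y, x]                  # measured value
            else:
                d = (ys - y) ** 2 + (xs - x) ** 2         # interpolated value
                rgb[y, x, c] = raw[ys[d == d.min()], xs[d == d.min()]].mean()

# Every output pixel ends up with three channel values, but only one of
# them was actually measured; the other two thirds were interpolated.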
If not, I'd prefer not to use the word “pixel” to describe a photon counter. What's an accurate word? Photosite? Photon receptor?
Photosite sounds fine to me.
When I open a raw file in Lightroom, it's been demosaiced but not rendered. What am I seeing on screen? Is it a jpeg produced by the software in a similar way to the image on the camera lcd after shooting? So am I seeing a pixel based histogram or something else?
It's a preview that has been rendered. I don't know about the histogram.
Thanks in advance, David
-
Hi Erik. Thanks for the reply. Is the following any better?
A photosite measures the light falling on it and this is stored in the raw file as a number. A raw histogram would show the amount of information collected, from little to a lot. It would not be pixel based, as the pixels seen in a demosaiced and rendered image haven't been created yet. But it would be close to a histogram of such an image prior to white balancing.
So if I have this right, each photosite corresponds to a pixel seen in the image on the screen on a 1:1 basis, minus maybe a few at the edge. The colour in each pixel is "guessed" from the information stored in the raw file taken from the corresponding photosite and surrounding ones.
I'm trying to keep my thinking clear. If I start to use the word "pixel" when I mean "photosite", then the above sentences become nonsense. Assuming they aren't to begin with.
Regards, David
-
A pixel (from picture element) is the smallest element in an image that can be controlled with photo editing software.
A photo's histogram shows its pixel brightness values, from darkest to lightest. So it's a pixel based histogram.
Yes, a histogram counts the number of pixels (Y axis) at each brightness value (X axis).
A raw file's histogram would show the number of photons counted, from some to heaps. It would not be pixel based, as there are no pixels yet because the file hasn't been demosaiced.
It's still very feasible to have a raw-based histogram, because these photosites have both a color and a brightness (the photon count), from which one can build three histograms, one for each color.
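For example, a minimal sketch (synthetic 12-bit numbers, not any camera's actual raw format) that builds one histogram per CFA colour straight from the undemosaiced data:

import numpy as np

# Pretend 12-bit raw data on an RGGB Bayer mosaic (synthetic, for illustration).
h, w = 512, 512
rng = np.random.default_rng(1)
raw = rng.normal(800, 200, size=(h, w)).clip(0, 4095).round()

cfa = np.empty((h, w), dtype=int)          # 0 = R, 1 = G, 2 = B
cfa[0::2, 0::2] = 0
cfa[0::2, 1::2] = 1
cfa[1::2, 0::2] = 1
cfa[1::2, 1::2] = 2

# One histogram per colour, taken directly from the undemosaiced data.
for name, c in (("R", 0), ("G", 1), ("B", 2)):
    values = raw[cfa == c]
    hist, _ = np.histogram(values, bins=64, range=(0, 4096))
    print(name, "photosites:", values.size, "peak bin:", hist.argmax())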
In a camera's sensor (Bayer array), what is the relationship between the number of photosensor elements and the number of pixels in the demosaiced image? Is it one to one?
Yes, by default, there is one pixel of the output image per photosite.
There are other ways to reconstruct a picture though, e.g. some cameras have a reduced-resolution mode for high sensitivities where one pixel of the rendered image corresponds to a square of four photosites.
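A quick sketch of that reduced-resolution idea, again purely illustrative (real cameras may bin in the analog domain, or sum rather than average):

import numpy as np

rng = np.random.default_rng(2)
raw = rng.integers(0, 4096, size=(8, 8)).astype(float)   # 8x8 photosites

# Combine each 2x2 block of photosites into a single output pixel.
binned = raw.reshape(4, 2, 4, 2).mean(axis=(1, 3))
print(raw.shape, "photosites ->", binned.shape, "pixels")  # (8, 8) -> (4, 4)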
But it would be close to a histogram of such an image prior to white balancing.
Sort of... Keep in mind that a raw image without white balance does look strange, very green (search this forum or elsewhere for UniWB, which is in a sense similar to no white balance).
It's akin to looking at a color negative, if you see what I mean: very useful to see what margin of adjustment you've got while printing, but not so much to judge the image itself.
When I open a raw file in Lightroom, it's been demosaiced but not rendered.
What you see is a rendered image, with the parameters you specified (i.e. the default parameters after import). If you can see it (without wondering what that mess is), then it has been rendered. A raw image is not very friendly to the human eye.
And the histogram you see in Lightroom is based on the rendered image, so that you can see the effect of the develop parameters on it.
-
Hi. I've got a couple of really basic questions about digital images that I wonder if anybody can help me with?
Let me see if I've got this right:
A pixel (from picture element) is the smallest element in an image that can be controlled with photo editing software.
A photo's histogram shows its pixel brightness values, from darkest to lightest. So it's a pixel based histogram.
A raw file's histogram would show the number of photons counted, from some to heaps. It would not be pixel based, as there are no pixels yet because the file hasn't been demosaiced. In the same way as there is no “picture” until a film is developed.
If that's okay so far and I'm not confused, questions:
In a camera's sensor (Bayer array), what is the relationship between the number of photosensor elements and the number of pixels in the demosaiced image? Is it one to one?
If not, I'd prefer not to use the word “pixel” to describe a photon counter. What's an accurate word? Photosite? Photon receptor?
When I open a raw file in Lightroom, it's been demosaiced but not rendered. What am I seeing on screen? Is it a jpeg produced by the software in a similar way to the image on the camera lcd after shooting? So am I seeing a pixel based histogram or something else?
Thanks in advance, David
Because of the complications you note with a Bayer array sensor, some authors use the term SENSEL to describe the individual elements in a Bayer array. A 12 MB Bayer sensor contains 6M green sensels, 3M blue sensels and 3M red sensels. The demosaiced image would have 12MB pixels and file size would be 36MB since there are 3 color channels.
-
Thank you for your replies. They're most helpful. Perhaps I need to explain in a bit more detail. When I teach music I can draw on some 200 years of tradition. Teachers on my instrument have put a lot of work over those years into what works didactically. When I say “this is a string and here are some ways to get a nice sound from it” I know what I'm doing. But with digital photography we seem to have a lot of people floundering around, and much of the information is only “sort of” right.
For example, I don't usually use the term “raw image” as I doubt there is such a thing. If I can't see it and no one else can, I'd prefer to use the term “raw file”. The data is real, but the “pixels” aren't. My understanding is that most of us will not be able to find software to see the undemosaiced raw file. And my guess is that what I'm seeing on screen has also been rendered, meaning, I suppose, converted to a jpeg or whatever and into a common colour space. In Lightroom the raw file will be in a form of ProPhoto RGB, but what am I seeing on screen? Is it a jpeg generated by the software to represent the file? I realised that I'm not even sure what I'm seeing when I look at an image on screen. It's like going to a concert of experimental music and not being told what I'm going to hear.
Most people with a digital camera are using its software and hardware to convert the raw file into a jpeg, and are unaware that there is that intermediate step to produce their photos. When they join a camera society, it's an uphill battle to get them to work with their raw files, and the lack of an accurate but simple way of describing what's happening doesn't help.
So I'm looking for descriptions of the fundamental processes in digital photography that are in simple words, and where I don't have to come back later and say “well, that was only half right”.
I may not necessarily want to teach this stuff, but I would like to accurately describe what I'm doing.
My beginning step has been to treat the raw data as information, and not as anything concrete. And then to call it an “image” or “photo” once it can be shared, meaning put into a common file format. I'm going to have to think some more about this.
As far as histograms go, what am I looking for in the lcd histogram on the camera? I want to see how far to the right it goes to know something about the signal to noise ratio, and about highlight clipping. And I want to see how far to the left it goes to know something about my shadow clipping. I have a UniWB saved as a custom setting, and I can show someone what it looks like, but I seldom use it, as for most of what I do an approximation is good enough. Blue fungi are the exception. They are complete little b*st*rds as far as clipping in the blue channel goes.
Cheers, David
-
David, I agree a lot with what you say, "RAW file" is certainly better than "RAW image" for most cases.
I think you can learn a lot downloading and playing with Rawnalyze.
http://www.cryptobola.com/PhotoBola/Rawnalyze.htm
Cheers,
Luigi
-
Because of the complications you note with a Bayer array sensor, some authors use the term SENSEL to describe the individual elements in a Bayer array. A 12 MB Bayer sensor contains 6M green sensels, 3M blue sensels and 3M red sensels. The demosaiced image would have 12MB pixels and file size would be 36MB since there are 3 color channels.
I didn't know that. It seems to me that sensel is an unnecessary neologism when we already have photosite and photo receptor and so on. What do you think?
-
David, I agree a lot with what you say, "RAW file" is certainly better than "RAW image" for most cases.
I think you can learn a lot downloading and playing with Rawnalyze.
http://www.cryptobola.com/PhotoBola/Rawnalyze.htm
Cheers,
Luigi
Thanks for reminding me about Rawnalyze. I have it on my computer but have never played with it.
-
Because of the complications you note with a Bayer array sensor, some authors use the term SENSEL to describe the individual elements in a Bayer array. A 12 MB Bayer sensor contains 6M green sensels, 3M blue sensels and 3M red sensels. The demosaiced image would have 12MB pixels and file size would be 36MB since there are 3 color channels.
That's right. I also prefer to use the term Sensel (from SENSor ELement) for the smallest spatially discrete units (representing the capture of either a single color pass band (e.g. Bayer CFA filtered or achromatic), or a stacked construction representing multiple color pass bands (e.g. Foveon)). The 'spatially discrete' part is important because it determines the sampling density, and thus the resolution (in terms of samples per unit distance). The photo-sensitive area of the complete imager chip is best described as sensor array, to differentiate from the individual sensor elements. Using the term sensel also avoids confusion with sloppy use of sensor (array). Sensel also hints at the photovoltaic sensitivity of the electronic circuit.
To avoid confusion with output pixels (the smallest spatial units that make up an image), I try to avoid the use of that term for the sensor elements, which usually require further processing of their data content before they can be reproduced as a color. Sensels are input, pixels are output (as in pixels per inch (PPI)). Sensels can also be electronically combined (AKA binning) before they are output as pixels, so there can be a difference in the number of sensels versus pixels.
Cheers,
Bart
-
The demosaiced image would have 12MB pixels and file size would be 36MB since there are 3 color channels.
Minor clarification: I think you meant "12 megapixels" instead of "12 MB pixels". Also, it may be useful to point out that the file size would only be three times the number of megapixels when using 8 bits per channel, uncompressed. Other bit depths (and/or compression) change the relationship.
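A quick back-of-the-envelope check of that relationship, assuming uncompressed RGB output:

# Uncompressed RGB file size for a 12-megapixel image at different bit depths.
megapixels = 12
channels = 3
for bits_per_channel in (8, 16):
    size_mb = megapixels * channels * bits_per_channel / 8   # millions of bytes
    print(f"{bits_per_channel} bits/channel: ~{size_mb:.0f} MB uncompressed")
# 8 bits/channel -> ~36 MB; 16 bits/channel -> ~72 MB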
-
David,
Asking questions on a web forum is fine, but to augment your understanding, there is of course a ton of material on the internet and in hard copy covering every aspect of the questions you are asking; to start with, the many excellent - and free - tutorials and "understanding" articles on this website, the Reichmann-Schewe videos (for purchase) on Camera Raw and Lightroom, another website called Cambridgeincolour, Jeff Schewe's book on Camera Raw, and on and on. The best way to approach this is to avoid hair-splitting over terminology except where it really matters, and focus on researching the fundamental concepts. That will help you to improve the guidance you give to your camera club colleagues.
Mark
-
Sensels are input, pixels are output
Cool
-
David,
Asking questions on a web forum is fine, but to augment your understanding, there is of course a ton of material on the internet and in hard copy covering every aspect of the questions you are asking; to start with, the many excellent - and free - tutorials and "understanding" articles on this website, the Reichmann-Schewe videos (for purchase) on Camera Raw and Lightroom, another website called Cambridgeincolour, Jeff Schewe's book on Camera Raw, and on and on. The best way to approach this is to avoid hair-splitting over terminology except where it really matters, and focus on researching the fundamental concepts. That will help you to improve the guidance you give to your camera club colleagues.
Mark
Hello Mark. Sometimes I'm a bit slow off the starting line but one thing I did get as soon as I picked up a digital camera was that we had entered a whole new world. Most photographers I meet haven't grasped this at a deep level. I see the camera more and more as a tool for working with information. Looking at my best prints I can see I have visualised the image and created it in Photoshop from information gathered in the field in the form of ones and zeros. Even Photoshop is a bit stuck in the past. (Ha Ha I just typed pasty). For example, dodge and burn. That's what I used to do forty years ago in a darkroom. What's it doing here? Thinking in terms of dodging and burning can limit our creativity and the possibilities of digital technology.
Most of the time I think our use of the camera is like using a Ferrari to do the shopping.
The resources you mention are good, and I have worked through them all with the exception of Jeff Schewe's new book. It's on the “to do” list. If you look at the two questions I asked, I don't believe they are addressed in any of this material. Though I could have easily missed it. For me this discussion is not hair splitting but lies at the heart of developing a photographic vocabulary for the 21st century, and avoiding getting mired in old thinking. I am not trying to tell others how to think, nor am I interested in evangelising. But if my thinking is not clear how can I sharpen my skills?
David
-
The world is new and the world is old. Some things change fundamentally, others marginally and others not at all. Dodging and burning remains as valid in digital image making as it is in the chemical darkroom. In fact no technique is invalid as long as it delivers the results you are looking for and doesn't destroy the planet. And Photoshop is not at all stuck in the past. It is going into version CS5 and the people involved in developing wonderful new image editing tools certainly don't see themselves as mired in the past. They are working on the frontier of mathematics and programming techniques to bring us new and more efficient ways of doing the things we wish to do with our images.
I do believe the questions you are asking are adequately addressed in existing references for practical purposes, but I wasn't intending by that to throw any cold water on the discussion. I just see it wandering into semantics that are not central to a basic understanding of the fundamentals which matter to getting optimum results from a digital imaging workflow. But of course it should just carry on as people wish - it's a free world - at least here.
I wish you well in your research.
-
Minor clarification: I think you meant "12 megapixels" instead of "12 MB pixels". Also, it may be useful to point out that the file size would only be three times the number of megapixels when using 8 bits per channel, uncompressed. Other bit depths (and/or compression) change the relationship.
Correct. Thanks!
-
David, it seems you are interested in the inner workings of a digital camera. I strongly recommend you learn to use DCRAW (http://www.guillermoluijk.com/tutorial/dcraw/index_en.htm), a command-line RAW developer by David Coffin (http://cybercom.net/~dcoffin/dcraw/) that will let you do and learn things no other RAW developer will.
With DCRAW you will be able to:
- Extract the embedded JPEG files found in the RAW file
- Extract and visualize the RAW data in the form of a Bayer pattern (http://www.guillermoluijk.com/article/virtualraw/bayer.gif)
- Get rid of all those clandestine transformations applied in commercial RAW developers (exposure and ISO correction, noise reduction, sharpening, hot pixels elimination,...)
- Plot true RAW histograms (http://www.guillermoluijk.com/tutorial/dcraw/histsat.gif)
- Subtract dark frames in the RAW domain
- Learn and control, one by one, all the steps involved in RAW development (a schematic sketch follows below):
* black and saturation points of RAW files
* white balance in terms of its genuine linear implementation (forget about Temp/Tint models)
* demosaicing algorithms
* highlight strategies for neutral clipped areas
* colour profile conversions
Even the source code (http://cybercom.net/~dcoffin/dcraw/dcraw.c) is available in case you want to learn some of the steps in depth from an implementation point of view.
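As promised above, here is that schematic sketch in Python, with made-up numbers; it is not DCRAW's code, only the shape of the first, linear steps of raw development:

import numpy as np

def develop_linear(raw, black, saturation, wb_gains, cfa):
    """Schematic sketch of the first, linear steps of raw development.
    Numbers and layout are illustrative, not any camera's real values."""
    # 1. Black and saturation points: map raw numbers to 0..1 linear values.
    linear = (raw.astype(float) - black) / (saturation - black)
    linear = linear.clip(0.0, 1.0)
    # 2. White balance as per-channel multipliers on the mosaic
    #    (the genuine linear implementation, not a Temp/Tint model).
    for channel, gain in enumerate(wb_gains):
        linear[cfa == channel] *= gain
    # 3..5. Demosaicing, highlight handling for clipped neutrals, and the
    #       colour profile conversion (a 3x3 matrix) would follow from here.
    return linear

# Tiny synthetic RGGB mosaic: 0 = R, 1 = G, 2 = B.
cfa = np.array([[0, 1, 0, 1], [1, 2, 1, 2], [0, 1, 0, 1], [1, 2, 1, 2]])
rng = np.random.default_rng(3)
raw = rng.integers(200, 16000, size=(4, 4))
print(develop_linear(raw, black=128, saturation=16383,
                     wb_gains=(2.0, 1.0, 1.5), cfa=cfa).round(3))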
Regards
-
David, it seems you are interested in the inner workings of a digital camera. I strongly recommend you learn to use DCRAW (http://www.guillermoluijk.com/tutorial/dcraw/index_en.htm), a command-line RAW developer by David Coffin (http://cybercom.net/~dcoffin/dcraw/) that will let you do and learn things no other RAW developer will.
DCRaw is excellent, but it does have a command line interface, which is inconvenient for many. Iris (http://www.astrosurf.com/buil/us/iris/iris.htm) is a freeware program with many of the same features. It has not been updated for over a year and I hope Christian has not abandoned the project.
-
Hi. I've got a couple of really basic questions about digital images that I wonder if anybody can help me with?
Let me see if I've got this right:
A pixel (from picture element) is the smallest element in an image that can be controlled with photo editing software.
A photo's histogram shows its pixel brightness values, from darkest to lightest. So it's a pixel based histogram.
A raw file's histogram would show the number of photons counted, from some to heaps. It would not be pixel based, as there are no pixels yet because the file hasn't been demosaiced. In the same way as there is no “picture” until a film is developed.
If that's okay so far and I'm not confused, questions:
In a camera's sensor (Bayer array), what is the relationship between the number of photosensor elements and the number of pixels in the demosaiced image? Is it one to one?
If not, I'd prefer not to use the word “pixel” to describe a photon counter. What's an accurate word? Photosite? Photon receptor?
When I open a raw file in Lightroom, it's been demosaiced but not rendered. What am I seeing on screen? Is it a jpeg produced by the software in a similar way to the image on the camera lcd after shooting? So am I seeing a pixel based histogram or something else?
Thanks in advance, David
Hi David,
given the many responses you have had here, how has your view on the topic at hand changed, if at all? Are you able to define the elements on a sensor that capture the light, and their relation to how they are presented in an image program / raw converter?
When you look at a histogram - pixel or photo-sensor - all it's telling you is the distribution of the pixels in terms of density within 0-255 (8-bit) or the equivalent in 16-bit (an RGB histogram shows how each color is distributed and, if clipping has occurred, which colors are clipped), which in turn can help you address the issue or live with it. So, if you look at a histogram and see something else, then please tell, cos I would love to know.
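As a concrete illustration (made-up 8-bit data, not tied to any particular converter), checking each channel of an RGB image for clipping could look something like this:

import numpy as np

rng = np.random.default_rng(4)
# Synthetic 8-bit RGB image with a deliberately blown red channel.
img = rng.integers(0, 256, size=(100, 100, 3))
img[:20, :, 0] = 255

for name, channel in zip("RGB", range(3)):
    values = img[..., channel]
    clipped = np.mean(values == 255) * 100
    print(f"{name}: {clipped:.1f}% of pixels at 255 "
          f"({'clipping likely' if clipped > 1 else 'no significant clipping'})")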
Please enlighten me, I am just as curious as you
thanks
Henrik
-
DCRaw is excellent, but it does have a command line interface, which is inconvenient for many. Iris (http://www.astrosurf.com/buil/us/iris/iris.htm) is a freeware program with many of the same features. It has not been updated for over a year and I hope Christian has not abandoned the project.
Thanks for this link, and thanks too Guillermo for mentioning DCRaw. I'm travelling at present with a friend who uses UFRaw, which is a front end for DCRaw. But I don't think it has all DCRaw's functionality. Although I have been avoiding the command line, I see no reason I can't learn the syntax from DCRaw's manual pages.
Regards
-
Hi David,
given the many responses you have had here, how has your view on the topic at hand changed, if at all? Are you able to define the elements on a sensor that capture the light, and their relation to how they are presented in an image program / raw converter?
When you look at a histogram - pixel or photo-sensor - all it's telling you is the distribution of the pixels in terms of density within 0-255 (8-bit) or the equivalent in 16-bit (an RGB histogram shows how each color is distributed and, if clipping has occurred, which colors are clipped), which in turn can help you address the issue or live with it. So, if you look at a histogram and see something else, then please tell, cos I would love to know.
Please enlighten me, I am just as curious as you
thanks
Henrik
Hi Henrik. I'm currently doing one last photo trip before the end of my summer holidays. At the end of each day I've been plugging the vodem into my laptop and reading and replying to these posts. Some technology is both cheap and really cool.
While on the road (or water) I've been discussing the responses with my travelling companion and I will be very interested to see the resulting images on a large screen when I get home tomorrow, as my ideas about image making shifted today. I'll post then.
Best wishes, David
-
Hi David,
given the many responses you have had here, how has your view on the topic at hand changed, if at all? Are you able to define the elements on a sensor that capture the light, and their relation to how they are presented in an image program / raw converter?
When you look at a histogram - pixel or photo-sensor - all it's telling you is the distribution of the pixels in terms of density within 0-255 (8-bit) or the equivalent in 16-bit (an RGB histogram shows how each color is distributed and, if clipping has occurred, which colors are clipped), which in turn can help you address the issue or live with it. So, if you look at a histogram and see something else, then please tell, cos I would love to know.
Please enlighten me, I am just as curious as you
thanks
Henrik
Hi Henrik. Wow, your questions put me on the spot a little, which is probably a good thing, so here is my attempt at an answer. What I have learned is not new for a lot of people, but photography is such a personal thing that what I do with this information may result in something quite different from someone else's approach. The images we make say a lot about us and often reveal as much of the photographer as the photographed. I use a camera for my own pleasure and as a means of exercising my imagination and skills, and after printing the images are usually shown once and then put in a drawer.
Okay, first the larger picture. I'm even more convinced the words we use to describe what we are doing are powerful tools that limit or expand our horizons. Thinking about this while driving to the next shoot, and how I choose to think about image capture, I also realised that what I choose to remember in my life will define what sort of life I have. Good and bad things happen and we learn from both, but if I have a life filled with happy memories it will be because I have chosen my attitude to external events and chosen what to recall.
Good, that's out of the way. Next, this discussion has defined the difference for me between photos and images. A photo for me is what happens when you are walking along with a camera and think “That looks interesting”. Click. When I look at the histogram I am treating it as a measure of exposure and leave it at that. Images are what happens when I have a mental picture and I want to turn it into a print (“what I saw” versus “what I see” I guess). I go looking for source material (input) to turn into pixels. When I look at the histogram I try to see what parts of it correspond to what is in front of me. In the bits of interest, what is the signal to noise ratio? What has clipped in the shadows and highlights, and does it matter? Knowing the histogram is of a jpeg generated from the raw file, I want to know how close that is to “reality”. On my camera, if I set the colour space to Adobe RGB I get a fair idea of where the highlights will clip and the information in that channel will be lost. If I set it to sRGB it's better for showing shadow clipping. And there is UniWB as a custom setting if I need it.
I've tried to find some examples and how I think about them now. Here is a photo of a duck:
[attachment=19711:186NoTextVFAPRel.jpg]
I wanted to show the determination in this little fellow, so I cropped wide and sharpened the water to show him pushing against something strong, and sharpened his eyes to show he wasn't fazed by that. Printed on Epson Velvet Fine Art paper to give the water more substance and texture. Not much else really.
Here is an image of a lighthouse:
[attachment=19712:_MG_6838...nd_uprez.jpg]
Lighthouses really interest me. One of the first photos I took over 50 years ago when I was about 6 years old was a lighthouse. I still have the negative. I am slowly doing a series and want to show in the final prints something of the reason they have ended up as such strong symbols. I wanted to show one in relation to the size of the surrounding ocean, but I haven't got any vast ocean bits so here I cropped to show a big sky instead. It was taken in the Western Isles off Scotland, which in my imagination is a place of mystery. I wanted to look at it and ask if I really went there or dreamed it, so to get that look I stripped out a lot of the information from the image by heavy cropping and printing large on Velvet Fine Art paper so the sky looked like it had been brushed on with watercolour. There was a little sharpening on the lighthouse to give it more reality. Looking at the histogram, I wanted to shift it to the right to maximise my signal to noise ratio, so when I was left with just a small amount of information in the sky, it wouldn't be mainly splodgy noise. I didn't care how much of the histogram showed clipping, as long as it wasn't in the bit of the image I wanted to use. Some guesswork here so I did take a few shots to be sure.
Finally here are some images of sheep with trees taken from a series motivated by my dislike of how the female form is Photoshopped in fashion photographs:
[attachment=19713:_MG_2698Border.jpg][attachment=19714:_MG_3345...elBorder.jpg]
Looking at the histograms I wasn't too worried about shadow noise as I was going to send a lot of the print to black and would probably end up adding noise anyway. I was most concerned by the bits of sky and where the right side of the histogram was sitting. I hate having bits of sky in an image as they can go to white in a print and it shows, but I wanted to have information in the wool.
I find symmetry in an image disturbing and often repellent and I don't like it, though I guess most people don't agree. So if I want to unsettle myself I put some symmetrical bits in a print. I think sheep and these trees are really spooky, so I made the whole thing almost symmetrical and painted in light and shadow. In the colour print I painted in some desaturation as well. They were printed on Harman gloss, which I use when I want to make a print look “hyper real”.
What I want to know now is how much does shifting the histogram to the right during image capture affect the signal to noise ratio on my camera, and how much extra information have I recorded on my camera? I intend to shoot a scene with the histogram just short of clipping, then repeat with it touching the three quarter mark and then touching the half way mark, equalise the exposure and then do some aggressive editing in Photoshop to see which version falls apart first and see if it is visible in print. I want to know not only what works with equipment, but also at what point is it likely to fail. I think I'll need to take the above advice and learn DCRaw and rawanalyse, so this is a project for later in the year.
David
-
What I want to know now is how much does shifting the histogram to the right during image capture affect the signal to noise ratio on my camera, and how much extra information have I recorded on my camera?
It depends on the tonal level you are looking at. For example, say we're looking at a tonal level that is 3 stops below saturation in the first raw file at ISO 100. If you increase exposure by 2 stops, that tonal level will now be 1 stop below saturation, and the SNR will have doubled. (A one stop increase in exposure increases SNR by about 41.4%.) If you leave exposure alone and increase ISO by 2 stops (ISO 400), the tonal level will now be 1 stop below saturation, but there is no change whatsoever in SNR. Both of these facts are because the only noise in that tonal level is photon shot noise.
As another example, say we're looking at a tonal level that is 8 stops below saturation at ISO 1600. Here the total noise power will be dominated by read noise instead of photon shot noise. If you increase exposure by 2 stops, the tonal level will now be 6 stops below saturation, and the SNR will have improved a factor of four (almost). If you increased exposure by two stops and decreased ISO by the same amount, then the tonal level and histogram would remain the same, but the SNR would still improve; but this time it would be a bit less improvement with certain cameras, because some CMOS sensors with analog gain have much less read noise (relative to any given tonal level) at higher ISO.
Hope that helps.
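If it helps to see it numerically, here is a small simulation of those cases using idealised photon counts (Poisson shot noise plus optional Gaussian read noise; no attempt to model a real sensor):

import numpy as np

rng = np.random.default_rng(5)
samples = 1_000_000

def snr_for_mean_photons(mean_photons, read_noise_e=0.0):
    """SNR of a uniform patch: Poisson shot noise plus optional Gaussian read noise."""
    signal = rng.poisson(mean_photons, samples).astype(float)
    if read_noise_e > 0:
        signal += rng.normal(0.0, read_noise_e, samples)
    return signal.mean() / signal.std()

# Shot-noise-limited case: 2 stops more exposure -> 4x the photons -> 2x the SNR.
base = snr_for_mean_photons(1000)
plus2 = snr_for_mean_photons(4000)
print(f"SNR x{plus2 / base:.2f} for +2 stops (expect ~2.0)")

# One stop more exposure -> sqrt(2) ~ 1.414, i.e. about a 41% improvement.
plus1 = snr_for_mean_photons(2000)
print(f"SNR x{plus1 / base:.3f} for +1 stop (expect ~1.414)")

# Read-noise-dominated case (deep shadows): +2 stops of exposure improves SNR
# by nearly 4x, because the read noise stays roughly constant.
dark = snr_for_mean_photons(10, read_noise_e=15)
dark_plus2 = snr_for_mean_photons(40, read_noise_e=15)
print(f"SNR x{dark_plus2 / dark:.2f} for +2 stops in deep shadow (approaches 4)")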
-
What I want to know now is how much does shifting the histogram to the right during image capture affect the signal to noise ratio on my camera, and how much extra information have I recorded on my camera? I intend to shoot a scene with the histogram just short of clipping, then repeat with it touching the three quarter mark and then touching the half way mark, equalise the exposure and then do some aggressive editing in Photoshop to see which version falls apart first and see if it is visible in print. I want to know not only what works with equipment, but also at what point is it likely to fail. I think I'll need to take the above advice and learn DCRaw and rawanalyse, so this is a project for later in the year.
As far as shot noise goes (and shot noise predominates in all but the extreme shadows, where read noise predominates), it is the number of photons captured and not the appearance of the histogram that counts. If the histogram is to the left and you move it to the right by increasing ISO, you have not collected any more photons and the shot noise will be the same. However, if you can't increase actual exposure because of shutter speed or depth of field considerations, then upping the ISO will decrease read noise up to a certain ISO, at which point there are diminishing returns. Beyond that point, you merely decrease highlight headroom by increasing ISO as Emil Martinec (http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/noise-p3.html#ETTR) explains in detail. That point of diminishing returns varies with the camera. For the Nikon D3 it is at about ISO 800, but with the D3x, it occurs at about twice base ISO.
If you don't mind having a dark preview on the LCD in this case, it is better to increase "exposure" in the raw converter and avoid blowing the highlights.
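To illustrate the diminishing returns, here is a sketch using purely hypothetical read-noise figures (the real values vary a lot from camera to camera):

import numpy as np

# Hypothetical input-referred read noise (in electrons) versus ISO for a CMOS
# camera with analog gain; the actual values are made up for illustration.
read_noise_e = {100: 25, 200: 15, 400: 9, 800: 5, 1600: 4.5, 3200: 4.3}

photons = 50  # a fixed, dim exposure (shutter speed and aperture held constant)
for iso, rn in read_noise_e.items():
    snr = photons / np.sqrt(photons + rn ** 2)   # shot noise + read noise
    print(f"ISO {iso:>4}: shadow SNR ~ {snr:.2f}")
# With these numbers the SNR improves noticeably up to around ISO 800, then
# flattens out: beyond that, raising ISO mostly just costs highlight headroom.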
-
It depends on the tonal level you are looking at. For example, say we're looking at a tonal level that is 3 stops below saturation in the first raw file at ISO 100. If you increase exposure by 2 stops, that tonal level will now be 1 stop below saturation, and the SNR will have doubled. (A one stop increase in exposure increases SNR by about 41.4%.) If you leave exposure alone and increase ISO by 2 stops (ISO 400), the tonal level will now be 1 stop below saturation, but there is no change whatsoever in SNR. Both of these facts are because the only noise in that tonal level is photon shot noise.
As another example, say we're looking at a tonal level that is 8 stops below saturation at ISO 1600. Here the total noise power will be dominated by read noise instead of photon shot noise. If you increase exposure by 2 stops, the tonal level will now be 6 stops below saturation, and the SNR will have improved a factor of four (almost). If you increased exposure by two stops and decreased ISO by the same amount, then the tonal level and histogram would remain the same, but the SNR would still improve; but this time it would be a bit less improvement with certain cameras, because some CMOS sensors with analog gain have much less read noise (relative to any given tonal level) at higher ISO.
Hope that helps.
Yes it does help. I understand the signal to noise ratio is proportional to the signal power (S/N α P if I remember my maths). Most noise I'm used to seeing is read noise. I think I've just learned that shot noise is caused by the quantisation of light, and thus random variations in the number of photons arriving at the receptor at any given moment. And the shot noise is also inversely proportional to the signal frequency (S/N α 1/f). As the frequency goes up, the photon energy increases and the number of photons arriving at any moment for that frequency decreases. So the shot noise will increase as the histogram is moved to the right. I think I've got that right. I don't recall seeing it in print. I just had a look at a blue sky from a raw file at about 3:1 and I can clearly see noise in there around the ¾ tones. That must be it.
Edit: bjanes, your post got in ahead of me. Thanks for the clarification.